Similar Articles
 20 similar articles found (search time: 31 ms)
1.

Background

Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems.

Results

In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters.

Conclusion

The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out on the constrained optimization problem and yield realistic model parameters that are more likely to hold up in extrapolations with the model.
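
As a rough illustration of the idea (not the authors' code or model), the sketch below fits the rate constants of a hypothetical two-step pathway to transient data while imposing a steady-state flux-balance equality constraint; all names and numbers are invented for the example.

```python
# Minimal sketch (not the authors' code): fit the rate constants of a toy
# two-step pathway X0 -> X1 -> out to transient data, while constraining the
# estimates so that influx and efflux balance at the observed steady state.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

x0_level = 2.0      # hypothetical constant upstream metabolite concentration
x1_steady = 1.5     # hypothetical measured steady-state level of X1

def simulate(k1, k2, t_eval, x1_init=0.0):
    # dX1/dt = k1*X0 - k2*X1 (simple mass-action toy model)
    rhs = lambda t, x: [k1 * x0_level - k2 * x[0]]
    return solve_ivp(rhs, (t_eval[0], t_eval[-1]), [x1_init], t_eval=t_eval).y[0]

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 5.0, 20)
y_obs = simulate(0.8, 1.0, t_obs) + rng.normal(0.0, 0.02, t_obs.size)  # fake data

def sse(theta):
    return np.sum((simulate(theta[0], theta[1], t_obs) - y_obs) ** 2)

# steady-state constraint: production equals consumption at the measured steady state
balance = {"type": "eq", "fun": lambda th: th[0] * x0_level - th[1] * x1_steady}

fit = minimize(sse, x0=[0.5, 0.5], bounds=[(1e-6, 10.0)] * 2,
               constraints=[balance], method="SLSQP")
print("constrained estimates (k1, k2):", fit.x)
```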

2.
A major problem for the identification of metabolic network models is parameter identifiability, that is, the possibility to unambiguously infer the parameter values from the data. Identifiability problems may be due to the structure of the model, in particular implicit dependencies between the parameters, or to limitations in the quantity and quality of the available data. We address the detection and resolution of identifiability problems for a class of pseudo-linear models of metabolism, so-called linlog models. Linlog models have the advantage that parameter estimation reduces to linear or orthogonal regression, which facilitates the analysis of identifiability. We develop precise definitions of structural and practical identifiability, and clarify the fundamental relations between these concepts. In addition, we use singular value decomposition to detect identifiability problems and reduce the model to an identifiable approximation by a principal component analysis approach. The criterion is adapted to real data, which are frequently scarce, incomplete, and noisy. The test of the criterion on a model with simulated data shows that it is capable of correctly identifying the principal components of the data vector. The application to a state-of-the-art dataset on central carbon metabolism in Escherichia coli yields the surprising result that only 4 out of 31 reactions, and 37 out of 100 parameters, are identifiable. This underlines the practical importance of identifiability analysis and model reduction in the modeling of large-scale metabolic networks. Although our approach has been developed in the context of linlog models, it carries over to other pseudo-linear models, such as generalized mass-action (power-law) models. Moreover, it provides useful hints for the identifiability analysis of more general classes of nonlinear models of metabolism.
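
The SVD-based identifiability check can be sketched for an ordinary linear regression, which is the form that linlog estimation reduces to; the data and threshold below are assumptions, not the E. coli dataset.

```python
# Sketch (assumed data, not the E. coli dataset): detect identifiability problems
# in a linear regression y = X @ theta via the SVD of X, and keep only the
# well-determined principal directions.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 30, 6
X = rng.normal(size=(n_obs, n_par))
X[:, 5] = X[:, 0] + 1e-8 * rng.normal(size=n_obs)          # nearly collinear column
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2, 0.4]) + 0.05 * rng.normal(size=n_obs)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
tol = s.max() * 1e-6                   # assumed cut-off for "practically identifiable"
n_ident = int(np.sum(s > tol))
print(f"{n_ident} of {n_par} parameter combinations are identifiable")

# Reduced estimate: truncated pseudo-inverse, i.e. a principal-component solution
theta_red = Vt[:n_ident].T @ np.diag(1.0 / s[:n_ident]) @ U[:, :n_ident].T @ y
print("reduced estimate:", np.round(theta_red, 3))
```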

3.

Background

The identification of copy number aberration in the human genome is an important area in cancer research. We develop a model for determining genomic copy numbers using high-density single nucleotide polymorphism genotyping microarrays. The method is based on a Bayesian spatial normal mixture model with an unknown number of components corresponding to true copy numbers. A reversible jump Markov chain Monte Carlo algorithm is used to implement the model and perform posterior inference.

Results

The performance of the algorithm is examined on both simulated and real cancer data, and it is compared with the popular CNAG algorithm for copy number detection.

Conclusions

We demonstrate that our Bayesian mixture model performs at least as well as the hidden Markov model based CNAG algorithm and in certain cases does better. One of the added advantages of our method is the flexibility of modeling normal cell contamination in tumor samples.
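
A much-simplified stand-in for the model described above (a fixed number of components, no spatial dependence, and no reversible jump moves) is an ordinary normal mixture fitted to simulated log2-ratio intensities:

```python
# Simplified stand-in: a 3-component normal mixture fitted to simulated
# log2-ratio intensities; component means act as crude copy-number classes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
signal = np.concatenate([rng.normal(-0.6, 0.1, 300),    # hypothetical copy number 1
                         rng.normal(0.0, 0.1, 1000),    # copy number 2
                         rng.normal(0.45, 0.1, 200)])   # copy number 3
signal = signal.reshape(-1, 1)

gm = GaussianMixture(n_components=3, random_state=0).fit(signal)
print("component means:", np.sort(gm.means_.ravel()).round(2))
print("class of the first 10 probes:", gm.predict(signal[:10]))
```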

4.

Background

During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles for simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework.

Results

In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques using two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework yields more biologically meaningful results.

Conclusions

The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions that were comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.

5.

Introduction

Virtually all existing expectation-maximization (EM) algorithms for quantitative trait locus (QTL) mapping overlook the covariance structure of genetic effects, even though this information can help enhance the robustness of model-based inferences.

Results

Here, we propose fast EM and pseudo-EM-based procedures for Bayesian shrinkage analysis of QTLs, designed to accommodate the posterior covariance structure of genetic effects through a block-updating scheme, that is, by updating all genetic effects simultaneously over many cycles of iterations.

Conclusion

Simulation results based on computer-generated and real-world marker data demonstrated the ability of our method to swiftly produce sensible results regarding the phenotype-to-genotype association. Our new method provides a robust and remarkably fast alternative to full Bayesian estimation in high-dimensional models where the computational burden associated with Markov chain Monte Carlo simulation is often unwieldy. The R code used to fit the model to the data is provided in the online supplementary material.

6.
Knowledge of the genetic variation of key economic traits in Eucalyptus globulus under cold conditions is crucial to the genetic improvement of environmental tolerances and other economic traits. A Bayesian analysis of genetic parameters for quantitative traits was carried out in 37 E. globulus open-pollinated families under cold conditions in southern Chile. The trial is located in the Andean foothills, in the Province of Bío-Bío. The Bayesian approach was performed using a Gibbs sampling algorithm. Multi-trait linear and threshold models were fitted to phenotypic data (growth traits, survival, and stem straightness). Fifteen years after planting, height, diameter at breast height, and stem volume were found to be weakly to moderately heritable, with Bayesian credible intervals (probability of 90 %): $\widehat{h}^2$ = 0.009–0.102, $\widehat{h}^2$ = 0.031–0.185, and $\widehat{h}^2$ = 0.045–0.205, respectively. Stem straightness was found to be weakly to moderately heritable, ranging from 0.032 to 0.208 (Bayesian 90 % credible interval); posterior mode $\widehat{h}^2$ = 0.091. Tree survival at age 15 years was high in the trial (84.8 %), with heritability values ranging from 0.072 to 0.157. Survival was not significantly genetically correlated with growth or stem straightness. Stem volume had the highest predicted genetic gains, ranging from 17.9 to 23.7 % (selection rates of 15.8 and 8.3 %, respectively). The results of this study confirm the potential for selective breeding of this eucalypt in areas of southern Chile where cold is a significant constraint.

7.

Background

It is a daunting task to identify all the metabolic pathways of brain energy metabolism and develop a dynamic simulation environment that will cover a time scale ranging from seconds to hours. To simplify this task and make it more practicable, we undertook stoichiometric modeling of brain energy metabolism with the major aim of including the main interacting pathways in and between astrocytes and neurons.

Model

The constructed model includes central metabolism (glycolysis, pentose phosphate pathway, TCA cycle), lipid metabolism, reactive oxygen species (ROS) detoxification, amino acid metabolism (synthesis and catabolism), the well-known glutamate-glutamine cycle, other coupling reactions between astrocytes and neurons, and neurotransmitter metabolism. This is, to our knowledge, the most comprehensive attempt at stoichiometric modeling of brain metabolism to date in terms of its coverage of a wide range of metabolic pathways. We then attempted to model the basal physiological behaviour and hypoxic behaviour of the brain cells where astrocytes and neurons are tightly coupled.

Results

The reconstructed stoichiometric reaction model included 217 reactions (184 internal, 33 exchange) and 216 metabolites (183 internal, 33 external) distributed in and between astrocytes and neurons. Flux balance analysis (FBA) techniques were applied to the reconstructed model to elucidate the underlying cellular principles of neuron-astrocyte coupling. Simulation of resting conditions under the constraints of maximization of glutamate/glutamine/GABA cycle fluxes between the two cell types, with subsequent minimization of the Euclidean norm of fluxes, resulted in a flux distribution in accordance with literature-based findings. As a further validation of our model, the effect of oxygen deprivation (hypoxia) on fluxes was simulated using an FBA-derivative approach known as minimization of metabolic adjustment (MOMA). The results show the power of the constructed model to simulate disease behaviour at the flux level, and its potential to analyze cellular metabolic behaviour in silico.

Conclusion

The predictive power of the constructed model for the key flux distributions, especially central carbon metabolism and glutamate-glutamine cycle fluxes, and its application to hypoxia is promising. The resultant acceptable predictions strengthen the power of such stoichiometric models in the analysis of mammalian cell metabolism.
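
The core FBA computation is a linear program: maximize an objective flux subject to the steady-state constraint S·v = 0 and flux bounds. A toy three-reaction sketch (not the 217-reaction brain model) might look like this:

```python
# Toy FBA: maximize one flux subject to the steady-state constraint S @ v = 0
# and flux bounds. Three reactions, one metabolite; all numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0, -1.0]])        # rows: metabolites, columns: reactions
bounds = [(0, 10), (0, 10), (0, 10)]     # hypothetical lower/upper flux bounds
c = np.array([0.0, 0.0, -1.0])           # maximize v3 (linprog minimizes, so negate)

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal flux distribution (v1, v2, v3):", res.x)
```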

8.

Background

Given the complex mechanisms underlying biochemical processes, systems biology researchers tend to build increasingly large computational models. However, dealing with complex systems entails a variety of problems, e.g. difficult intuitive understanding, a variety of time scales, or non-identifiable parameters. Therefore, methods are needed that, at least semi-automatically, help to elucidate how the complexity of a model can be reduced such that important behavior is maintained and the predictive capacity of the model is increased. The results should be easily accessible and interpretable. In the best case, such methods may also provide insight into fundamental biochemical mechanisms.

Results

We have developed a strategy based on the Computational Singular Perturbation (CSP) method which can be used to perform a "biochemically-driven" model reduction of even large and complex kinetic ODE systems. We provide an implementation of the original CSP algorithm in COPASI (a COmplex PAthway SImulator) and apply the strategy to two example models of differing degrees of complexity: a simple one-enzyme system and a full-scale model of yeast glycolysis.

Conclusion

The results show the usefulness of the method for model simplification purposes as well as for analyzing fundamental biochemical mechanisms. COPASI is freely available at http://www.copasi.org.

9.

Background

With the growing abundance of microarray data, statistical methods are increasingly needed to integrate results across studies. Two common approaches for meta-analysis of microarrays include either combining gene expression measures across studies or combining summaries such as p-values, probabilities or ranks. Here, we compare two Bayesian meta-analysis models that are analogous to these methods.

Results

Two Bayesian meta-analysis models for microarray data have recently been introduced. The first model combines standardized gene expression measures across studies into an overall mean, accounting for inter-study variability, while the second combines probabilities of differential expression without combining expression values. Both models produce the gene-specific posterior probability of differential expression, which is the basis for inference. Since the standardized expression integration model includes inter-study variability, it may improve the accuracy of results compared with the probability integration model. However, due to the small number of studies typical in microarray meta-analyses, the variability between studies is challenging to estimate. The probability integration model eliminates the need to model variability between studies, and thus its implementation is more straightforward. We found in simulations of two and five studies that combining probabilities outperformed combining standardized gene expression measures for three comparison values: the percent of true discovered genes in meta-analysis versus individual studies, the percent of true genes omitted in meta-analysis versus separate studies, and the number of true discovered genes for fixed levels of Bayesian false discovery. We identified similar results when pooling two independent studies of Bacillus subtilis. We assumed that each study was produced from the same microarray platform with only two conditions, a treatment and a control, and that the data sets were pre-scaled.

Conclusion

The Bayesian meta-analysis model that combines probabilities across studies does not aggregate gene expression measures, and thus an inter-study variability parameter is not included in the model. This results in a simpler modeling approach than aggregating expression measures, which accounts for variability across studies. For our data sets, the probability integration model identified more true discovered genes and fewer true omitted genes than combining expression measures.

10.

Background

Whole genome duplication (WGD) occurs widely in angiosperm evolution. It raises the intriguing question of how interacting networks of genes cope with this dramatic evolutionary event.

Results

In a study of the Arabidopsis metabolic network, we assigned each enzyme (node) topological centrality measures (in-degree, out-degree, and betweenness) to quantify its centrality in the network. The Arabidopsis metabolic network is highly modular and is separated into 11 interconnected modules, which correspond well to the functional metabolic pathways. The enzymes with higher in- and out-degree and betweenness (defined as hub and bottleneck enzymes, respectively) tend to be more conserved and preferentially retain homeologs after WGD. Moreover, the simultaneous retention of homeologs encoding enzymes which catalyze consecutive steps in a pathway is highly favored and easily achieved, and enzyme-enzyme interactions contribute to the retention of one-third of WGD enzymes.

Conclusions

Our analyses indicate that the hub and bottleneck enzymes of the metabolic network benefit greatly from WGD, and that this event confers clear evolutionary advantages in adaptation to different environments.
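
The centralities referred to above can be computed with standard graph tooling; the sketch below uses a toy directed enzyme graph (hypothetical nodes and edges, not the Arabidopsis network):

```python
# Toy directed enzyme graph (hypothetical edges): compute in-degree, out-degree
# and betweenness; "hubs" have high total degree, "bottlenecks" high betweenness.
import networkx as nx

G = nx.DiGraph([("E1", "E2"), ("E2", "E3"), ("E2", "E4"), ("E4", "E5"), ("E3", "E5")])

in_deg = dict(G.in_degree())
out_deg = dict(G.out_degree())
betweenness = nx.betweenness_centrality(G)

for node in G.nodes:
    print(node, "degree:", in_deg[node] + out_deg[node],
          "betweenness:", round(betweenness[node], 3))
```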

11.

Background

LASSO is a penalized regression method that facilitates model fitting in situations where there are as many explanatory variables as observations, or even more, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are also relevant in other contexts where LASSO is used for variable selection.

Results

We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
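
The "scale space" idea of scanning a whole range of fixed tuning parameters can be sketched with the classical (non-Bayesian) LASSO path as a stand-in; the data, grid, and thresholds below are illustrative assumptions:

```python
# Classical LASSO path as a stand-in for the scale-space view: effect estimates
# over a whole grid of fixed tuning parameters. Data and thresholds are invented.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[3, 17]] = [1.0, -0.8]                       # two "true" associations
y = X @ beta + rng.normal(scale=0.5, size=n)

grid = np.logspace(-3, 0, 30)                      # range of fixed tuning parameters
alphas, coefs, _ = lasso_path(X, y, alphas=grid)   # coefs: (p, n_alphas)

for j in np.flatnonzero(np.abs(coefs).max(axis=1) > 0.2):
    active = np.flatnonzero(np.abs(coefs[j]) > 1e-8)
    print(f"marker {j} is selected for alpha <= {alphas[active].max():.3g}")
```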

12.
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-Dimensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.
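
A minimal example of the kind of fitting involved, assuming a toy one-metabolite S-system rate law rather than the PENDISC implementation or the Arabidopsis model:

```python
# Toy one-metabolite S-system fit (not the PENDISC implementation):
# dX/dt = alpha * X0**g - beta * X**h, with a constant precursor X0.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

x0_level = 2.0                                     # hypothetical fixed precursor level

def simulate(params, t_eval, x_init=0.1):
    alpha, g, beta, h = params
    def rhs(t, x):
        xi = max(x[0], 1e-9)                       # guard against solver overshoot
        return [alpha * x0_level**g - beta * xi**h]
    return solve_ivp(rhs, (t_eval[0], t_eval[-1]), [x_init], t_eval=t_eval).y[0]

t_obs = np.linspace(0.0, 4.0, 15)
y_obs = simulate([1.0, 0.5, 0.8, 1.2], t_obs) \
        + np.random.default_rng(2).normal(0.0, 0.02, t_obs.size)   # fake data

fit = least_squares(lambda p: simulate(p, t_obs) - y_obs,
                    x0=[0.5, 1.0, 0.5, 1.0], bounds=(1e-3, 5.0))
print("estimated (alpha, g, beta, h):", np.round(fit.x, 3))
```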

13.

Background

The study of biological interaction networks is a central theme of systems biology. Here, we investigate the relationships between two distinct types of interaction networks: the metabolic pathway map and the protein-protein interaction network (PIN). It has long been established that successive enzymatic steps are often catalyzed by physically interacting proteins forming permanent or transient multi-enzyme complexes. Inspecting high-throughput PIN data, it was shown recently that, indeed, enzymes involved in successive reactions are generally more likely to interact than other protein pairs. In our study, we expanded this line of research to include comparisons of the underlying respective network topologies as well as to investigate whether the spatial organization of enzyme interactions correlates with metabolic efficiency.

Results

Analyzing yeast data, we detected long-range correlations between shortest paths between proteins in both network types, suggesting a mutual correspondence of the two network architectures. We discovered that the organizing principles of physical interactions between metabolic enzymes differ from those of the general PIN of all proteins. While physical interactions between proteins are generally disassortative, enzyme interactions were observed to be assortative. Thus, enzymes frequently interact with other enzymes of similar rather than different degree. Enzymes carrying high flux loads are more likely to physically interact than enzymes with lower metabolic throughput. In particular, enzymes associated with catabolic pathways as well as enzymes involved in the biosynthesis of complex molecules were found to exhibit high degrees of physical clustering. Single proteins were identified that connect major components of the cellular metabolism and may thus be essential for the structural integrity of several biosynthetic systems.

Conclusion

Our results reveal topological equivalences between the protein interaction network and the metabolic pathway network. Evolved protein interactions may contribute significantly towards increasing the efficiency of metabolic processes by permitting higher metabolic fluxes. Thus, our results shed further light on the unifying principles shaping the evolution of both the functional (metabolic) and the physical interaction network.
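
Degree assortativity itself is straightforward to compute; the sketch below contrasts a full network with an enzyme-only subgraph on random stand-in graphs (which will not reproduce the reported sign difference, but illustrate the measure):

```python
# Sketch of the assortativity calculation on random stand-in graphs (not the
# yeast data): full "PIN" versus a hypothetical enzyme-only subgraph.
import networkx as nx

pin = nx.erdos_renyi_graph(200, 0.03, seed=0)      # stand-in for the full PIN
enzyme_nodes = set(range(60))                      # hypothetical enzyme subset
enzyme_sub = pin.subgraph(enzyme_nodes)

print("PIN assortativity:   ", round(nx.degree_assortativity_coefficient(pin), 3))
print("enzyme assortativity:", round(nx.degree_assortativity_coefficient(enzyme_sub), 3))
```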

14.

Background

This article describes classical and Bayesian interval estimation of genetic susceptibility based on random samples with pre-specified numbers of unrelated cases and controls.

Results

Frequencies of genotypes in cases and controls can be estimated directly from retrospective case-control data. On the other hand, genetic susceptibility defined as the expected proportion of cases among individuals with a particular genotype depends on the population proportion of cases (prevalence). Given this design, prevalence is an external parameter and hence the susceptibility cannot be estimated based on only the observed data. Interval estimation of susceptibility that can incorporate uncertainty in prevalence values is explored from both classical and Bayesian perspective. Similarity between classical and Bayesian interval estimates in terms of frequentist coverage probabilities for this problem allows an appealing interpretation of classical intervals as bounds for genetic susceptibility. In addition, it is observed that both the asymptotic classical and Bayesian interval estimates have comparable average length. These interval estimates serve as a very good approximation to the "exact" (finite sample) Bayesian interval estimates. Extension from genotypic to allelic susceptibility intervals shows dependency on phenotype-induced deviations from Hardy-Weinberg equilibrium.

Conclusions

The suggested classical and Bayesian interval estimates appear to perform reasonably well. Generally, the exact Bayesian interval estimation method is recommended for genetic susceptibility; however, the asymptotic classical and approximate Bayesian methods are adequate for sample sizes of at least 50 cases and controls.
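
The susceptibility being interval-estimated above is, at its core, a Bayes'-theorem calculation combining case-control genotype frequencies with an externally supplied prevalence; a point-estimate sketch with hypothetical numbers:

```python
# Point-estimate sketch: genetic susceptibility P(case | genotype) from
# case-control genotype frequencies and an assumed prevalence (Bayes' theorem).
def susceptibility(p_geno_given_case, p_geno_given_control, prevalence):
    p_geno = (p_geno_given_case * prevalence
              + p_geno_given_control * (1.0 - prevalence))
    return p_geno_given_case * prevalence / p_geno

# hypothetical numbers: genotype in 30% of cases, 10% of controls, 1% prevalence
print(susceptibility(0.30, 0.10, 0.01))   # about 0.029
```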

15.
Environmental distribution and bioremediation of hydrocarbon pollutants are described in the literature with complex mathematical models. Better understanding and easier model application require detailed model analysis. In this work, local sensitivity analysis of the kinetic parameters and metabolic control analysis of the biological part of the integrated BTEX bioremediation model were performed. Local sensitivity analysis revealed that the dissolved oxygen concentration ($S_{O}$) and the particulate iron (III) oxide concentration ($S_{Fe}$) were the most sensitive to both positive and negative parameter value perturbations. In the case of model reactions, aerobic growth ($r_1$) and aerobic growth on acetate ($r_{13}$) were observed to be the most sensitive. The elasticity, flux control, and concentration control coefficients were estimated by applying the metabolic control analysis methodology. Metabolic control analysis revealed a positive effect of ammonium on all analysed model reactions. The results also indicated the importance of perturbing the level of the enzyme catalysing iron reduction on acetate for the model fluxes, as well as the importance of the level of the enzyme catalysing aerobic growth for the model metabolite concentrations. These results can be used in planning an optimal operating strategy for BTEX bioremediation.
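
The relative (normalized) local sensitivity used in such analyses can be sketched by finite differences on a stand-in model; the function and parameter names below are hypothetical, not the integrated BTEX model:

```python
# Relative local sensitivity S = (p/y) * dy/dp by finite differences on a
# stand-in model; parameter names and the model itself are invented.
import numpy as np

def model_output(params):
    k_aerobic, k_iron, o2_supply = params          # hypothetical parameters
    return o2_supply / (1.0 + k_aerobic) + 0.1 * k_iron

p0 = np.array([0.8, 0.3, 6.0])
y0 = model_output(p0)

for j, name in enumerate(["k_aerobic", "k_iron", "o2_supply"]):
    dp = p0.copy()
    dp[j] *= 1.01                                   # +1% perturbation
    dydp = (model_output(dp) - y0) / (0.01 * p0[j])
    print(f"relative sensitivity to {name}: {dydp * p0[j] / y0:+.3f}")
```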

16.

Background

It was recently shown that the treatment effect of an antibody can be described by a consolidated parameter which includes the reaction rates of the receptor-toxin-antibody kinetics and the relative concentration of reacting species. As a result, any given value of this parameter determines an associated range of antibody kinetic properties and its relative concentration in order to achieve a desirable therapeutic effect. In the current study we generalize the existing kinetic model by explicitly taking into account the diffusion fluxes of the species.

Results

A refined model of receptor-toxin-antibody (RTA) interaction is studied numerically. The protective properties of an antibody against a given toxin are evaluated for a spherical cell placed into a toxin-antibody solution. The parameters chosen for the numerical simulations approximately correspond to the practically relevant values reported in the literature, with ranges of variation wide enough to demonstrate different regimes of intracellular transport.

Conclusions

The proposed refinement of the RTA model may become important for the consistent evaluation of the protective potential of an antibody and for the estimation of the time period during which the application of this antibody is most effective. It can be a useful tool for in vitro selection of potential protective antibodies for progression to in vivo evaluation.

17.

Background

In many domains, scientists build complex simulators of natural phenomena that encode their hypotheses about the underlying processes. These simulators can be deterministic or stochastic, fast or slow, constrained or unconstrained, and so on. Optimizing the simulators with respect to a set of parameter values is common practice, resulting in a single parameter setting that minimizes an objective subject to constraints.

Results

We propose algorithms for post-optimization posterior evaluation (POPE) of simulators. The algorithms compute and visualize all simulations that can generate results of the same or better quality than the optimum, subject to constraints. These optimization posteriors are desirable for a number of reasons, among which are easy interpretability, automatic parameter sensitivity and correlation analysis, and posterior predictive analysis. Our algorithms are simple extensions to an existing simulation-based inference framework called approximate Bayesian computation. POPE is applied to two biological simulators: a fast and stochastic simulator of stem-cell cycling and a slow and deterministic simulator of tumor growth patterns.

Conclusions

POPE allows the scientist to explore and understand the role that constraints, both on the input and the output, have on the optimization posterior. As a Bayesian inference procedure, POPE provides a rigorous framework for the analysis of the uncertainty of an optimal simulation parameter setting.
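
A rough sketch of the POPE idea on a trivial stand-in simulator: sample parameters, simulate, and keep every feasible setting whose objective is at least as good as the optimum, up to a small tolerance. Everything below is illustrative, not from the paper.

```python
# Rough POPE-style sketch: retain all feasible parameter settings whose simulated
# objective matches or beats the optimum (within a tolerance), then inspect them.
import numpy as np

rng = np.random.default_rng(3)

def simulator(theta):
    # hypothetical stochastic simulator; lower objective is better
    return (theta[0] - 1.0) ** 2 + (theta[1] + 0.5) ** 2 + rng.normal(0.0, 0.05)

def feasible(theta):
    return theta[0] + theta[1] <= 2.0              # hypothetical constraint

samples = rng.uniform(-2.0, 2.0, size=(20000, 2))
objective = np.array([simulator(t) for t in samples])
ok = np.array([feasible(t) for t in samples])

best = objective[ok].min()                         # the optimum found by optimization
posterior = samples[ok & (objective <= best + 0.1)]
print("optimization-posterior sample size:", len(posterior))
print("parameter spread around the optimum:", np.round(posterior.std(axis=0), 3))
```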

18.
19.

Background

Aprotinin has been shown to be effective in reducing peri-operative blood loss and the need for re-operation due to continued bleeding in cardiac surgery. The lysine analogues tranexamic acid (TXA) and epsilon aminocaproic acid (EACA) are cheaper, but it is not known if they are as effective as aprotinin.

Methods

Studies were identified by searching electronic databases and bibliographies of published articles. Data from head-to-head trials were pooled using a conventional (Cochrane) meta-analytic approach and a Bayesian approach which estimated the posterior probability of TXA and EACA being equivalent to aprotinin; we used as a non-inferiority boundary a 20% increase in the rates of transfusion or re-operation because of bleeding.

Results

Peri-operative blood loss was significantly greater with TXA and EACA than with aprotinin: weighted mean differences were 106 ml (95% CI 37 to 227 ml) and 185 ml (95% CI 134 to 235 ml), respectively. The pooled relative risks (RR) of receiving an allogeneic red blood cell (RBC) transfusion with TXA and EACA, compared with aprotinin, were 1.08 (95% CI 0.88 to 1.32) and 1.14 (95% CI 0.84 to 1.55), respectively. The equivalent Bayesian posterior mean relative risks were 1.15 (95% Bayesian Credible Interval [BCI] 0.90 to 1.68) and 1.21 (95% BCI 0.79 to 1.82), respectively. For transfusion, using a 20% non-inferiority boundary, the posterior probabilities of TXA and EACA being non-inferior to aprotinin were 0.82 and 0.76, respectively. For re-operation the Cochrane RR for TXA vs. aprotinin was 0.98 (95% CI 0.51 to 1.88), compared with a posterior mean Bayesian RR of 0.63 (95% BCI 0.16 to 1.46). The posterior probability of TXA being non-inferior to aprotinin was 0.92, but this was sensitive to the inclusion of one small trial.

Conclusion

The available data are conflicting regarding the equivalence of lysine analogues and aprotinin in reducing peri-operative bleeding, transfusion and the need for re-operation. Decisions are sensitive to the choice of clinical outcome and non-inferiority boundary. The data are an uncertain basis for replacing aprotinin with the cheaper lysine analogues in clinical practice. Progress has been hampered by small trials and failure to study clinically relevant outcomes.
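
The Bayesian non-inferiority probability reported above is conceptually simple to reproduce on simulated counts: draw posterior event risks per arm, form the relative risk, and evaluate the probability that it falls below the 20% boundary. The counts below are hypothetical, not the trial data.

```python
# Posterior non-inferiority probability on invented counts: Beta posteriors per
# arm, relative risk, and P(RR < 1.20) as the non-inferiority probability.
import numpy as np

rng = np.random.default_rng(4)
events_txa, n_txa = 45, 150        # hypothetical transfusions / patients on TXA
events_apro, n_apro = 40, 150      # hypothetical transfusions / patients on aprotinin

risk_txa = rng.beta(1 + events_txa, 1 + n_txa - events_txa, size=100_000)
risk_apro = rng.beta(1 + events_apro, 1 + n_apro - events_apro, size=100_000)
rr = risk_txa / risk_apro

print("posterior mean RR:", round(rr.mean(), 3))
print("P(non-inferior, RR < 1.20):", round((rr < 1.20).mean(), 3))
```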

20.

Background and aims

Root length density (RLD) is a parameter that is difficult to measure, but crucial for estimating water and nutrient uptake by plants. In this study a novel approach is presented to characterize the 3-D root length distribution by supplementing data on the 3-D distribution of root intersections with root length density data from a limited number of soil cores.

Methods

The method was evaluated in a virtual experiment using the RootTyp model and in a field experiment with cauliflower (Brassica oleracea L. botrytis) and leek (Allium porrum L.).

Results

The virtual experiment shows that total root length and root length distribution can be accurately estimated using the novel approach. Implementation of the method in a field experiment was successful for characterizing the growth of the root distribution with time both for cauliflower and leek. In contrast with the virtual experiment, total root length could not be estimated based upon root intersection measurements in the field.

Conclusions

The novel method of combining root intersection data with root length density data from core samples is a powerful tool to supply root water uptake models with root system information.

