Similar documents
20 similar documents found.
1.
2.
《动物分类学报》2017,(1):46-58
Distinguishing species or populations from morphometric data is generally done through multivariate analyses, in particular discriminant analysis. We explored another approach based on the maximum likelihood method. Simple statistics based on the assumption of a normal distribution for a single variable allow one to compute the chance of observing a particular datum (or sample) in a given reference group. When data are described by more than one variable, the maximum likelihood (MLi) approach allows these chances to be combined to find the best fit for the data. Such an approach assumes independence between variables. The assumptions of normally distributed variables and independence between them are frequently not met in morphometrics, but improvements may be obtained after some mathematical transformations. Provided there is strict anatomical correspondence of variables between unknown and reference data, MLi classification produces consistent classifications. We explored this approach using various input data and compared validated classification scores with those obtained after Mahalanobis distance-based classification. The simplicity of the method, its fast computation, performance, and versatility make it an interesting complement to other classification techniques.
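A minimal sketch of the likelihood-based classification described above, assuming normally distributed, independent variables; the group labels and measurements below are simulated placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import norm

def mli_classify(x, groups):
    """Assign observation x to the reference group with the highest
    joint likelihood, assuming each variable is normal and independent."""
    scores = {}
    for name, ref in groups.items():
        mu = ref.mean(axis=0)
        sigma = ref.std(axis=0, ddof=1)
        # Independence: the joint log-likelihood is the sum of
        # per-variable log-densities.
        scores[name] = norm.logpdf(x, mu, sigma).sum()
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
groups = {"A": rng.normal([10.0, 5.0], 1.0, size=(30, 2)),
          "B": rng.normal([12.0, 7.0], 1.0, size=(30, 2))}
print(mli_classify(np.array([11.8, 6.9]), groups))  # expected: "B"
```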

3.
The relationship between dose mean lineal energy and relative variance has been exploited previously to derive yD from the calculated variance in current measurements in steady and uniform radiation fields. Recently Kellerer and Rossi made the observation that utilization of two detectors can make the variance technique practicable in time-varying fields. We report here the first measurements of yD for 10 MeV X rays and 9 and 18 MeV electrons from a pulsed linear accelerator using the variance method. Two independent analog-to-digital converters were used to obtain data from two spherical proportional counters in synchrony with the beam pulse. The method is described in detail and results are reported for site diameters of 1/2, 1, and 2 microns. Data for an accurate determination of yD can be obtained with this technique in less than 1 min, making possible an essentially "on line" determination of yD or zD in a clinical situation.
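A sketch of the two-detector variance-covariance estimate that underlies this technique, under the standard assumption that beam fluctuations are common to both counters while single-event fluctuations are independent; the pulse-by-pulse readings and units are hypothetical.

```python
import numpy as np

def dose_mean_event_size(d1, d2):
    """Two-detector variance-covariance estimate (after Kellerer and Rossi).
    d1, d2: pulse-by-pulse dose readings from the two counters.
    The shared (beam) fluctuation appears in the relative covariance and
    is subtracted from the relative variance of one detector; the result
    times the mean dose is the dose-mean event size zD. Converting zD to
    yD then requires the site's mass and mean chord length."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    v1 = np.var(d1, ddof=1) / np.mean(d1) ** 2                 # relative variance
    c12 = np.cov(d1, d2)[0, 1] / (np.mean(d1) * np.mean(d2))   # relative covariance
    return np.mean(d1) * (v1 - c12)
```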

4.
Anthropometric data on 12 variables in 19 villages of the Yanomama Indians demonstrate significant heterogeneity in physique among villages of this tribe. Mahalanobis' distances (D2) calculated from the data lead to the tentative conclusion of a general correspondence between anthropometric and geographic distances separating villages. The mean stature of the Yanomama is smaller than that of most other South American tribes which have been measured, and the Yanomama are genetically distinct from the other small Indians as shown by genetic distances based on allele frequencies for a variety of genetic markers. Since some subjects were measured more than once by the same and by different observers, it was possible to calculate approximate estimates of variance within and between observers. Univariate analysis indicates that face height and nose height are especially susceptible to systematic differences in technique between observers. The variances obtained in this field study compare favorably with those of some classical laboratory studies described in the literature. It was found that measurement error nevertheless probably makes a substantial contribution to anthropometric distance between villages. The median error variance as a fraction of that of Herskovits ('30) is 0.62 for the seven measurements in common with this study. The median value of the error variance for the 12 variables in this study is between 16% and 17% of the total variance.
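A short sketch of the Mahalanobis D2 statistic used for the between-village comparisons; the mean vectors and pooled covariance below are illustrative inputs, not the Yanomama data.

```python
import numpy as np

def mahalanobis_d2(mean_a, mean_b, pooled_cov):
    """Mahalanobis distance squared (D2) between two group mean vectors,
    given the pooled within-group covariance matrix of the variables."""
    diff = np.asarray(mean_a) - np.asarray(mean_b)
    return float(diff @ np.linalg.solve(pooled_cov, diff))

# Two villages described by two traits (e.g. stature, face height).
print(mahalanobis_d2([155.2, 10.9], [153.8, 11.4],
                     [[25.0, 1.2], [1.2, 0.8]]))
```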

5.

Background

An important use of data obtained from microarray measurements is the classification of tumor types with respect to genes that are either up- or down-regulated in specific cancer types. A number of algorithms have been proposed to obtain such classifications. These algorithms usually require parameter optimization to obtain accurate results depending on the type of data. Additionally, it is highly critical to find an optimal set of markers among those up- or down-regulated genes that can be clinically utilized to build assays for the diagnosis or to follow the progression of specific cancer types. In this paper, we employ a mixed integer programming based classification algorithm named the hyper-box enclosure method (HBE) for the classification of some cancer types with a minimal set of predictor genes. This optimization-based method, a user-friendly and efficient classifier, may allow clinicians to diagnose and follow the progression of certain cancer types.

Methodology/Principal Findings

We apply the HBE algorithm to some well-known data sets such as leukemia, prostate cancer, diffuse large B-cell lymphoma (DLBCL), and small round blue cell tumors (SRBCT) to find predictor genes that can be utilized for diagnosis and prognosis in a robust manner with high accuracy. Our approach does not require any modification or parameter optimization for each data set. Additionally, the information gain attribute evaluator, relief attribute evaluator, and correlation-based feature selection methods are employed for gene selection. The results are compared with those from other studies, and the biological roles of the selected genes in the corresponding cancer types are described.

Conclusions/Significance

The overall performance of our algorithm was better than that of the other algorithms reported in the literature and the classifiers found in the WEKA data-mining package. Since it requires no parameter optimization and performs consistently with a very high prediction rate on different types of data sets, the HBE method is an effective and consistent tool for cancer type prediction with a small number of gene markers.
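The published method selects boxes by mixed integer programming; the sketch below only illustrates the enclosure idea with per-class axis-aligned bounding boxes over selected predictor genes, falling back to distance-to-box when no box encloses a sample. The data shapes are hypothetical.

```python
import numpy as np

def fit_boxes(X, y):
    """One axis-aligned box (lo, hi) per class, spanning its training
    points. HBE instead chooses box bounds via mixed integer programming
    to minimize training misclassification."""
    return {c: (X[y == c].min(axis=0), X[y == c].max(axis=0))
            for c in np.unique(y)}

def classify(x, boxes):
    def box_distance(bounds):
        lo, hi = bounds
        # Zero inside the box; otherwise Euclidean distance to its surface.
        return np.linalg.norm(np.maximum(0.0, np.maximum(lo - x, x - hi)))
    return min(boxes, key=lambda c: box_distance(boxes[c]))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(4, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
print(classify(np.array([3.8, 4.2, 3.9]), fit_boxes(X, y)))  # expected: 1
```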

6.
Processes of adaptation in measurements of performance (cycle times, variance of cycle times, informatory component of time, and errors) and of physiological strain (electromyograms of musculus extensor digitorum and musculus rhomboideus, horizontal and vertical electrooculogram, heart rate, and heart rate variability) are presented and described in type and frequency. Simultaneous and successive reactions of the measured variables in dependence on shift time and on days are described. They are classified according to the causes "exercise" and "emotional habituation" and discussed in a model "experimenter - experimental situation".

7.
Due to the time scale of circular dichroism (CD) measurements, it is theoretically possible to deconvolute such a spectrum if the pure CD spectra differ significantly from one another. In the last decade, several methods have been published aiming to obtain the conformational weights, or percentages (the coefficients of a linear combination), of the so-called typical secondary structural elements making up the three-dimensional structure of proteins. Two methods that can be used to determine the secondary structures of proteins are described here. The first method, called LINCOMB, is a simple algorithm based on a least-squares fit with a set of reference spectra representing the known secondary structures; it yields an estimation of the weights attributed to alpha-helix, beta-pleated sheet (mainly antiparallel), beta-turns, unordered form, and aromatic/disulfide (or nonpeptide) contributions of the protein being analyzed. This method requires a "template" or reference curve set, which was obtained from the second method. The second method, "convex constraint analysis," is a general deconvolution method for a CD spectra set of any variety of conformational types. The algorithm, based on a set of three constraints, is able to deconvolute a set of CD curves into its common "pure"-component curves and conformational weights. To analyze a single CD spectrum with this method, the spectrum is appended to the data set used as a reference data set. To assess the reliability of the algorithm and provide a guideline to its usage, some applications are presented.
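A minimal LINCOMB-style sketch: fit a measured CD spectrum as a non-negative linear combination of reference basis spectra and renormalize the weights. The non-negativity constraint and renormalization are one reasonable choice; the original algorithm's exact constraints may differ.

```python
import numpy as np
from scipy.optimize import nnls

def lincomb_weights(spectrum, basis):
    """spectrum: measured CD values over wavelengths, shape (n,).
    basis: reference spectra as columns, shape (n, k), e.g. alpha-helix,
    beta-sheet, beta-turn, unordered, aromatic/disulfide.
    Returns fractional weights per secondary-structure class and the
    least-squares residual of the fit."""
    w, residual = nnls(np.asarray(basis), np.asarray(spectrum))
    return w / w.sum(), residual
```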

8.
Kinetics of the daunomycin-DNA interaction
The kinetics of the interaction of daunomycin with calf thymus DNA are described. Stopped-flow and temperature-jump relaxation methods, using absorption detection, were used to study the binding reaction. Three relaxation times were observed, all of which are concentration dependent, although the two slower relaxations approach constant values at high reactant concentrations. Relaxation times over a wide range of concentrations were gathered, and the data were fit by a minimal mechanism in which a rapid bimolecular association step is followed by two sequential isomerization steps. The six rate constants for this mechanism were extracted from our data by relaxation analysis. The values determined for the six rate constants may be combined to calculate an overall equilibrium constant that is in excellent agreement with that obtained by independent equilibrium measurements. Additional stopped-flow experiments, using first sodium dodecyl sulfate to dissociate bound drug and second pseudo-first-order conditions to study the fast bimolecular step, provide independent verification of three of the six rate constants. The temperature dependence of four of the six rate constants was measured, allowing estimates of the activation energy of some of the steps to be made. We speculate that the three steps in the proposed mechanism may correspond to a rapid "outside" binding of daunomycin to DNA, followed by intercalation of the drug, followed by either conformational adjustment of the drug or DNA binding site or redistribution of bound drug to preferred sites.
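For a sequential mechanism (fast bimolecular binding followed by two isomerizations), the six rate constants combine into stepwise equilibrium constants; one common way to form an overall binding constant is to count every bound species, as sketched below. The rate constants are placeholders, not the published values.

```python
# D + DNA <=> C1 <=> C2 <=> C3 (hypothetical values)
k1, km1 = 6.0e6, 1.0e3   # association (M^-1 s^-1) / dissociation (s^-1)
k2, km2 = 60.0, 15.0     # first isomerization, forward / reverse (s^-1)
k3, km3 = 2.0, 1.0       # second isomerization, forward / reverse (s^-1)

K1, K2, K3 = k1 / km1, k2 / km2, k3 / km3
# All bound species (C1, C2, C3) contribute to the apparent constant:
K_app = K1 * (1 + K2 + K2 * K3)
print(f"K_app = {K_app:.3g} M^-1")
```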

9.
Bin Gao, Xu Liu, Hongzhe Li, Yuehua Cui. Biometrics, 2019, 75(4): 1063-1075
In a living organism, tens of thousands of genes are expressed and interact with each other to achieve necessary cellular functions. Gene regulatory networks contain information on regulatory mechanisms and the functions of gene expressions. Thus, incorporating network structures, discerned either through biological experiments or statistical estimations, could potentially increase the selection and estimation accuracy of genes associated with a phenotype of interest. Here, we considered a gene selection problem using gene expression data and the graphical structures found in gene networks. Because gene expression measurements are intermediate phenotypes between a trait and its associated genes, we adopted an instrumental variable regression approach. We treated genetic variants as instrumental variables to address the endogeneity issue. We proposed a two-step estimation procedure. In the first step, we applied the LASSO algorithm to estimate the effects of genetic variants on gene expression measurements. In the second step, the projected expression measurements obtained from the first step were treated as input variables. A graph-constrained regularization method was adopted to improve the efficiency of gene selection and estimation. We theoretically showed the selection consistency of the estimation method and derived the bound of the estimates. Simulation and real data analyses were conducted to demonstrate the effectiveness of our method and to compare it with its counterparts.
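A compact sketch of the two-step idea on simulated data: LASSO projects each expression trait onto the genetic instruments, and a network (Laplacian) penalty in step two is absorbed by augmenting the design matrix. All shapes, penalty values, and the identity Laplacian are stand-ins, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, q = 200, 50, 80                     # samples, genes, variants
Z = rng.normal(size=(n, q))               # instruments (genetic variants)
X = Z @ rng.normal(size=(q, p)) * 0.2 + rng.normal(size=(n, p))
y = X[:, :3].sum(axis=1) + rng.normal(size=n)   # trait
L = np.eye(p)                             # stand-in for the network Laplacian

# Step 1: project expression onto the instruments (per-gene LASSO).
X_hat = np.column_stack(
    [Lasso(alpha=0.05).fit(Z, X[:, j]).predict(Z) for j in range(p)])

# Step 2: graph-constrained regularization. The quadratic penalty
# lam * b' L b is handled by stacking sqrt(lam) * chol(L)' under X_hat,
# so one LASSO fit yields sparsity plus network smoothness.
lam = 1.0
R = np.linalg.cholesky(L + 1e-8 * np.eye(p)).T
X_aug = np.vstack([X_hat, np.sqrt(lam) * R])
y_aug = np.concatenate([y, np.zeros(p)])
beta = Lasso(alpha=0.02, fit_intercept=False).fit(X_aug, y_aug).coef_
print(np.nonzero(beta)[0])                # indices of selected genes
```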

10.
In biomechanical joint-motion analyses, the continuous motion to be studied is often approximated by a sequence of finite displacements, and the Finite Helical Axis (FHA) or "screw axis" for each displacement is estimated from position measurements on a number of anatomical or artificial landmarks. When FHA parameters are directly determined from raw (noisy) displacement data, both the position and the direction of the FHA are ill-determined, in particular when the sequential displacement steps are small. This implies that, under certain conditions, the continuous pathways of joint motions cannot be adequately described. The purpose of the present experimental study is to investigate the applicability of smoothing (or filtering) techniques in those cases where FHA parameters are ill-determined. Two different quintic-spline smoothing methods were used to analyze the motion data obtained with Roentgenstereophotogrammetry in two experiments: one concerning carpal motions in a wrist-joint specimen, and one involving a kinematic laboratory model in which the axis positions are known a priori. The smoothed and non-smoothed FHA parameter errors were compared. The influences of the number of samples and the size of the sampling interval (displacement step) were investigated, as were the effects of equidistant and nonequidistant sampling conditions and noise invariance.
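A sketch of quintic-spline smoothing of landmark trajectories before FHA estimation; the smoothing factor s plays the role of the filter setting, and its value here is only a placeholder to be tuned against the measurement noise.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_landmarks(t, coords, s=1e-3):
    """Quintic-spline (k=5) smoothing of one landmark's coordinates.
    t: sample times, shape (n,); coords: raw positions, shape (n, 3).
    Returns smoothed positions from which finite helical axes can be
    computed with better-conditioned small displacement steps."""
    return np.column_stack([
        UnivariateSpline(t, coords[:, i], k=5, s=s)(t) for i in range(3)])
```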

11.
Summary. Within a given root hair, the velocity of particle movement, here equated with cytoplasmic streaming, is usually very variable, making it difficult to distinguish experimentally induced changes in velocity from those due to natural variability. Time-series analysis has been combined with piece-wise linear regression to provide an objective means of assessing the results of experiments in which the rate of streaming was recorded repeatedly from the same root hair. Ordinarily, regression and analysis of variance assume that the values in a data set are not strongly autocorrelated. These methods are applicable to sets of single observations (thus, they cannot be used with data obtained by repeated sampling of the same cell). By contrast, time-series analysis can take account of autocorrelations within the data and allows trends in the noise structure to be separated from treatment effects. The statistical methods described in the present paper are illustrated using the results from experiments in which the effect of α-naphthalene acetic acid on streaming velocity in tomato root hairs was recorded. The conclusions reached are in accord with published accounts of variation in protoplasmic streaming and the known behaviour of cells in response to auxins. Our results also provide an explanation for the failure of some workers to observe any consistent changes in streaming velocity in response to exogenous auxin. The method described makes use of well documented statistical techniques and could be applied to other investigations in which sequential measurements of quantifiable parameters, such as fluorometric determination of intracellular pH or concentrations of calcium, are made on intact living cells.

Abbreviations: NAA, α-naphthalene acetic acid; ARMA, auto-regressive moving average models; AIC, Akaike information criterion; d.f., degrees of freedom
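A sketch of the kind of analysis the paper describes: an ARMA-error model with a treatment regressor, so a step change after auxin application can be tested despite autocorrelated streaming velocities. The simulated series, AR coefficient, and model order are illustrative, not the study's values.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
n, change_at = 200, 100
# AR(1) noise around a baseline velocity, with a step after treatment.
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(scale=0.5)
velocity = 10.0 + noise - 1.5 * (np.arange(n) >= change_at)

treatment = (np.arange(n) >= change_at).astype(float)
fit = ARIMA(velocity, exog=treatment, order=(1, 0, 1)).fit()
print(fit.summary().tables[1])  # treatment effect tested with ARMA errors
```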

12.
Patterns of growth in wild bottlenose dolphins, Tursiops truncatus
A. J. Read, R. S. Wells, A. A. Hohn, M. D. Scott. Journal of Zoology, 1993, 231(1): 107-123
The growth of bottlenose dolphins is described from observations made during a capture-release programme that has operated in coastal waters of the eastern Gulf of Mexico from 1970 to the present. Measurements of standard length, girth and body mass were recorded from 47 female and 49 male dolphins, some captured as many as nine times. Ages were known from approximate birth dates or estimated from counts of dentinal growth layers. In all three measurements, females grew at a faster initial rate than males, but reached asymptotic size at an earlier age. This extended period of growth in males resulted in significant sexual dimorphism in length, girth and mass at physical maturity. The growth of both sexes was well described by three-parameter Gompertz models using either cross-sectional data or a mixture of longitudinal and cross-sectional data. There was considerable variation in size-at-age for both sexes in all year classes. Residuals of size measurements were used to derive measures of relative size for individual dolphins; most dolphins demonstrated little ontogenetic change in relative size. Body mass was adequately predicted by multiple regression equations that incorporated both length and girth as independent variables.
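A sketch of fitting the three-parameter Gompertz model mentioned above; the age and length values are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    """Three-parameter Gompertz growth curve: asymptotic size A,
    displacement b, and growth-rate constant k."""
    return A * np.exp(-b * np.exp(-k * t))

age = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0, 15.0, 25.0])   # years
length = np.array([130, 160, 200, 230, 240, 248, 250, 252])   # cm
(A, b, k), _ = curve_fit(gompertz, age, length, p0=[250.0, 0.7, 0.4])
print(f"asymptotic length ~ {A:.0f} cm, rate k ~ {k:.2f} per year")
```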

13.
14.
Diagnostic tests can be used to classify subjects as "diseased" or "undiseased". If measurements are obtained on a quantitative scale, they may additionally serve to quantify the damage caused by the disease. If a whole bundle of measurements is available but no gold standard exists, the evaluation of these measurements may be improved by using latent variables. The subject of this investigation is an application of latent variable techniques to the evaluation of diagnostic measurements concerning paired organs. A method is presented which allows one to quantify the association between the true disease damage of the affected organs, as well as the corresponding association of the error components of several measurements. The method is based upon a one-factor model of the diagnostic measurements. It supports the investigation of the pathogenetic process of the underlying disease and the improvement of diagnostic measurements, and it is applied to data from the Erlangen Glaucoma Registry.

15.
Interrelations between some forms of group variation (FGVs) (age, sex, geographic, inter-species, and among-breed differences) of 12 to 15 measurable skull traits are studied in 6 mammal species (pine marten, polar fox, Przewalski's horse, and 3 jird species) by means of dispersion analysis (model III, MANOVA). The above FGVs are treated as factors in the MANOVA, and skull traits as dependent variables. To obtain commensurable estimates for the FGVs, each is assessed numerically as the portion of its dispersion within the entire morphological disparity defined for each character (or set of characters) by MANOVA. The data obtained indicate a wide diversity of interrelations between FGVs. It is shown that statistical analysis of the significance of joint effects of FGVs does not substitute for the analysis of numerical interrelations of their dispersion portions. It is concluded that it is unproductive to study such interrelations as simple "statistical regularities" like the Kluge-Kerfoot phenomenon, so character sets are not to be treated as statistical ensembles. A content-wise null model for FGVs of measurable traits is formulated, according to which there is a "background" age variation while other FGVs are its derivatives. Accordingly, in the absence of other factors structuring the morphological disparity under investigation, a positive correlation between FGVs is to be anticipated (strong succession). When significant deviations from the postulated correlation are observed, other factors regulating the respective FGVs, not reducible to age variation, are to be supposed (weak succession). Possible interpretations of interrelations between age variation and some other FGVs in carnivores are considered. Craniological variation in Przewalski's horse is only slightly affected by maintenance conditions under domestication; a significant influence of other factors is to be supposed. Negative correlation between geographic and inter-species differences in the jirds (genus Meriones) could be interpreted as evidence for speciation described by the punctuated equilibrium model.

16.
We develop an approach for the exploratory analysis of gene expression data based upon blind source separation techniques. This approach exploits higher-order statistics to identify a linear model for (logarithms of) expression profiles, described as linear combinations of "independent sources." As a result, it yields "elementary expression patterns" (the "sources"), which may be interpreted as potential regulation pathways. Further analysis of the so-obtained sources shows that they are generally characterized by a small number of specific coexpressed or antiexpressed genes. In addition, the projections of the expression profiles onto the estimated sources often provide significant clustering of conditions. The algorithm relies on a large number of runs of "independent component analysis" with random initializations, followed by a search for "consensus sources." It then provides estimates of the independent sources, together with an assessment of their robustness. The results obtained on two datasets (namely, breast cancer data and Bacillus subtilis sulfur metabolism data) show that some of the obtained gene families correspond to well-known families of coregulated genes, which validates the proposed approach.
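A simplified sketch of the multi-run ICA-with-consensus idea: run FastICA from many random initializations and score how reproducibly each source reappears across runs. The expression matrix is simulated, and matching sources by maximum absolute dot product of unit-norm columns is one straightforward choice.

```python
import numpy as np
from sklearn.decomposition import FastICA

def consensus_sources(X, n_sources=5, n_runs=20):
    """X: log expression profiles (samples x genes).
    Returns the sources of a reference run plus, for each, its mean
    best-match score across the other runs (a robustness measure)."""
    runs = []
    for seed in range(n_runs):
        ica = FastICA(n_components=n_sources, random_state=seed,
                      max_iter=2000)
        S = ica.fit_transform(X.T)            # genes x sources
        runs.append(S / np.linalg.norm(S, axis=0))
    ref = runs[0]
    matches = np.array([np.abs(ref.T @ S).max(axis=1) for S in runs[1:]])
    return ref, matches.mean(axis=0)

X = np.random.default_rng(3).normal(size=(12, 500))  # 12 conditions
sources, robustness = consensus_sources(X)
print(robustness)  # values near 1 indicate reproducible sources
```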

17.
A. J. Wolfe, N. H. Mendelson. Microbios, 1988, 53(214): 47-61
The range of macrofibre twist states that can be achieved by various strains of Bacillus subtilis has been examined as a function of two variables: growth temperature and medium composition. Two graphic techniques were utilized to organize and compare data pertaining to the complex phenotypes of macrofibre mutants. The steady-state twist states of strains were determined by qualitative examination. Structures were produced at each of the extremes of temperature and medium composition. Patterns obtained from a graphical representation of these data permitted the strains to be grouped into three classes: (A) strains in which helix-hand inversion could be triggered by nutrition at either 20°C or 48°C, and by temperature in either medium; (B) strains in which a more limited set of conditions could induce inversion; and (C) strains which were restricted to either the right- or left-hand domain of twist states. Genetic factors governing these patterns were examined. Quantitative measurements of static twist were obtained over the entire temperature and media range, providing a detailed picture of the dependence of twist upon these environmental influences. Although the macrofibre twist state phenotype (as a function of both variables over the entire range of conditions) of each strain was unique, common features were discernible in all strains. Although some strains were limited to a single helix hand under all conditions studied, none were found to be restricted to a single twist state.

18.
Normalization is an important step in the analysis of quantitative proteomics data. If this step is ignored, systematic biases can lead to incorrect assumptions about regulation. Most statistical procedures for normalizing proteomics data have been borrowed from genomics where their development has focused on the removal of so-called ‘batch effects.’ In general, a typical normalization step in proteomics works under the assumption that most peptides/proteins do not change; scaling is then used to give a median log-ratio of 0. The focus of this work was to identify other factors, derived from knowledge of the variables in proteomics, which might be used to improve normalization. Here we have examined the multi-laboratory data sets from Phase I of the NCI's CPTAC program. Surprisingly, the most important bias variables affecting peptide intensities within labs were retention time and charge state. The magnitude of these observations was exaggerated in samples of unequal concentrations or “spike-in” levels, presumably because the average precursor charge for peptides with higher charge state potentials is lower at higher relative sample concentrations. These effects are consistent with reduced protonation during electrospray and demonstrate that the physical properties of the peptides themselves can serve as good reporters of systematic biases. Between labs, retention time, precursor m/z, and peptide length were most commonly the top-ranked bias variables, over the standardly used average intensity (A). A larger set of variables was then used to develop a stepwise normalization procedure. This statistical model was found to perform as well or better on the CPTAC mock biomarker data than other commonly used methods. Furthermore, the method described here does not require a priori knowledge of the systematic biases in a given data set. These improvements can be attributed to the inclusion of variables other than average intensity during normalization.

The number of laboratories using MS as a quantitative tool for protein profiling continues to grow, propelling the field forward past simple qualitative measurements (i.e. cataloging), with the aim of establishing itself as a robust method for detecting proteomic differences. By analogy, semiquantitative proteomic profiling by MS can be compared with measurement of relative gene expression by genomics technologies such as microarrays or, newer, RNAseq measurements. While proteomics is disadvantaged by the lack of a molecular amplification system for proteins, successful reports from discovery experiments are numerous in the literature and are increasing with advances in instrument resolution and sensitivity.

In general, methods for performing relative quantitation can be broadly divided into two categories: those employing labels (e.g. iTRAQ, TMT, and SILAC (1)) and so-called “label-free” techniques. Labeling methods involve adding some form of isobaric or isotopic label(s) to the proteins or peptides prior to liquid chromatography-tandem MS (LC-MS/MS) analysis. Chemical labels are typically applied during sample processing, and isotopic labels are commonly added during cell culture (i.e. metabolic labeling). One advantage of label-based methods is that the two (or more) differently-labeled samples can be mixed and run in single LC-MS analyses. This is in contrast to label-free methods which require the samples to be run independently and the data aligned post-acquisition.

Many labs employ label-free methods because they are applicable to a wider range of samples and require fewer sample processing steps. Moreover, data from qualitative experiments can sometimes be re-analyzed using label-free software tools to provide semiquantitative data. Advances in these software tools have been extensively reviewed (2). While analysis of label-based data primarily uses full MS scan (MS1) or tandem MS scan (MS2) ion current measurements, analysis of label-free data can employ simple counts of confidently identified tandem mass spectra (3). So-called spectral counting makes the assumption that the number of times a peptide is identified is proportional to its concentration. These values are sometimes summed across all peptides for a given protein and scaled by protein length. Relative abundance can then be calculated for any peptide or protein of interest. While this approach may be easy to perform, its usefulness is particularly limited in smaller data sets and/or when counts are low.

This report focuses only on the use of ion current measurements in label-free data sets, specifically those calculated from extracted MS1 ion chromatograms (XICs). In general terms, raw intensity values (i.e. ion counts in arbitrary units) cannot be used for quantitation in the absence of cognate internal standards because individual ion intensities depend on a response factor, related to the chemical properties of the molecule. Intensities are instead almost always reserved for relative determinations. Furthermore, retention times are sometimes used to align the chromatograms between runs to ensure higher confidence prior to calculating relative intensities. This step is crucial for methods without corresponding identity information, particularly for experiments performed on low-resolution instruments. To support a label-free workflow, peptide identifications are commonly made from tandem mass spectra (MS/MS) acquired along with direct electrospray signal (MS1). Or, in alternative workflows seeking deeper coverage, interesting MS1 components can be targeted for identification by MS/MS in follow-up runs (4).

“Rolling up” the peptide ion information to the peptide and protein level is also done in different ways in different labs. In most cases, “peptide intensity” or “peptide abundance” is the summed or averaged value of the identified peptide ions. How the peptide information is transferred to the protein level differs between methods but typically involves summing one or more peptide intensities, following parsimony analysis. One such solution is the “Top 3” method developed by Silva and co-workers (5).

Because peptides in label-free methods lack labeled analogs and require separate runs, they are more susceptible to analytical noise and systematic variations. Sources of these obscuring variations can come from many sources, including sample preparation, operator error, chromatography, electrospray, and even from the data analysis itself. While analytical noise (e.g. chemical interference) is difficult to selectively reject, systematic biases can often be removed by statistical preprocessing. The goal of these procedures is to normalize the data prior to calculations of relative abundance. Failure to resolve these issues is the common origin of batch effects, previously described for genomics data, which can severely limit meaningful interpretation of experimental data (6, 7). These effects have also been recently explored in proteomics data (8). Methods used to normalize proteomics data have been largely borrowed from the microarray community, or are based on a simple mean/median intensity ratio correction. Methods applied on microarray and/or gene chip and used on proteomics data include scaling, linear regression, nonlinear regression, and quantile normalizations (9). Moreover, work has also been done to improve normalization by subselecting a peptide basis (10). Other work suggests that linear regression, followed by run order analysis, works better than other methods tested (11). Key to this last method is the incorporation of a variable other than intensity during normalization. It is also important to note that little work has been done towards identifying the underlying sources of these variations in proteomics data. Although cause-and-effect is often difficult to determine, understanding these relationships will undoubtedly help remove and avoid the major underlying sources of systematic variations.

In this report, we have attempted to combine our efforts focused on understanding variability with the work initiated by others for normalizing ion current-based label-free proteomics data. We have identified several major variables commonly affecting peptide ion intensities both within and between labs. As test data, we used a subset of raw data acquired during Phase I of the National Cancer Institute's (NCI) Clinical Proteomics Technology Assessment for Cancer (CPTAC) program. With these data, we were able to develop a statistical model to rank bias variables and normalize the intensities using stepwise, semiparametric regression. The data analysis methods have been implemented within the National Institute of Standards and Technology (NIST) MS quality control (MSQC) pipeline. Finally, we have developed R code for removing systematic biases and have tested it using a reference standard spiked into a complex biological matrix (i.e. yeast cell lysate).
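A minimal sketch of bias-variable normalization in the spirit described here: regress log intensities on candidate bias variables (retention time, charge state) and keep the residuals, re-centred so the median log intensity is preserved. A plain linear fit stands in for the stepwise semiparametric regression; the variables and inputs are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def normalize_log_intensities(log_int, retention_time, charge):
    """Remove systematic trends against bias variables, then re-centre.
    log_int, retention_time, charge: 1-d arrays over peptide ions."""
    X = sm.add_constant(np.column_stack([retention_time, charge]))
    resid = sm.OLS(log_int, X).fit().resid
    return resid + np.median(log_int) - np.median(resid)
```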

19.
Net Primary Production (NPP) is an important component of the carbon cycle and, among the pools and fluxes that make up the cycle, it is one of the steps most accessible to field measurement. While easier than some other steps to measure, direct measurement of NPP is tedious and not practical for large areas, so models are generally used to study the carbon cycle at a global scale. Nevertheless, these models require field measurements of NPP for parameterization, calibration and validation. Most NPP data are for relatively small field plots that cannot represent the 0.5° × 0.5° grid cells that are commonly used in global scale models. Furthermore, technical difficulties generally restrict NPP measurements to aboveground parts and sometimes do not even include all components of aboveground NPP. Thus direct inter-comparison between field data obtained in different studies, or comparison of these results with coarse resolution model outputs, can be misleading. We summarize and present a series of methods that were used by the original authors to estimate NPP, and describe how we prepared a consistent data set of NPP for 0.5° grid cells for a range of biomes from these studies. The methods used for estimation of NPP include: (i) aggregation of fine-scale (plot or stand-level) vegetation inventory data to larger grid cells, (ii) mapping of grid cells and area weighting of field NPP observations in each mapped class, (iii) direct correlation of extensive data sets of ground measurements with remotely sensed spectral vegetation indices, (iv) local modeling of NPP using key independent variables, for which maps are available at the scale of the grid cell, and (v) regression analysis to link productivity with controlling environmental variables. For the few grid cells whose NPP was obtained for multiple years, temporal analysis was conducted. The grid cells are grouped to the biome level and are compared with existing compilations of field NPP and the results of the Miami potential NPP model. Mean NPP was similar to the well-known compilation of Whittaker and Likens, except for temperate evergreen needle-leaved forest, woodland, and shrubland. The grid cell datasets are a contribution to the International Geosphere-Biosphere Programme (IGBP) Data and Information System (DIS) Global Primary Production Data Initiative (GPPDI). The full dataset currently contains 3654 cells (including replicate measurements) developed from 15 studies representing NPP in croplands, sparse vegetation, shrublands, grasslands, and forests worldwide. An edited subset consists of 2335 cells in which outliers were removed and all replicate measurements were averaged for each unique geographical location. Most of the data incorporated into GPPDI were wholly or partly developed by participants in the GPPDI, in addition to the present authors. These studies are gathered together here to provide a consistent account of the grid cell component of GPPDI and an analysis of the entire data set. The datasets have been deposited in an IGBP-DIS GPPDI database (http://daacl.esd.ornl.gov/npq/GPPDI/Combined_GPPDI_des.html).
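A small sketch of method (ii) above, area-weighted aggregation of field NPP to a grid cell; the class fractions and NPP values are invented for illustration.

```python
def grid_cell_npp(mapped_classes):
    """mapped_classes: (area_fraction, mean_field_npp) per vegetation
    class within one 0.5-degree cell; NPP in g C m-2 yr-1.
    Returns the area-weighted cell mean."""
    total_area = sum(frac for frac, _ in mapped_classes)
    return sum(frac * npp for frac, npp in mapped_classes) / total_area

# Illustrative cell: 60% forest, 30% grassland, 10% cropland.
print(grid_cell_npp([(0.6, 550.0), (0.3, 300.0), (0.1, 450.0)]))  # 465.0
```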

20.
This paper considers the use of hybrid models to represent the dynamic behaviour of biotechnological processes. Each hybrid model consists of a set of nonlinear differential equations and a neural model. The set of differential equations describes as much as possible of the phenomenology of the process, whereas the neural model predicts some key parameters that are an essential part of the phenomenological model. The neural model is obtained indirectly, that is, by using the prediction errors of one or more state variables to adjust its weights instead of successive presentations of input-output data for the neural network. This approach makes it possible to use actual measurements to derive a suitable neural model that not only represents the variation of some key parameters but is also able to partly include dynamic behaviour unaccounted for by the phenomenological model. The approach is described in detail using three test cases: (1) the fermentation of glucose to gluconic acid by the micro-organism Pseudomonas ovalis, (2) the growth of filamentous fungi in a solid state fermenter, and (3) the propagation of filamentous fungi growing on a 2-D solid substrate. Results for the three applications clearly demonstrate that using a hybrid model is a viable alternative for modelling complex biotechnological processes.
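A minimal sketch of the hybrid idea: mass-balance differential equations supply the structure, and a tiny neural model fills in the unknown kinetic term (here a specific growth rate). The model form, yield coefficient, and training target are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def nn_mu(S, w):
    """Neural model: substrate concentration -> specific growth rate.
    One tanh hidden unit; w holds its four weights. In the hybrid
    scheme, w is fitted from the error in predicted state variables,
    not from input-output pairs for the network itself."""
    return max(w[2] * np.tanh(w[0] * S + w[1]) + w[3], 0.0)

def simulate(w, X0=0.5, S0=20.0, Yxs=0.5, dt=0.1, steps=400):
    """Phenomenological part: Euler integration of biomass X and
    substrate S balances, with mu supplied by the neural model."""
    X, S, traj = X0, S0, []
    for _ in range(steps):
        dX = nn_mu(S, w) * X
        X += dt * dX
        S = max(S - dt * dX / Yxs, 0.0)
        traj.append(X)
    return np.array(traj)

# Training would minimise sum((X_measured - simulate(w))**2) over w,
# e.g. with scipy.optimize.minimize, given measured biomass X_measured.
```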
