Similar Documents
20 similar documents found
1.
Two algorithms for decomposing composite protein tryptophan fluorescence spectra were developed, based on the possibility that the shape of an elementary spectral component can be accurately described by a uniparametric log-normal function. The need for several mathematically different algorithms is dictated by the fact that decomposition of spectra into widely overlapping smooth components is a typically ill-posed problem. Only the coincidence of components obtained with different algorithms can guarantee the correctness and reliability of the results. In this paper we propose the following decomposition algorithms: (1) the SImple fitting procedure using the root-Mean-Square criterion (SIMS), operating with either individual emission spectra or sets of spectra measured at various quencher concentrations; and (2) the pseudo-graphic analytical procedure using a PHase plane in coordinates of normalized emission intensities at various wavelengths (wavenumbers) and REsolving sets of spectra measured at various Quencher concentrations (PHREQ). Actual experimental noise precludes decomposition of protein spectra into more than three components.
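A minimal Python sketch of the SIMS idea: root-mean-square fitting of a sum of log-normal bands to a noisy composite spectrum. The generic band shape, the two-component model, and all numbers below are illustrative assumptions; the authors' uniparametric log-normal ties bandwidth and asymmetry to the position of the maximum, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_band(nu, amp, onset, scale, sigma):
    """Generic log-normal band: zero below `onset`, peaking at onset + scale."""
    x = (nu - onset) / scale
    band = np.zeros_like(nu)
    pos = x > 0
    band[pos] = amp * np.exp(-np.log(x[pos]) ** 2 / (2.0 * sigma ** 2))
    return band

def two_components(nu, a1, o1, s1, w1, a2, o2, s2, w2):
    return lognormal_band(nu, a1, o1, s1, w1) + lognormal_band(nu, a2, o2, s2, w2)

nu = np.linspace(25000.0, 34000.0, 400)          # emission wavenumbers, cm^-1
rng = np.random.default_rng(0)
truth = (1.0, 26000.0, 3300.0, 0.35, 0.6, 27500.0, 3600.0, 0.30)
spectrum = two_components(nu, *truth) + rng.normal(0.0, 0.01, nu.size)

# Root-mean-square fit (the "MS" criterion in SIMS) from a rough initial guess.
p0 = (0.8, 26200.0, 3000.0, 0.3, 0.8, 27800.0, 3200.0, 0.3)
popt, _ = curve_fit(two_components, nu, spectrum, p0=p0, maxfev=20000)
rms = np.sqrt(np.mean((two_components(nu, *popt) - spectrum) ** 2))
print("fitted maxima near %.0f and %.0f cm^-1, RMS %.4f"
      % (popt[1] + popt[2], popt[5] + popt[6], rms))
```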

2.
We illustrate through examples how monotonicity may help in the performance evaluation of networks. We consider two different applications of stochastic monotonicity in performance evaluation. In the first, we assume that a Markov chain of the model depends on a parameter that can be estimated only up to a certain level, so that we have only an interval containing the exact value of the parameter. Instead of taking an approximate value for the unknown parameter, we show how the monotonicity properties of the Markov chain can be used to take the error bound from the measurements into account. In the second application, we consider a well-known approximation method: decomposition into Markovian submodels. In such an approach, models of complex networks or other systems are decomposed into Markovian submodels whose results are then used as parameters for the next submodel in an iterative computation. One obtains a fixed-point system which is solved numerically. In general, we have neither an existence proof for the solution of the fixed-point system nor a convergence proof for the iterative algorithm. Here we show how stochastic monotonicity can be used to answer these questions and to provide, to some extent, the theoretical foundations for this approach. Furthermore, monotonicity properties can also help to derive more efficient algorithms for solving fixed-point systems.
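A toy sketch of the first application, assuming a birth-death (M/M/1/K) chain, whose stationary queue length is stochastically increasing in the arrival rate: if measurement gives only an interval for the rate, evaluating any increasing reward at the interval's endpoints brackets its true value.

```python
import numpy as np

def stationary_mm1k(lam, mu, K):
    """Stationary distribution of an M/M/1/K queue (a monotone birth-death chain)."""
    pi = (lam / mu) ** np.arange(K + 1)
    return pi / pi.sum()

def mean_queue_length(lam, mu, K):
    return float(np.arange(K + 1) @ stationary_mm1k(lam, mu, K))

mu, K = 1.0, 20
lam_lo, lam_hi = 0.55, 0.65        # measurement only bounds the arrival rate

# Stochastic monotonicity in lam turns the parameter interval into bounds
# on any increasing reward, e.g. the mean queue length.
print("mean queue length in [%.3f, %.3f]"
      % (mean_queue_length(lam_lo, mu, K), mean_queue_length(lam_hi, mu, K)))
```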

3.
In a microarray experiment, one experimental design is used to obtain expression measures for all genes. One popular analysis method involves fitting the same linear mixed model for each gene, obtaining gene-specific p-values for tests of interest involving fixed effects, and then choosing a threshold for significance that is intended to control the false discovery rate (FDR) at a desired level. When one or more random factors have zero variance components for some genes, the standard practice of fitting the same full linear mixed model for all genes can result in failure to control FDR. We propose a new method that combines results from the fits of full and selected linear mixed models to identify differentially expressed genes and provide FDR control at target levels when the true underlying random-effects structure varies across genes.
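The model-combination step is specific to the paper, but the FDR-thresholding step it feeds is the standard Benjamini-Hochberg procedure; a self-contained sketch on simulated p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of hypotheses rejected at FDR level q (BH step-up)."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])      # largest index under the BH line
        reject[order[:k + 1]] = True
    return reject

rng = np.random.default_rng(1)
# 900 null genes (uniform p-values) plus 100 true signals (small p-values).
pvals = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 10.0, size=100)])
print("genes declared significant:", int(benjamini_hochberg(pvals).sum()))
```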

4.
The use of ROC curves in evaluating a continuous or ordinal biomarker for the discrimination of two populations is commonplace. However, in many settings, marker measurements above or below a certain value cannot be obtained. In this paper, we study the construction of a smooth ROC curve (or surface in the case of three populations) when there is a lower or upper limit of detection. We propose the use of spline models that incorporate monotonicity constraints for the cumulative hazard function of the marker distribution. The proposed technique is computationally stable, and simulation results showed satisfactory performance. Other observed covariates can also be accommodated by this spline-based approach.
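The spline estimator itself is beyond a short sketch, but the censoring mechanism is easy to demonstrate with the empirical ROC: values below a lower limit of detection (LOD) are left-censored to a common value, and a Mann-Whitney AUC handles the resulting ties with half credit. All data below are simulated.

```python
import numpy as np

def auc_mann_whitney(controls, cases):
    """Empirical AUC: P(case > control) + 0.5 * P(tie)."""
    c = np.asarray(controls)[:, None]
    d = np.asarray(cases)[None, :]
    return float(((d > c).sum() + 0.5 * (d == c).sum()) / (c.size * d.size))

rng = np.random.default_rng(2)
controls = rng.normal(0.0, 1.0, 300)
cases = rng.normal(1.0, 1.0, 300)

lod = -0.5                                     # lower limit of detection
print("AUC, fully observed :", round(auc_mann_whitney(controls, cases), 3))
print("AUC, censored at LOD:",
      round(auc_mann_whitney(np.maximum(controls, lod), np.maximum(cases, lod)), 3))
```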

5.
We previously reported that when the stress relaxation response of urinary bladder wall (UBW) tissue was analyzed using a single continuous reduced relaxation function (RRF), we observed non-uniformly distributed, time-dependent residuals (Ann Biomed Eng 32(10):1409-1419, 2004). We concluded that a single relaxation spectrum was inadequate and that a new viscoelastic model for the bladder wall was necessary. In the present study, we report a new approach composed of independent RRFs for the smooth muscle and extracellular matrix (ECM) components, connected through a stress-dependent recruitment function. To determine the RRF for the ECM component, biaxial stress relaxation experiments were first performed on the decellularized extracellular matrix network of bladders obtained from normal and spinal cord injured rats. While smooth muscle was assumed to follow a single-spectrum RRF, modeling the UBW ECM required a dual-Gaussian spectrum. Experimental results revealed that the ECM stress relaxation response was insensitive to the initial stress level; thus, the average ECM RRF parameters were determined by fitting the average stress relaxation data. The resulting stress relaxation behavior of whole bladder tissue was modeled by combining the ECM RRF with the RRF for the smooth muscle component using an exponential recruitment function representing the recruitment of collagen fibers at higher stress levels. In summary, the present study demonstrated, for the first time, that the stress relaxation response of bladder tissue can be better modeled when divided into the contributions of the extracellular matrix and smooth muscle components. This modeling approach is suitable for predicting the mechanical behavior of the urinary bladder and other organs that exhibit rapid tissue remodeling (i.e., smooth muscle hypertrophy and altered ECM synthesis) under various pathological conditions.
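A sketch of the kind of single-continuous-spectrum RRF fit the study starts from, using Fung's classical box-spectrum form G(t) = [1 + c(E1(t/tau2) - E1(t/tau1))] / [1 + c ln(tau2/tau1)]; the dual-Gaussian ECM spectrum and the stress-dependent recruitment coupling are not reproduced, and the data are simulated.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1                 # exponential integral E1

def fung_rrf(t, c, tau1, tau2):
    """Fung's reduced relaxation function with a continuous box spectrum."""
    return (1.0 + c * (exp1(t / tau2) - exp1(t / tau1))) \
           / (1.0 + c * np.log(tau2 / tau1))

t = np.logspace(-2, 3, 60)                     # seconds; avoid t = 0 (E1 diverges)
rng = np.random.default_rng(3)
g_obs = fung_rrf(t, 0.1, 0.05, 100.0) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(fung_rrf, t, g_obs, p0=(0.05, 0.01, 50.0),
                    bounds=([1e-4, 1e-4, 1.0], [10.0, 1.0, 1e4]))
print("c = %.3f, tau1 = %.3f s, tau2 = %.1f s" % tuple(popt))
```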

6.
Schweiger O, Klotz S, Durka W, Kühn I. Oecologia. 2008;157(3):485-495.
Traditional measures of biodiversity, such as species richness, usually treat species as being equal. As this is obviously not the case, measuring diversity in terms of features accumulated over evolutionary history provides additional value to theoretical and applied ecology. Several phylogenetic diversity indices exist, but their behaviour has not yet been tested in a comparative framework. We provide a test of ten commonly used phylogenetic diversity indices based on 40 simulated phylogenies of varying topology. We restrict our analysis to a topologically fully resolved tree without information on branch lengths and to species lists with presence-absence data. A total of 38,000 artificial communities varying in species richness, covering 5-95% of the phylogenies, were created by random resampling. The indices were evaluated on their ability to meet a priori defined requirements. No index meets all requirements, but three indices turned out to be more suitable than others under particular conditions. Average taxonomic distinctness (AvTD) and intensive quadratic entropy (J) are calculated by averaging and are therefore unbiased by species richness while reflecting phylogeny per se well. However, averaging leads to violation of set monotonicity, which requires that species extinction cannot increase the index. Total taxonomic distinctness (TTD) sums distinctiveness values for particular species across the community. It is therefore strongly linked to species richness and reflects phylogeny per se weakly, but it satisfies set monotonicity. We suggest that AvTD and J are best applied to studies that compare spatially or temporally rather independent communities that potentially vary strongly in their phylogenetic composition, i.e. where set monotonicity is a more negligible issue but independence of species richness is desired. In contrast, we suggest that TTD be used in studies that compare rather interdependent communities where changes occur more gradually, by species extinction or introduction. Calculating AvTD or TTD, depending on the research question, in addition to species richness is strongly recommended.
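A sketch of the two recommended indices from a pairwise distance matrix, using the Clarke-Warwick definitions (AvTD = mean pairwise distance among the species present; TTD = per-species mean distinctness summed over the community). The toy matrix is an assumption chosen to make the set-monotonicity contrast visible.

```python
import numpy as np
from itertools import combinations

def avtd(D, species):
    """Average taxonomic distinctness: mean pairwise distance among species present."""
    return float(np.mean([D[i, j] for i, j in combinations(species, 2)]))

def ttd(D, species):
    """Total taxonomic distinctness: sum over species of mean distance to the others."""
    return float(sum(np.mean([D[i, j] for j in species if j != i]) for i in species))

# Toy distances: species 0 and 1 are close relatives; 2 and 3 sit far away.
D = np.array([[0.0, 1.0, 4.0, 4.0],
              [1.0, 0.0, 4.0, 4.0],
              [4.0, 4.0, 0.0, 2.0],
              [4.0, 4.0, 2.0, 0.0]])

full, reduced = [0, 1, 2, 3], [0, 2, 3]          # species 1 goes extinct
print("AvTD %.2f -> %.2f: extinction raised it (set monotonicity violated)"
      % (avtd(D, full), avtd(D, reduced)))
print("TTD  %.2f -> %.2f: extinction lowered it" % (ttd(D, full), ttd(D, reduced)))
```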

7.
Analysis of fluorescence decay data for probes incorporated into model or biological membranes invariably requires fitting to more than one decay time even though the same probe exhibits nearly single-exponential decay in solution. The parinaric acids (cis and trans) are examples of this. Data are presented for both parinaric acid isomers in dimyristoylphosphatidylcholine membranes collected to higher precision than normally encountered, and the fluorescence decays are shown to be best described by a smooth distribution of decay times rather than by a few discrete lifetimes. The temperature dependence of the fluorescence decay reveals a clear shift in the distribution to longer lifetimes associated with the membrane phase transition at 23.5 degrees C. The physical significance is that fluorescence lifetime measurements appear to reflect a physical process with a distribution of lifetimes rather than several distinct physical processes.
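A generic sketch of a distribution-of-lifetimes analysis (not the authors' method): expand the decay on a fixed lifetime grid and solve a lightly regularized non-negative least-squares problem.

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.05, 30.0, 300)               # time, ns
taus = np.linspace(0.5, 15.0, 60)              # lifetime grid, ns
A = np.exp(-t[:, None] / taus[None, :])        # multi-exponential basis

# Simulate a decay generated by a smooth (Gaussian) lifetime distribution.
rng = np.random.default_rng(4)
true_dist = np.exp(-0.5 * ((taus - 5.0) / 1.2) ** 2)
decay = A @ true_dist + rng.normal(0.0, 0.02, t.size)

# Plain NNLS on exponentials is badly conditioned; augmenting the system with
# lam * I (Tikhonov) stabilizes the recovered amplitude distribution.
lam = 0.05
A_aug = np.vstack([A, lam * np.eye(taus.size)])
y_aug = np.concatenate([decay, np.zeros(taus.size)])
dist, _ = nnls(A_aug, y_aug)
print("recovered mean lifetime: %.2f ns" % (taus @ dist / dist.sum()))
```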

8.
A method for flexible fitting of molecular models into three-dimensional electron microscopy (3D-EM) reconstructions at a resolution range of 8-12 Å is proposed. The approach uses the evolutionarily related structural variability existing among the protein domains of a given superfamily, according to structural databases such as CATH. A structural alignment of domains belonging to the superfamily, followed by a principal components analysis, is performed, and the first three principal components of the decomposition are explored. Using rigid-body transformations for the secondary structure elements (SSEs) plus the cyclic coordinate descent algorithm to close the loops, stereochemically correct models are built for the structure to fit. All of the models are fitted into the 3D-EM map, and the best one is selected based on cross-correlation measures. This work applies the method to both simulated and experimental data and shows that flexible fitting produces better results than rigid-body fitting.
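The conformational analysis at the heart of the method is ordinary PCA on superposed coordinates; a minimal sketch with random stand-in coordinates (the CATH alignment, SSE rigid-body moves, loop closure, and map correlation are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)
n_structures, n_atoms = 25, 120

# Stand-in for superposed C-alpha coordinates of one superfamily:
# a mean structure deformed along two latent modes, plus noise.
mean_xyz = rng.normal(0.0, 10.0, 3 * n_atoms)
modes = rng.normal(0.0, 1.0, (2, 3 * n_atoms))
scores = rng.normal(0.0, 3.0, (n_structures, 2))
X = mean_xyz + scores @ modes + rng.normal(0.0, 0.1, (n_structures, 3 * n_atoms))

# PCA via SVD of the centered coordinate matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)
print("variance explained by the first three PCs:", np.round(explained[:3], 3))

# A candidate conformation: displace the mean structure along the first PC.
candidate = (X.mean(axis=0) + 2.0 * S[0] / np.sqrt(n_structures) * Vt[0])
candidate = candidate.reshape(n_atoms, 3)
```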

9.
Stafford [Biophys. J. 17 (1996) MP452] has shown that it is possible, using the analytical ultracentrifuge in sedimentation velocity mode, to calculate the molecular weights of proteins with a precision of approximately 5% by fitting Gaussian distributions to g(s*) profiles, as long as the partial specific volume and the radial position of the meniscus are known. This makes possible the analysis of systems containing several components by fitting multiple distributions to the total g(s*) profile. We have found the Stafford relationship to hold for a range of protein solutes, particularly good agreement being found when the g(s*) profiles are computed from Schlieren (dc/dr vs. r) data using the Bridgman equation [J. Am. Chem. Soc. 64 (1942) 2349]. On this basis, we have developed a new approach to the analysis of systems where two or more distinguishable conformations of a single species are present, either in the same sample cell or in different cells in the same rotor. In the former case, this allows us to analyse a given solution of pure protein (i.e. monodisperse with respect to M) to reveal the presence in that solution of two or more conformers under identical solvent conditions. In the latter case, we can detect with high sensitivity any conformational change occurring in the transition from one set of solvent conditions to another. Alternatively, in this case, we can analyse slightly different proteins (e.g. deletion mutants) for conformational changes under identical solvent conditions. Examples of these procedures using well-defined protein systems are given.
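A sketch of the multiple-distribution step: fitting two Gaussians to a simulated composite g(s*) profile to recover the s values and relative amounts of two species or conformers. The downstream molecular-weight calculation via the Stafford relation is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(s, area, s0, width):
    return area * np.exp(-0.5 * ((s - s0) / width) ** 2) / (width * np.sqrt(2 * np.pi))

def two_gaussians(s, a1, s1, w1, a2, s2, w2):
    return gaussian(s, a1, s1, w1) + gaussian(s, a2, s2, w2)

s = np.linspace(1.0, 12.0, 250)                # s*, Svedberg units
rng = np.random.default_rng(6)
profile = two_gaussians(s, 0.7, 4.3, 0.6, 0.3, 6.5, 0.8)
profile += rng.normal(0.0, 0.002, s.size)

popt, _ = curve_fit(two_gaussians, s, profile, p0=(0.5, 4.0, 0.5, 0.5, 7.0, 0.5))
a1, s1, w1, a2, s2, w2 = popt
print("component 1: s = %.2f S, fraction %.2f" % (s1, a1 / (a1 + a2)))
print("component 2: s = %.2f S, fraction %.2f" % (s2, a2 / (a1 + a2)))
```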

10.
Multiple-component linear least-squares methods have been proposed for the detection of periodic components in nonsinusoidal longitudinal time series. However, a proper test for comparing parameters obtained by this method for two or more time series is not yet available. Accordingly, we propose two methods, one parametric and one nonparametric, to compare parameters from rhythmometric models with multiple components. The parametric method is based on techniques commonly employed in linear regression analysis; the comparison of parameters among two or more time series is accomplished by the use of so-called dummy variables. The nonparametric method is based on bootstrap techniques. This approach tests whether the difference in any given parameter, obtained by fitting a model with the same periods to two different longitudinal time series, differs from zero: the method calculates a confidence interval for the difference in the tested parameter, and if this interval does not contain zero, it can be concluded with high probability that the parameters obtained from the two time series differ. An estimate of the p-value for the corresponding test can also be calculated. By similar bootstrap techniques, confidence intervals can also be obtained for any parameter derived from the multiple-component fit of several periods to nonsinusoidal longitudinal time series, including the orthophase (peak time), bathyphase (trough time), and global amplitude (difference between the maximum and the minimum) of the fitted model waveform. These methods represent a valuable tool for the comparison of rhythm parameters obtained by multiple-component analysis, and they render this approach generally applicable for waveform representation and detection of periodicities in nonsinusoidal, sparse, and noisy longitudinal time series sampled with either equidistant or unequidistant observations.
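A sketch of the nonparametric method for one derived parameter: fit the same multiple-component model (linear in cosine/sine terms, periods assumed known) to two series, then case-bootstrap each series for a percentile confidence interval on the difference in global amplitude (max - min of the fitted waveform). Data and periods are invented.

```python
import numpy as np

PERIODS = (24.0, 12.0)                         # hours: fundamental plus one harmonic

def design(t):
    cols = [np.ones_like(t)]
    for p in PERIODS:
        w = 2.0 * np.pi / p
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

def global_amplitude(t, y):
    beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    grid = np.linspace(0.0, 24.0, 1441)
    wave = design(grid) @ beta
    return wave.max() - wave.min()

rng = np.random.default_rng(7)
t = np.arange(0.0, 72.0, 2.0)
y1 = 10 + 3 * np.cos(2 * np.pi * t / 24) + np.cos(2 * np.pi * t / 12) \
     + rng.normal(0, 1, t.size)
y2 = 10 + 2 * np.cos(2 * np.pi * t / 24 - 0.5) + rng.normal(0, 1, t.size)

diffs = []
for _ in range(2000):                          # case bootstrap of each series
    i1 = rng.integers(0, t.size, t.size)
    i2 = rng.integers(0, t.size, t.size)
    diffs.append(global_amplitude(t[i1], y1[i1]) - global_amplitude(t[i2], y2[i2]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print("95%% bootstrap CI for the amplitude difference: (%.2f, %.2f)" % (lo, hi))
```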

11.
Dunson DB, Neelon B. Biometrics. 2003;59(2):286-295.
In biomedical studies, there is often interest in assessing the association between one or more ordered categorical predictors and an outcome variable, adjusting for covariates. For a k-level predictor, one typically uses either a k-1 degree of freedom (df) test or a single df trend test, which requires scores for the different levels of the predictor. In the absence of knowledge of a parametric form for the response function, one can incorporate monotonicity constraints to improve the efficiency of tests of association. This article proposes a general Bayesian approach for inference on order-constrained parameters in generalized linear models. Instead of choosing a prior distribution with support on the constrained space, which can result in major computational difficulties, we propose to map draws from an unconstrained posterior density using an isotonic regression transformation. This approach allows flat regions over which increases in the level of a predictor have no effect. Bayes factors for assessing ordered trends can be computed based on the output from a Gibbs sampling algorithm. Results from a simulation study are presented and the approach is applied to data from a time-to-pregnancy study.
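The key computational trick — mapping unconstrained posterior draws through an isotonic regression — in minimal form, using the PAVA implementation in scikit-learn. Dunson and Neelon weight the projection by posterior precision; unit weights are used here for brevity, and the "posterior draws" are simulated stand-ins.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(8)
k, n_draws = 5, 4000

# Stand-in for unconstrained posterior draws of k ordered-predictor effects
# (e.g. Gibbs output for a GLM); the true effects are non-decreasing, with a
# flat region between levels 2 and 3.
true = np.array([0.0, 0.4, 0.4, 0.9, 1.3])
draws = true + rng.normal(0.0, 0.3, (n_draws, k))

iso = IsotonicRegression(increasing=True)
levels = np.arange(k)
constrained = np.array([iso.fit_transform(levels, d) for d in draws])

print("unconstrained posterior means:", draws.mean(axis=0).round(2))
print("constrained posterior means :", constrained.mean(axis=0).round(2))
```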

12.
Microbial polysaccharides secreted by various microbes into their extracellular environment are known as exopolysaccharides (EPS); they can be secreted in either soluble or insoluble form. Lactobacillus sp. is one of the organisms found to produce EPS. Exopolysaccharides have various applications in different fields, such as drug delivery, antimicrobial activity, and surgical implants. Medium composition is one of the major factors in EPS production from Lactobacillus sp., and optimization of the medium components can enhance EPS synthesis. In the present work, EPS production with different medium compositions was optimized by response surface methodology (RSM), and the results were then tested for fit with artificial neural networks (ANN). Three ANN training algorithms were compared to identify the highest EPS yield. The highest EPS yield in RSM was achieved with a medium composed of (g/L) dextrose 15, sodium dihydrogen phosphate 3, potassium dihydrogen phosphate 2.5, triammonium citrate 1.5, and magnesium sulfate 0.25. The outputs of 32 RSM experimental runs were tested for fit with ANN using three algorithms, viz. the Levenberg-Marquardt algorithm (LMA), the Bayesian regularization algorithm (BRA), and the scaled conjugate gradient algorithm (SCGA); among them, LMA was found to fit the experiments best, compared with SCGA and BRA.
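A sketch of the RSM-then-ANN comparison on invented data (the 32-run dataset is not available here). scikit-learn has no Levenberg-Marquardt or Bayesian-regularization trainers — those correspond to MATLAB's trainlm/trainbr/trainscg — so an MLP trained with L-BFGS serves as a stand-in.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(9)
X = rng.uniform(-1, 1, (32, 5))                # 32 coded runs, 5 medium factors
eps_yield = (2.0 + X @ np.array([0.5, 0.2, 0.1, 0.3, 0.1])
             - 0.6 * X[:, 0] ** 2 + 0.3 * X[:, 0] * X[:, 3]
             + rng.normal(0.0, 0.05, 32))

# Second-order response-surface (RSM) model.
poly = PolynomialFeatures(degree=2)
Xq = poly.fit_transform(X)
rsm_pred = LinearRegression().fit(Xq, eps_yield).predict(Xq)

# ANN stand-in for the LMA/BRA/SCGA comparison.
ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000,
                   random_state=0).fit(X, eps_yield)

print("RSM R^2: %.3f | ANN R^2: %.3f"
      % (r2_score(eps_yield, rsm_pred), r2_score(eps_yield, ann.predict(X))))
```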

13.
The emerging view of smooth/nonmuscle myosin regulation suggests that the attainment of the completely inhibited state requires numerous weak interactions between components of the two heads and the myosin rod. To further examine the nature of the structural requirements for regulation, we engineered smooth muscle heavy meromyosin molecules that contained one complete head and truncations of the second head. These truncations eliminated the motor domain but retained two, one, or no light chains. All constructs contained 37 heptads of rod sequence. None of the truncated constructs displayed complete regulation of both ATPase and motility, reinforcing the idea that interactions between motor domains are necessary for complete regulation. Surprisingly, the rate of ADP release was slowed by regulatory light chain dephosphorylation of the truncated construct that contained all four light chains and one motor domain. These data suggest that there is a second step (ADP release) in the smooth muscle myosin-actin-activated ATPase cycle that is modulated by regulatory light chain phosphorylation. This may be part of the mechanism underlying "latch" in smooth muscle.

14.
An algorithm for decomposing protein tryptophan spectra into components was developed. The spectral shape of the components is described by a uniparametric log-normal function. Increased reliability and accuracy in resolving widely overlapping smooth spectral components (a typically ill-posed inverse problem) were achieved using several regularizing factors: (i) the set of experimental spectra is measured at several quencher concentrations; (ii) the functional being minimized includes, along with the root-mean-square residuals of the intensities, a term depending on obedience to the Stern-Volmer law; (iii) extra information is used: the number of experimental values greatly exceeds the number of parameters to be estimated. The minimum of the functional is found by successively setting all possible combinations of component spectral maximum positions, which avoids getting stuck in local minima of the noisy functional. Real experimental noise restricts decomposition to no more than three components. The decomposition error does not exceed the experimental one. The operation of the algorithm is illustrated by resolving the tryptophan fluorescence spectra of papain into one, two, and three components.
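A sketch of the global-search strategy just described: enumerate all combinations of candidate spectral-maximum positions, solve for non-negative component amplitudes by linear least squares at each combination, and keep the combination with the lowest root-mean-square residual. A fixed-width Gaussian band stands in for the uniparametric log-normal component.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

def band(nu, nu_max, width=1500.0):
    """Fixed-width Gaussian stand-in for the uniparametric log-normal component."""
    return np.exp(-0.5 * ((nu - nu_max) / width) ** 2)

nu = np.linspace(26000.0, 34000.0, 300)
rng = np.random.default_rng(10)
spectrum = band(nu, 29000.0) + 0.5 * band(nu, 31000.0)
spectrum += rng.normal(0.0, 0.01, nu.size)

candidates = np.arange(27000.0, 33001.0, 250.0)   # grid of admissible maxima
best = None
for maxima in combinations(candidates, 2):        # every two-component hypothesis
    A = np.column_stack([band(nu, m) for m in maxima])
    amps, _ = nnls(A, spectrum)                   # amplitudes enter linearly
    rms = np.sqrt(np.mean((A @ amps - spectrum) ** 2))
    if best is None or rms < best[0]:
        best = (rms, maxima, amps)

print("best maxima: %s cm^-1, RMS %.4f" % (str(best[1]), best[0]))
```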

15.
We welcome Dr Thorpe's interesting discussion (Thorpe, 1988), and we would like to take this opportunity to clarify some points.
Both MGPCA (multiple group principal component analysis) and CPCA (common principal component analysis) serve essentially the same purpose, namely estimation of principal components simultaneously in several groups, based on the assumption of equality of principal component directions across groups, while eigenvalues may differ between groups. However, CPCA has the distinct advantage that this assumption can actually be tested, using the CPC test statistic. In analyses involving more than two variables, it is usually difficult to decide, without a formal test, whether or not the assumption of common directions of principal components is reasonable.
There is also a conceptual difficulty with MGPCA. In statistical terms, both methods assume that:
(a) a certain set of parameters (namely those determining the eigenvectors) are common to all groups
(b) there are sets of parameters (namely p eigenvalues per group) which are specific to each group.
CPCA sets up a model that reflects this structure and estimates the parameters accordingly. MGPCA, on the other hand, ignores part (b), at least temporarily, by pooling the variance-covariance matrices and extracting eigenvectors from the single pooled matrix. This may lead to reasonable results, but there is no guarantee that it will indeed do so. The reader may find a more familiar analog in the fitting of regression lines when data are in groups. If it is assumed that all regression lines are parallel, one should set up an appropriate model based on a single slope parameter common to all groups, and groupwise intercepts. One should then estimate the parameters of this model, and not simply apply a technique which is appropriate in the one-group case only; a numerical sketch of this analogy follows.
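The regression analogy in numbers, with invented grouped data: a common-slope model with groupwise intercepts recovers the slope, while naively pooling all the data into one fit distorts it.

```python
import numpy as np

rng = np.random.default_rng(11)
slope, n = 1.5, 40
groups = np.repeat([0, 1, 2], n)
x = np.concatenate([rng.uniform(3 * g, 3 * g + 4, n) for g in range(3)])
y = (10.0 - 5.0 * groups) + slope * x + rng.normal(0.0, 1.0, 3 * n)

# Proper model: one common slope plus group-specific intercepts (dummies).
X = np.column_stack([x, groups == 0, groups == 1, groups == 2]).astype(float)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Analog of pooling without a model: one line through all the data at once.
pooled_slope = np.polyfit(x, y, 1)[0]

print("common-slope estimate: %.2f (true 1.5)" % beta[0])
print("naive pooled slope   : %.2f" % pooled_slope)
```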

16.
Measuring the phylogenetic diversity of communities has become a key issue for biogeography and conservation. However, most diversity indices that rely on interspecies phylogenetic distances may increase with species loss and thus violate the principle of weak monotonicity. Moreover, most published phylogenetic diversity indices ignore the abundance distribution along phylogenetic trees, even though lineage abundances are crucial components of biodiversity. The recently introduced concept of phylogenetic entropy overcomes these limitations, but has not been decomposed across scales, i.e. into α, β and γ components. A full understanding of mechanisms sustaining biological diversity within and between communities needs such decomposition. Here, we propose an additive decomposition framework for estimating α, β and γ components of phylogenetic entropy. Based on simulated trees, we demonstrate its robustness to phylogenetic tree shape and species richness. Our decomposition fulfils the requirements of both independence between components and weak monotonicity. Finally, our decomposition can also be adapted to the partitioning of functional diversity across different scales with the same desirable properties.
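A minimal sketch of the additive partition using branch-based phylogenetic entropy (H = -sum over branches of l_b p_b ln p_b, with p_b the pooled relative abundance below branch b); the toy tree and communities are invented, and the abundance weighting is one simple choice among several.

```python
import numpy as np

# Toy rooted tree over 4 species: (branch length, set of descendant leaves).
BRANCHES = [(1.0, {0}), (1.0, {1}), (2.0, {2}), (2.0, {3}),
            (0.5, {0, 1}), (0.5, {2, 3})]

def phylo_entropy(abund):
    """Branch-based phylogenetic entropy: -sum_b l_b * p_b * ln(p_b)."""
    p = np.asarray(abund, dtype=float)
    p = p / p.sum()
    h = 0.0
    for length, leaves in BRANCHES:
        pb = sum(p[i] for i in leaves)
        if pb > 0.0:
            h -= length * pb * np.log(pb)
    return h

communities = np.array([[10.0, 5.0, 1.0, 0.0],
                        [0.0, 2.0, 8.0, 10.0]])
weights = communities.sum(axis=1) / communities.sum()

# Additive partition: gamma (pooled) = alpha (weighted mean) + beta.
h_alpha = sum(w * phylo_entropy(c) for w, c in zip(weights, communities))
h_gamma = phylo_entropy(communities.sum(axis=0))
print("alpha = %.3f, beta = %.3f, gamma = %.3f"
      % (h_alpha, h_gamma - h_alpha, h_gamma))
```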

17.
Endothelin is a potent vasoconstrictor peptide which has recently been localized in the gastrointestinal tract. We have investigated the transmembrane signaling properties of endothelin in isolated smooth muscle cells of the rabbit rectosigmoid. Endothelin induced a dose-dependent contraction of smooth muscle cells over the range 10^-10 to 10^-6 M. In normal buffer, contraction peaked at 30 sec and was sustained for up to 8 min. Incubation in 0 Ca/2 mM EGTA abolished the sustained contraction induced by endothelin but had no effect on the initial transient contraction. Preincubation of saponin-treated cells with G protein antisera had no effect on control cell length. Preincubation of saponin-treated isolated smooth muscle cells with specific G protein antisera (rabbit antisera) against Go or Gs for 60 minutes did not inhibit contraction induced by endothelin. Preincubation with an antiserum to Gi3 inhibited the initial transient contraction induced by endothelin, and preincubation with an antiserum to Gi1-2 inhibited the sustained phase of the endothelin-induced contraction. Our data indicate that: 1) endothelin induces a direct sustained contraction of smooth muscle cells from the rectosigmoid; and 2) the transmembrane signaling of endothelin is through two specific GTP-binding components, both Gi: one for the initial transient contraction and the other for the sustained phase of the contraction.

18.
In quantitative genetics, the degree of resemblance between parents and offspring is described in terms of the additive variance (V(A)) relative to genetic (V(G)) and phenotypic (V(P)) variance. For populations with extreme allele frequencies, high V(A)/V(G) can be explained without considering properties of the genotype-phenotype (GP) map. We show that randomly generated GP maps in populations with intermediate allele frequencies generate far lower V(A)/V(G) values than empirically observed. The main reason is that order-breaking behaviour is ubiquitous in random GP maps. Rearrangement of genotypic values to introduce order-preservation for one or more loci causes a dramatic increase in V(A)/V(G). This suggests the existence of order-preserving design principles in the regulatory machinery underlying GP maps. We illustrate this feature by showing how the ubiquitously observed monotonicity of dose-response relationships gives much higher V(A)/V(G) values than a unimodal dose-response relationship in simple gene network models.
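A single-locus sketch of the effect: V_A is the variance of least-squares breeding values (regression of genotypic value on allele count under Hardy-Weinberg proportions) and V_G the total genotypic variance; rearranging the same random genotypic values so they are monotone in allele count raises V_A/V_G.

```python
import numpy as np

def va_over_vg(values, p=0.5):
    """V_A/V_G for one biallelic locus under Hardy-Weinberg proportions."""
    g = np.array([0.0, 1.0, 2.0])                    # allele counts
    w = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    G = np.asarray(values, dtype=float)
    g_bar, G_bar = w @ g, w @ G
    alpha = (w @ ((g - g_bar) * (G - G_bar))) / (w @ (g - g_bar) ** 2)
    v_a = alpha ** 2 * (w @ (g - g_bar) ** 2)        # variance of breeding values
    v_g = w @ (G - G_bar) ** 2
    return v_a / v_g

rng = np.random.default_rng(12)
maps = rng.normal(0.0, 1.0, (5000, 3))               # random single-locus GP maps
print("mean V_A/V_G, random maps          : %.2f"
      % np.mean([va_over_vg(m) for m in maps]))
print("mean V_A/V_G, order-preserving maps: %.2f"
      % np.mean([va_over_vg(np.sort(m)) for m in maps]))
```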

19.
Transitional endoplasmic reticulum (tER) consists of confluent rough and smooth endoplasmic reticulum (ER) domains. In a cell-free incubation system, low-density microsomes (1.17 g cc^-1) isolated from rat liver homogenates reconstitute tER by Mg(2+)GTP- and Mg(2+)ATP-hydrolysis-dependent membrane fusion. The AAA (ATPases associated with different cellular activities) protein p97 has been identified as the relevant ATPase. ATP depletion by hexokinase or treatment with either N-ethylmaleimide or anti-p97 prevented assembly of the smooth ER domain of tER. High-salt washing of low-density microsomes inhibited assembly of the smooth ER domain of tER, whereas readdition of purified p97 with its associated p47 promoted reconstitution. The t-SNARE syntaxin 5 was observed within the smooth ER domain of tER, and anti-syntaxin 5 abrogated formation of this same membrane compartment. Thus, p97 and syntaxin 5 regulate assembly of the smooth ER domain of tER and hence one of the earliest membrane-differentiated components of the secretory pathway.

20.
In the presynaptic nerve terminals of the bullfrog sympathetic ganglia, repetitive nerve firing evokes [Ca2+] transients that decay monotonically. An algorithm based on an eigenfunction expansion method was used for fitting these [Ca2+] decay records. The data were fitted by a linear combination of two to four exponential functions. A mathematical model with three intraterminal membrane-bound compartments was developed to describe the observed Ca2+ decay. The model predicts that the number of exponential functions, n, contained in the decay data corresponds to n - 1 intraterminal Ca2+ stores that release Ca2+ during the decay. Moreover, when a store stops releasing or starts to release Ca2+, the decay data should be fitted by functions that contain one less exponential component in the former case, and one more in the latter, than do the fitting functions for control data. Because there is currently no parameter by which quantitative comparisons can be made between two decay processes when at least one of them contains more than one exponential component, we defined a parameter, the overall rate (OR) of decay, as the trace of the coefficient matrix of the differential equation system of our model. We used the mathematical properties of the model and of the OR to interpret the effects of ryanodine and of a mitochondrial uncoupler on Ca2+ decay. The results of the analysis were consistent with the ryanodine-sensitive store, mitochondria, and another, as yet unidentified, store releasing Ca2+ into the cytosol of the presynaptic nerve terminals during Ca2+ decay. Our model also predicts that mitochondrial Ca2+ buffering accounted for more than 86% of all the flux rates across the various membranes combined and that these terminals contain type 3 and type 1 and/or type 2 ryanodine receptors.
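A sketch of the fitting-and-model-selection step: fit sums of n = 1..4 exponentials plus a baseline to a simulated decay, choose n by AIC, and compute an overall rate. Identifying OR with -sum(1/tau_i) rests on the trace of the coefficient matrix equalling the sum of its eigenvalues and the fitted rates being those (negated) eigenvalues — an assumption of this sketch, not the authors' computation.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *params):
    """baseline + sum_i a_i * exp(-t / tau_i); params = (c, a1, tau1, a2, tau2, ...)."""
    out = np.full_like(t, params[0])
    for a, tau in zip(params[1::2], params[2::2]):
        out = out + a * np.exp(-t / tau)
    return out

t = np.linspace(0.0, 60.0, 600)                  # seconds
rng = np.random.default_rng(13)
y = multi_exp(t, 0.1, 0.5, 1.5, 0.3, 6.0, 0.2, 25.0) + rng.normal(0.0, 0.004, t.size)

fits = {}
for n in (1, 2, 3, 4):
    p0 = [0.1]
    for k in range(n):
        p0 += [0.3, 5.0 ** k]                    # staggered lifetime guesses
    try:
        popt, _ = curve_fit(multi_exp, t, y, p0=p0, bounds=(0.0, np.inf),
                            maxfev=40000)
    except RuntimeError:                         # fit failed to converge
        continue
    rss = float(np.sum((multi_exp(t, *popt) - y) ** 2))
    fits[n] = (t.size * np.log(rss / t.size) + 2 * len(p0), popt)   # AIC

best_n = min(fits, key=lambda n: fits[n][0])
taus = fits[best_n][1][2::2]
print("chosen n = %d; OR = -sum(1/tau) = %.3f s^-1" % (best_n, -np.sum(1.0 / taus)))
```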
