20 similar documents found; search took 31 ms.
1.
Mehmet Kahraman Ayse Ozbay Handan Yuksel Ramazan Solmaz Baran Demir Humeyra Caglayan 《Plasmonics (Norwell, Mass.)》2018,13(3):785-795
Surface-enhanced Raman scattering (SERS) is an emerging analytical method used in biological and non-biological structure characterization. Since the plasmonic properties of nanostructures are a significant factor in SERS performance, fabricating nanostructures with tunable plasmonic properties is crucial in SERS studies. In this study, a novel method for fabrication of tunable plasmonic silver nanodomes (AgNDs) is presented. The convective-assembly method is preferred for depositing latex particles uniformly on a regular glass slide, which is used as a template for polydimethylsiloxane (PDMS) to prepare nanovoids on a PDMS surface. The obtained nanovoids on the PDMS are used as a mold for AgND fabrication. The nanovoids are filled with Ag by electrochemical deposition to obtain metallic AgNDs. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) are used to characterize the structural properties of all fabricated AgNDs. The optical properties of the AgNDs are characterized by evaluating the SERS activity of 4-aminothiophenol and rhodamine 6G. In addition to the experimental characterizations, the finite difference time domain (FDTD) method is used for theoretical calculation of the plasmonic properties of the AgNDs. The experimental and theoretical results show that the SERS performance of the AgNDs is strongly dependent on their heights and diameters.
2.
Bayesian phylogenetic inference using DNA sequences: a Markov Chain Monte Carlo Method (total citations: 39; self-citations: 18; cited by others: 21)
An improved Bayesian method is presented for estimating phylogenetic trees
using DNA sequence data. The birth-death process with species sampling is
used to specify the prior distribution of phylogenies and ancestral
speciation times, and the posterior probabilities of phylogenies are used
to estimate the maximum posterior probability (MAP) tree. Monte Carlo
integration is used to integrate over the ancestral speciation times for
particular trees. A Markov Chain Monte Carlo method is used to generate the
set of trees with the highest posterior probabilities. Methods are
described for an empirical Bayesian analysis, in which estimates of the
speciation and extinction rates are used in calculating the posterior
probabilities, and a hierarchical Bayesian analysis, in which these
parameters are removed from the model by an additional integration. The
Markov Chain Monte Carlo method avoids the requirement of our earlier
method for calculating MAP trees to sum over all possible topologies (which
limited the number of taxa in an analysis to about five). The methods are
applied to analyze DNA sequences for nine species of primates, and the MAP
tree, which is identical to a maximum-likelihood estimate of topology, has
a probability of approximately 95%.
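The core of an MCMC sampler of the kind this abstract describes can be illustrated with a minimal Metropolis sketch. The toy target below (the posterior mean of a Gaussian with known variance under a flat prior) is an illustrative assumption, not the phylogenetic model itself:

```python
import math
import random

random.seed(0)

data = [4.1, 3.9, 4.3, 4.0, 4.2]  # toy observations

def log_posterior(mu):
    # Flat prior; Gaussian likelihood with known sigma = 0.5
    sigma = 0.5
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(n_steps=20000, step=0.3):
    mu = 0.0
    samples = []
    for _ in range(n_steps):
        prop = mu + random.gauss(0, step)
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < log_posterior(prop) - log_posterior(mu):
            mu = prop
        samples.append(mu)
    return samples[5000:]  # discard burn-in

samples = metropolis()
post_mean = sum(samples) / len(samples)
```

The phylogenetic case differs mainly in the state space (tree topologies and speciation times rather than a scalar) and in the proposal moves, but the accept/reject step is the same.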
3.
Spreadsheet Method for Evaluation of Biochemical Reaction Rate Coefficients and Their Uncertainties by Weighted Nonlinear Least-Squares Analysis of the Integrated Monod Equation (total citations: 5; self-citations: 1; cited by others: 4)
A convenient method for evaluation of biochemical reaction rate coefficients and their uncertainties is described. The motivation for developing this method was the complexity of existing statistical methods for analysis of biochemical rate equations, as well as the shortcomings of linear approaches, such as Lineweaver-Burk plots. The nonlinear least-squares method provides accurate estimates of the rate coefficients and their uncertainties from experimental data. Linearized methods that involve inversion of data are unreliable since several important assumptions of linear regression are violated. Furthermore, when linearized methods are used, there is no basis for calculation of the uncertainties in the rate coefficients. Uncertainty estimates are crucial to studies involving comparisons of rates for different organisms or environmental conditions. The spreadsheet method uses weighted least-squares analysis to determine the best-fit values of the rate coefficients for the integrated Monod equation. Although the integrated Monod equation is an implicit expression of substrate concentration, weighted least-squares analysis can be employed to calculate approximate differences in substrate concentration between model predictions and data. An iterative search routine in a spreadsheet program is utilized to search for the best-fit values of the coefficients by minimizing the sum of squared weighted errors. The uncertainties in the best-fit values of the rate coefficients are calculated by an approximate method that can also be implemented in a spreadsheet. The uncertainty method can be used to calculate single-parameter (coefficient) confidence intervals, degrees of correlation between parameters, and joint confidence regions for two or more parameters. Example sets of calculations are presented for acetate utilization by a methanogenic mixed culture and trichloroethylene cometabolism by a methane-oxidizing mixed culture. 
An additional advantage of applying this method to the integrated Monod equation, compared with linearized methods, is the economy of obtaining rate coefficients from a single batch experiment or a few batch experiments rather than having to obtain large numbers of initial rate measurements. However, when initial rate measurements are used, this method can still be used with greater reliability than linearized approaches.
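The iterative search the abstract describes can be sketched in a few lines. The following uses a simplified no-growth form of the integrated Monod equation with hypothetical coefficients, synthetic noiseless data, and a crude grid search standing in for a spreadsheet solver; it is not the authors' implementation:

```python
import math

# Simplified integrated Monod model (no biomass growth assumed):
#   k*X*t = Ks*ln(S0/S) + (S0 - S)   (implicit in substrate S)
# The model predicts the time at which each substrate level is reached.
S0, X = 100.0, 1.0           # hypothetical initial substrate and biomass
true_Ks, true_k = 10.0, 5.0  # hypothetical "unknown" coefficients

def t_model(S, Ks, k):
    return (Ks * math.log(S0 / S) + (S0 - S)) / (k * X)

# Synthetic observations: substrate measured at known times
obs_S = [80.0, 60.0, 40.0, 20.0, 5.0]
obs_t = [t_model(S, true_Ks, true_k) for S in obs_S]

def wsse(Ks, k):
    # Weighted sum of squared errors in predicted time (unit weights here)
    return sum((t_model(S, Ks, k) - t) ** 2 for S, t in zip(obs_S, obs_t))

# Crude iterative grid search, standing in for the spreadsheet's solver
best = min(((wsse(Ks, k), Ks, k)
            for Ks in [5 + 0.5 * i for i in range(21)]
            for k in [3 + 0.2 * j for j in range(21)]),
           key=lambda r: r[0])
_, fit_Ks, fit_k = best
```

A spreadsheet solver (or a proper nonlinear least-squares routine) replaces the grid with a smarter search, and the uncertainty analysis the abstract describes would then be built on the curvature of the error surface around the minimum.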
4.
Attempts to estimate human energy expenditure by use of doubly labeled water have produced three methods currently used for calculating carbon dioxide production from isotope disappearance data: 1) the two-point method, 2) the regression method, and 3) the integration method. An ideal data set was used to determine the error produced in the calculated energy expenditure for each method when specific variables were perturbed. The analysis indicates that some of the calculation methods are more susceptible to perturbations in certain variables than others. Results from an experiment on one adult human subject are used to illustrate the potential for error in actual data. Samples of second void urine, 24-h urine, and breath collected every other day for 21 days are used to calculate the average daily energy expenditure by the three calculation methods. The difference between calculated energy expenditure and metabolizable energy on a weight-maintenance diet is used to estimate the error associated with the doubly labeled water method.
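The elimination-rate step of the two-point method can be sketched as follows. The pool size, the enrichment values, and the omission of fractionation corrections are all illustrative simplifications, not the published doubly-labeled-water equations:

```python
import math

# Two-point calculation: elimination rate constant from isotope
# enrichments at two time points, k = ln(E1/E2) / (t2 - t1).
def elimination_rate(E1, E2, t1, t2):
    return math.log(E1 / E2) / (t2 - t1)

# Hypothetical enrichments above baseline on days 1 and 14
k_O = elimination_rate(120.0, 30.0, 1.0, 14.0)  # 18-O leaves as water + CO2
k_H = elimination_rate(110.0, 45.0, 1.0, 14.0)  # 2-H leaves as water only

# Lifson-style relation (fractionation corrections omitted for clarity):
# CO2 production is proportional to pool size N times (k_O - k_H) / 2.
N = 2500.0  # hypothetical total body water, mol
r_CO2 = N * (k_O - k_H) / 2.0  # mol/day, before corrections
```

The regression and integration methods differ in how the rate constants are extracted from the full time series (fitted slopes of log-enrichment, or numerical integration of the curves) rather than from two samples.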
5.
The eye-estimation method is widely used in practice. Several agronomic and biological measures are currently estimated by this method. If a simple linear regression is the kernel model, a shrinkage technique can be used for correcting the bias associated with this method. Two predictors of the population total are proposed and the corresponding model-based errors are deduced. A simulation study characterizes the behaviour of the predictors.
6.
Bioaffinity binding assays such as the immunoassay are widely used in life science research. In an immunoassay, specific antibodies are used to bind target molecules in the sample, and quantification of the binding reaction reveals the amount of the target molecules. Here we present a method to measure bioaffinity assays using the two-photon excitation of fluorescence. In this method, microparticles are used as the solid phase for binding the target molecules. The degree of binding is then quantified from individual microparticles by two-photon excitation of fluorescence. We demonstrated the effectiveness of the method using the human alpha-fetoprotein (AFP) immunoassay, which is used to detect fetal disorders. The sensitivity and dynamic range we obtained with this assay indicate that this method can provide a cost-effective and simple way to measure various biomolecules in solution for research and clinical applications.
7.
N. Rashevsky 《Bulletin of mathematical biology》1947,9(3):123-126
Periodic vibrations of the walls of a distensible elastic tube through which a fluid is flowing are studied by the method
used by Lord Rayleigh in his theory of vibrations of jets. The results are found to conform with those obtained previously
by a more general but approximate method.
8.
We apply conformational space annealing (CSA), an efficient global optimization method, to the study of protein-protein interaction. The CSA is incorporated into the Tinker molecular modeling package along with a B-spline method for CAPRI Round 5 experiments. We have used an energy function for the protein-protein interaction that consists of electrostatic interaction, van der Waals interaction, and solvation energy terms represented by the occupancy desolvation method. The parameters of the AMBER94 all-atom empirical force field are used. Each energy term is calculated by precalculated grid potentials and B-spline approximation. The ligand protein is placed inside a sphere of 50 Å radius centered at an appropriate location, and CSA rigid docking studies are carried out to find stable complexes. Up to 10 complexes are selected using the K-means clustering method and biological information when available. These complexes are energy-minimized for further refinement by considering the flexibility of the interacting proteins. The results show that the CSA method has potential for the study of protein-protein interaction.
9.
Path reconstruction as a tool for actin filament speed determination in the in vitro motility assay.
The in vitro motility assay is used to measure the speed of actin filaments moving over a glass surface coated with heavy meromyosin. In this paper a new method, the path reconstruction method, is presented to evaluate observed speeds. The method is compared with the commonly used centroid method, in which the centroids of the filaments are followed from frame to frame. Instead, in the path reconstruction method speed is evaluated by determining the perimeters of the filaments in each frame and by reconstructing the traversed paths of the filaments over a number of frames. Biases in the determination of speed occurring in the centroid method, due to curvature of paths and to video noise and Brownian motion, are eliminated in the path reconstruction method, allowing measurement over a range of frame rates from 5 to 25 per second. The path reconstruction method leads to a clear separation of motile and nonmotile filaments, provided that filaments are analyzed over at least 10 successive frames, and allows easier separation of uniform and nonuniform sliding behavior.
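For contrast, the centroid method the paper compares against can be sketched in a few lines. The pixel coordinates below are hypothetical; a real assay would first segment filaments from the video frames:

```python
import math

# Centroid method sketch: a filament is a set of pixel coordinates per
# frame; speed is the centroid displacement times the frame rate.
def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def centroid_speeds(frames, fps):
    cents = [centroid(f) for f in frames]
    return [math.dist(a, b) * fps for a, b in zip(cents, cents[1:])]

# Hypothetical filament drifting +1 px per frame in x at 25 frames/s
frames = [[(0 + i, 0), (1 + i, 0), (2 + i, 0)] for i in range(5)]
speeds = centroid_speeds(frames, fps=25)  # px/s
```

The bias the paper targets is visible here: any jitter in the segmented pixels moves the centroid and inflates apparent speed, which is why reconstructing the traversed path over many frames gives a more robust estimate.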
10.
A comparative study of the within-unit mixed-family correlation method and the threshold-trait method for estimating heritability (total citations: 3; self-citations: 3; cited by others: 0)
This paper proposes a method for estimating heritability, the within-unit mixed-family correlation method, and applies it alongside threshold-trait analysis to estimate the heritability of leg and foot soundness in pigs. The results show that, for multiple threshold traits, estimates obtained with the within-unit mixed-family correlation method are more accurate.
11.
Simulation of the diffusion-controlled reaction between superoxide and superoxide dismutase. II. Detailed models (total citations: 5; self-citations: 0; cited by others: 5)
A two-stage Brownian dynamics simulation method is used to study the diffusion-influenced bimolecular reaction between superoxide and superoxide dismutase (SOD). The crystal structure of the dimeric enzyme is used in constructing detailed topographical and electrostatic models. Several electrostatic models are considered. In the most realistic, the excluded volume of the protein, which is impermeable to penetration by mobile ions, is assigned a dielectric constant of 2 and the surrounding “solvent” is assigned a value of 78. A finite difference method is used to solve the linearized Poisson–Boltzmann equation. For native SOD, the simulations reproduce the pronounced salt dependence of the rate constant observed experimentally. This salt dependence is attributed to electrostatic interactions between enzyme and substrate that are inherently attractive and amplified by the low dielectric constant of the protein interior. The simulation method is also applied to a modified enzyme, acylated SOD.
12.
The responsibility borne by governmental departments, as measured by a set of indicators, is a key factor affecting the performance of urban sustainability; responsibility analysis can therefore guide the selection of sustainability indicators. In line with the principle of Management by Objective (MBO), this paper introduces a responsibility-based method, named Strategic goal-Responsibility department-Response (SRR), for selecting sustainable urbanization indicators. With this method, indicators are selected from the perspective of the concerned departments’ responsibilities. In developing the SRR model, a Responsibility Assignment Matrix is used to identify the departments that assume responsibilities in implementing sustainable urbanization. Content analysis is used to examine the work scope and definitions of the concerned departments, and sustainable urbanization indicators that can measure their responsibility performance are filtered out. A case study of Jinan city in China demonstrates the procedure for using the proposed method: based on the strategic goals of Jinan city, 20 responsibility departments and 152 initial indicators are identified with the SRR framework. The case study reveals that the method is a feasible and effective tool for assisting policy makers in selecting sustainable urbanization indicators.
13.
14.
Improvement of image classification using wavelet coefficients with structured-based neural network (total citations: 1; self-citations: 0; cited by others: 1)
Image classification is a challenging problem in organizing a large image database, and an effective method for this objective is still under investigation. A method based on wavelet analysis to extract features for image classification is presented in this paper. After an image is decomposed by wavelet, the statistics of its features can be obtained from the distributions of histograms of wavelet coefficients, which are respectively projected onto two orthogonal axes, i.e., the x and y directions. The nodes of the tree representation of images can therefore be represented by these distributions. The high-level features are described in a low-dimensional space of 16 attributes, so that the computational complexity is significantly decreased. 2,800 images from seven categories are used in the experiments: half of the images were used for training the neural network and the other half for testing. Both the features extracted by wavelet analysis and conventional features are used in the experiments to demonstrate the efficacy of the proposed method. The classification rate on the training data set with wavelet analysis is up to 91%, and the classification rate on the testing data set reaches 89%. Experimental results show that the proposed approach to image classification is more effective.
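The decomposition step can be illustrated with a hand-rolled one-level 2-D Haar transform. The subband labels, scaling, and the tiny test image are assumptions for illustration, not the paper's filter bank:

```python
# One-level 2-D Haar decomposition sketch: transform rows then columns,
# producing an approximation subband (LL) and detail subbands (HL, LH, HH)
# whose coefficient histograms can serve as texture features.
def haar_1d(row):
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    dif = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg, dif

def haar_2d(img):
    lo_rows, hi_rows = zip(*(haar_1d(r) for r in img))

    def cols(rows):
        # transpose a row-major matrix into a list of columns
        return [list(c) for c in zip(*rows)]

    LLc, HLc = zip(*(haar_1d(c) for c in cols(lo_rows)))
    LHc, HHc = zip(*(haar_1d(c) for c in cols(hi_rows)))
    # transpose back so each subband is row-major
    return tuple(cols(b) for b in (LLc, HLc, LHc, HHc))

img = [[1, 1, 5, 5],
       [1, 1, 5, 5],
       [3, 3, 7, 7],
       [3, 3, 7, 7]]
LL, HL, LH, HH = haar_2d(img)
```

For this blocky test image all detail coefficients vanish and LL is a downsampled average, which is exactly why the histograms of the detail subbands carry texture information for less trivial images.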
15.
Gösta Nyberg Erik Mårtensson 《Journal of chromatography. B, Analytical technologies in the biomedical and life sciences》1977,143(5):491-497
A method for the quantitative analysis of tricyclic antidepressants in the serum of psychiatric patients is described. The method can be used for determining amitriptyline, nortriptyline, imipramine, demethylimipramine, clomipramine, demethylclomipramine, trimipramine and protriptyline. The method consists of a series of extraction steps followed by gas chromatography with a flame-ionization detector. The drugs are determined in their native state. The internal standard method is used for quantitation.
16.
Toshinori Okuyama 《Biological Control》2012,60(2):103-107
Functional response is an important determinant of community dynamics, and empirical methods for characterizing functional responses are thus important for understanding ecological processes. The most commonly used method is based on the sum of squares, and the maximum likelihood method is rarely used. When the likelihood method is used, potentially inappropriate probability distributions, such as binomial distributions, are typically assumed for the number of prey eaten in experiments. In this study, I present a likelihood approach in which the probability distributions are generated from a mechanistic understanding of predation processes using Monte Carlo simulations. An example is given for the Holling type II functional response model, but the method is flexible and allows characterization of a wide variety of functional response models. In the example, the likelihood method consistently yielded better estimates than the least squares method.
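The idea of generating the likelihood's probability distribution by simulation can be sketched as follows. The discrete predation process, parameter values, and smoothing below are illustrative assumptions, not the paper's exact model:

```python
import math
import random

random.seed(1)

# Holling type II predation as a stochastic process: a predator alternates
# searching (exponential waiting time with rate a*N) and handling (fixed
# time Th); prey deplete as they are eaten.
def simulate_eaten(a, Th, N0, T):
    t, N = 0.0, N0
    while N > 0:
        t += random.expovariate(a * N)  # search time to next capture
        if t > T:
            break
        t += Th                          # handling time
        N -= 1
    return N0 - N

def mc_log_likelihood(a, Th, observed, N0, T, reps=2000):
    # Empirical pmf of prey eaten, generated by Monte Carlo simulation
    counts = [0] * (N0 + 1)
    for _ in range(reps):
        counts[simulate_eaten(a, Th, N0, T)] += 1
    pmf = [(c + 0.5) / (reps + 0.5 * (N0 + 1)) for c in counts]  # smoothed
    return sum(math.log(pmf[n]) for n in observed)

# Synthetic "experiment": 30 trials, then compare two candidate attack rates
observed = [simulate_eaten(0.1, 0.5, 20, 24.0) for _ in range(30)]
ll_true = mc_log_likelihood(0.1, 0.5, observed, 20, 24.0)
ll_bad = mc_log_likelihood(0.01, 0.5, observed, 20, 24.0)
```

Maximizing this simulated likelihood over the parameters recovers estimates without assuming a binomial form for the number of prey eaten, which is the point the abstract makes.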
17.
Relative efficiencies of the maximum-likelihood, neighbor-joining, and maximum-parsimony methods when substitution rate varies with site (total citations: 10; self-citations: 5; cited by others: 5)
The relative efficiencies of the maximum-likelihood (ML), neighbor- joining
(NJ), and maximum-parsimony (MP) methods in obtaining the correct topology
and in estimating the branch lengths for the case of four DNA sequences
were studied by computer simulation, under the assumption either that there
is variation in substitution rate among different nucleotide sites or that
there is no variation. For the NJ method, several different distance
measures (Jukes-Cantor, Kimura two- parameter, and gamma distances) were
used, whereas for the ML method three different transition/transversion
ratios (R) were used. For the MP method, both the standard unweighted
parsimony and the dynamically weighted parsimony methods were used. The
results obtained are as follows: (1) When the R value is high, dynamically
weighted parsimony is more efficient than unweighted parsimony in obtaining
the correct topology. (2) However, both weighted and unweighted parsimony
methods are generally less efficient than the NJ and ML methods even in the
case where the MP method gives a consistent tree. (3) When all the
assumptions of the ML method are satisfied, this method is slightly more
efficient than the NJ method. However, when the assumptions are not
satisfied, the NJ method with gamma distances is slightly better in
obtaining the correct topology than is the ML method. In general, the two
methods show more or less the same performance. The NJ method may give a
correct topology even when the distance measures used are not unbiased
estimators of nucleotide substitutions. (4) Branch length estimates of a
tree with the correct topology are affected more easily than topology by
violation of the assumptions of the mathematical model used, for both the
ML and the NJ methods. Under certain conditions, branch lengths are
seriously overestimated or underestimated. The MP method often gives
serious underestimates for certain branches. (5) Distance measures that
generate the correct topology, with high probability, do not necessarily
give good estimates of branch lengths. (6) The likelihood-ratio test and
the confidence-limit test, in Felsenstein's DNAML, for examining the
statistical significance of branch length estimates are quite sensitive to violation of
the assumptions and are generally too liberal to be used for actual data.
Rzhetsky and Nei's branch length test is less sensitive to violation of the
assumptions than is Felsenstein's test. (7) When the extent of sequence
divergence is < or = 5% and when > or = 1,000 nucleotides are used,
all three methods show essentially the same efficiency in obtaining the
correct topology and in estimating branch lengths.(ABSTRACT TRUNCATED AT
400 WORDS)
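One of the distance measures named in the abstract, the Jukes-Cantor distance, is simple enough to sketch directly; the sequences below are hypothetical:

```python
import math

# Jukes-Cantor distance: converts the observed proportion p of differing
# sites into an estimated number of substitutions per site,
#   d = -(3/4) * ln(1 - 4p/3)
def jukes_cantor(seq1, seq2):
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1 - 4 * p / 3)

d = jukes_cantor("ACGTACGTAC", "ACGTACGTTC")  # 1 difference in 10 sites
```

The correction matters because observed differences undercount substitutions once multiple hits occur at the same site; the Kimura two-parameter and gamma distances mentioned above extend this idea to unequal transition/transversion rates and rate variation among sites.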
18.
Genetic programming for creating Chou’s pseudo amino acid based features for submitochondria localization (total citations: 2; self-citations: 0; cited by others: 2)
Given a protein that is localized in the mitochondria, it is very important to know the submitochondria localization of that protein in order to understand its function. In this work, we propose a submitochondria localizer whose feature extraction method is based on Chou's pseudo-amino acid composition. The pseudo-amino acid based features are obtained by combining pseudo-amino acid compositions with hundreds of amino-acid indices and amino-acid substitution matrices; from this huge set of features, a small set of 15 "artificial" features is then created. The feature creation is performed by genetic programming, combining one or more "original" features by means of mathematical operators. Finally, the set of combined features is used to train a radial basis function support vector machine. This method is named GP-Loc. Moreover, we also propose a method with very few parameters, named ALL-Loc, in which all the "original" features are used to train a linear support vector machine. The overall prediction accuracy obtained by GP-Loc is 89% under jackknife cross-validation, outperforming the result reported in the literature (85.2%) on the same dataset. The overall prediction accuracy obtained by ALL-Loc is 83.9%.
19.
Ambroise J Giard J Gala JL Macq B 《IEEE/ACM transactions on computational biology and bioinformatics / IEEE, ACM》2011,8(6):1700-1707
A B-cell epitope is a part of an antigen that is recognized by a specific antibody or B-cell receptor. Detecting the immunogenic region of the antigen is useful in numerous immunodetection and immunotherapeutic applications. The aim of this paper is to find relevant properties to discriminate the location of potential epitopes from the rest of the protein surface. The most relevant properties, identified using two evaluation approaches, are the geometric properties, followed by the conservation score and some chemical properties, such as the proportion of glycine. The selected properties are used in a patch-based epitope localization method that includes a Single-Layer Perceptron for regression. The output of this Single-Layer Perceptron is used to construct a probability map on the antigen surface. The predictive performance of the method is assessed by computing the AUC with cross-validation on two benchmark data sets, and by computing the AUC and the precision for a third, independent test set.
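A single-layer perceptron used for regression, as in the patch-based method above, reduces to one linear unit trained by gradient descent. The features and targets below are hypothetical stand-ins for the surface properties the paper selects:

```python
# Single-layer perceptron regression sketch: one linear unit trained by
# stochastic gradient descent on squared error, producing a continuous
# score that could be mapped onto a surface as a probability-like value.
def train(samples, targets, lr=0.05, epochs=200):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = y - t
            # gradient step on squared error (y - t)^2
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical patch features (e.g., protrusion, conservation) and scores
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y = [0.0, 1.0, 0.0, 1.0]
w, b = train(X, y)
pred = [sum(wi * xi for wi, xi in zip(w, x)) + b for x in X]
```

In the paper's setting, each surface patch contributes one feature vector, and the trained unit's output over all patches yields the probability map on the antigen surface.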