Similar Literature
 20 similar records found (search time: 15 ms)
1.
Abstract

The two-ellipsoid model (TEM) is proposed as a versatile single-site model for the study of liquid crystal phases. The TEM describes a molecule with two ellipsoids: one for its geometry and the other for its interaction strengths. By displacing the center of the interaction ellipsoid from that of the geometry ellipsoid, the model can mimic the asymmetric interactions of a liquid crystal molecule. The potential energy surfaces of the present TEMs compare favorably with those of the corresponding Gay-Berne and site–site models.

Monte Carlo simulations with 320 particles are performed for a symmetric-interaction TEM and an asymmetric-interaction TEM. The asymmetric-interaction TEM displays a slightly higher transition temperature than the symmetric one, indicating that asymmetric interactions can be a driving force in a phase transition. The radial and cylindrical distribution functions of the two models are similar in the isotropic phase but quite different in the nematic phase.

2.
A new version of the test particle method for determining the chemical potential by Monte Carlo simulation is proposed. The method, applicable to any fluid at any density, combines Widom's test-particle insertion method with ideas from scaled particle theory, the gradual insertion method, and multistage sampling. Its applicability is exemplified by evaluating the chemical potential of the hard-sphere fluid at very high density in the semi-grand-canonical and grand-canonical ensembles. A theory estimating the efficiency (i.e., the statistical errors) of the method is proposed, and the results are compared with those of the Widom and gradual insertion methods and with analytic results.
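As a minimal point of reference, the baseline Widom insertion step that the method above generalizes can be sketched in Python. This is an illustrative sketch for a Lennard-Jones fluid, not the paper's scaled/gradual scheme; the particle count, box size, and parameter values are arbitrary demonstration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def widom_excess_mu(positions, box, beta, n_insert=2000, eps=1.0, sigma=1.0):
    """Estimate the excess chemical potential of a Lennard-Jones fluid by
    Widom's test-particle insertion: mu_ex = -kT * ln <exp(-beta * dU)>."""
    boltz = np.empty(n_insert)
    for k in range(n_insert):
        trial = rng.random(3) * box                 # random ghost-particle position
        d = positions - trial
        d -= box * np.round(d / box)                # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        inv6 = (sigma**2 / r2) ** 3
        du = np.sum(4.0 * eps * (inv6**2 - inv6))   # LJ energy of the ghost particle
        boltz[k] = np.exp(-beta * du)
    return -np.log(boltz.mean()) / beta

# usage: at this very low density the fluid is nearly ideal,
# so the excess chemical potential should be small in magnitude
box = 20.0
pos = rng.random((30, 3)) * box
mu_ex = widom_excess_mu(pos, box, beta=1.0)
```

The gradual-insertion and multistage ideas mentioned in the abstract exist precisely because this plain estimator degrades at high density, where almost every random insertion overlaps an existing particle.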

3.
Abstract

The principal purpose of this paper is to demonstrate the use of the Inverse Monte Carlo technique for calculating pair interaction energies in monoatomic liquids from a given equilibrium property. The method is based on the mathematical relation between transition probability and pair potential given by the fundamental equation of "importance sampling" Monte Carlo. To provide well-defined conditions for testing the Inverse Monte Carlo method, a Metropolis Monte Carlo simulation of a Lennard-Jones liquid is carried out to obtain the equilibrium pair correlation function determined by the assumed potential. Because an equilibrium configuration is a prerequisite for an Inverse Monte Carlo simulation, a model system is generated that reproduces the pair correlation function calculated by the Metropolis simulation and therefore represents the system in thermal equilibrium. This configuration is used to simulate virtual atom displacements. The resulting changes in atom distribution at each simulation step are inserted into a set of non-linear equations defining the transition probability for the virtual change of configuration. Solving this set of equations for the pair interaction energies recovers the Lennard-Jones potential by which the equilibrium configuration was determined.
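The Metropolis Monte Carlo step used above to generate the reference equilibrium configuration can be sketched as follows. This is an illustrative sketch only; the system size, temperature, and step size are arbitrary demonstration values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def lj_energy_of(i, pos, box, eps=1.0, sigma=1.0):
    """Lennard-Jones energy of particle i with all others (minimum image)."""
    d = np.delete(pos, i, axis=0) - pos[i]
    d -= box * np.round(d / box)
    r2 = np.sum(d * d, axis=1)
    inv6 = (sigma**2 / r2) ** 3
    return np.sum(4.0 * eps * (inv6**2 - inv6))

def metropolis_sweep(pos, box, beta, step=0.1):
    """One Metropolis sweep: trial displacement for each particle,
    accepted with probability min(1, exp(-beta * dU))."""
    accepted = 0
    for i in range(len(pos)):
        old = pos[i].copy()
        e_old = lj_energy_of(i, pos, box)
        pos[i] = (old + (rng.random(3) - 0.5) * step) % box
        e_new = lj_energy_of(i, pos, box)
        if rng.random() < np.exp(min(0.0, -beta * (e_new - e_old))):
            accepted += 1
        else:
            pos[i] = old                  # reject: restore old position
    return accepted / len(pos)

box = 6.0
pos = rng.random((20, 3)) * box
acc = metropolis_sweep(pos, box, beta=1.0)   # acceptance fraction of the sweep
```

The Inverse Monte Carlo method of the abstract effectively runs this logic backwards: it observes acceptance statistics for virtual displacements and solves for the pair energies that would produce them.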

4.
Abstract

The chemical potential of trimer and hexamer model ring systems was determined by computer simulation over a range of temperatures and densities. Such ring molecules are important as models of aromatic and naphthenic hydrocarbons. Thermodynamic integration of the pressure along a reversible path, Widom's ghost-particle insertion method, and Kirkwood's charging-parameter method were used over a molecular density range of 0.05 to 0.30. Data were obtained by Monte Carlo simulation of a 96-molecule system modelled with a truncated Lennard-Jones 6-12 potential. The original insertion method, which does not take into account the orientation of the molecule when it is inserted, gives results for the chemical potential that deviate from those obtained by thermodynamic pressure integration; at high density or temperature the deviation is significant. We have modified the Widom insertion technique to account for this short-range orientation and find good agreement between this technique and the thermodynamic integration method. We also calculated the free energy difference between our model ring molecules and ring molecules made up of hard spheres.

5.
Knowledge of the carbon footprint (CF) of a scientific publication can help to guide changes in behavior for mitigating global warming, yet a knowledge gap still exists in academic circles. We quantified the CF of a publication by parameterizing searching, downloading, reading, and writing in the publication process, covering both direct and indirect emissions, and we proposed a time-loaded conversion coefficient to transfer indirect emissions to final consumers. A questionnaire survey, the Energy Star certification database, fixed-asset databases specific to our campus, and reviewed life-cycle-assessment studies on both print media and electronic products were integrated with Monte Carlo simulations to quantify uncertainties. The average CF of a scientific publication was 5.44 kg CO2-equiv. (95% CI: 1.65–14.78; SD: 4.97), with 37.65 MJ (95% CI: 0.00–71.32; SD: 30.40) of energy consumed. Reading the literature contributed the most, followed by writing and searching. A sensitivity analysis indicated that reading efficiency, the proportion of e-reading, and reference quantity were the most dominant of 52 parameters. Durable media generated a higher CF (4.24 kg CO2-equiv.) than consumable media (1.35 kg CO2-equiv.) for both direct and indirect reasons. Campus policy makers should thus not promote the substitution of e-reading for print reading at the present stage, because its environmental advantages are highly dependent on time-loaded and behavioral factors. By comparison, replacing desktops with laptops is more attractive, potentially reducing CFs by 50% along with the disproportionate consumption of energy.
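The Monte Carlo uncertainty quantification described above can be illustrated with a minimal sketch. The per-activity footprint distributions below are hypothetical placeholders, not the study's fitted values; the point is only the mechanics of propagating input uncertainty into a mean and a 95% interval for the total footprint.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo draws

# Hypothetical per-activity footprints in kg CO2-equiv., modeled as
# lognormal to keep them positive and right-skewed (illustrative only):
reading   = rng.lognormal(mean=np.log(2.5), sigma=0.6, size=N)
writing   = rng.lognormal(mean=np.log(1.5), sigma=0.6, size=N)
searching = rng.lognormal(mean=np.log(0.8), sigma=0.6, size=N)

total = reading + writing + searching     # total CF per publication, per draw
mean_cf = total.mean()
lo, hi = np.percentile(total, [2.5, 97.5])  # 95% uncertainty interval
```

A sensitivity analysis of the kind the study reports then amounts to correlating each input sample (e.g. `reading`) with `total` across draws.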

6.
The purpose of this note is to illustrate the feasibility of simulating kinetic systems, such as those commonly encountered in photosynthesis research, using the Monte Carlo (MC) method. In this approach, chemical events are considered at the molecular level, where they occur randomly, and the macroscopic kinetic evolution results from averaging a large number of such events. Their repeated simulation is easily accomplished by digital computing. It is shown that the MC approach is well suited to the capabilities and resources of modern microcomputers. A software package is briefly described and discussed, allowing simple programming and solution of any kinetic model system. Execution is reasonably fast and accurate and is not subject to the instabilities found with the conventional analytical approach.

Abbreviations: MC, Monte Carlo; RN, random number; PSU, photosynthetic unit.

Dedicated to Prof. L.N.M. Duysens on the occasion of his retirement.
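A minimal example of the molecular-event approach described in this note is the stochastic simulation of a first-order reaction A → B. This is a Gillespie-style sketch with arbitrary demonstration values for the rate constant and molecule count, not code from the package the note describes.

```python
import random

def simulate_decay(n_a, k, t_end, seed=0):
    """Event-driven Monte Carlo for the first-order reaction A -> B:
    the next event fires after an exponential waiting time with
    total rate k * n_A, and converts one A molecule to B."""
    rng = random.Random(seed)
    t, traj = 0.0, [(0.0, n_a)]
    while n_a > 0 and t < t_end:
        t += rng.expovariate(k * n_a)    # exponential waiting time to next event
        if t >= t_end:
            break
        n_a -= 1                         # one molecular event: A -> B
        traj.append((t, n_a))
    return traj

# usage: after one mean lifetime (t_end = 1/k) roughly n0/e molecules remain,
# recovering the macroscopic exponential decay from averaged random events
traj = simulate_decay(n_a=1000, k=1.0, t_end=1.0)
```

Unlike a naive numerical integration of the rate equations, this event-driven scheme cannot go unstable: each step is an exact sample of the next reaction event.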

7.
Ab initio folding simulations have been performed on three peptides, using a genetic algorithm-based search method that operates on a full-atom representation. Conformations are evaluated with an empirical force field parameterized by a potential-of-mean-force analysis of experimental structures. The dominant terms in the force field are local and nonlocal main-chain electrostatics and the hydrophobic effect. Two of the simulated structures were fragments of complete proteins (eosinophil-derived neurotoxin (EDN) and the subtilisin propeptide) that were identified as likely initiation sites for folding. The experimental structure of one of these (EDN) was subsequently found to be consistent with that prediction (using local hydrophobic burial as the determinant for independent folding). The simulations of the structures of these two peptides were only partly successful. The most successful folding simulation was that of a 22-residue peptide corresponding to the membrane binding domain of blood coagulation factor VIII (Membind). Three simulations were performed on this peptide, and the lowest-energy conformation was found to be the most similar to the experimental structure. The conformation of this peptide was determined with a Cα rms deviation of 4.4 Å. Although these simulations were partly successful, there are still many unresolved problems, which we expect to be able to address in the next structure prediction experiment. © 1995 Wiley-Liss, Inc.

8.
New Monte Carlo procedures in open ensembles are proposed. A non-stationary Markov chain procedure in the μlpT ensemble provides a direct estimate of the critical size of a condensation nucleus at given p and T. A stationary procedure in the μlpT ensemble with two allowed particle numbers, n and n + 1, provides a direct way to calculate the chemical potential and the Gibbs free energy of a cluster; in the grand canonical (μlVT) ensemble the same approach gives μl and the Helmholtz free energy. The same procedures are readily applicable to periodic systems representing bulk phases.

9.
Carbon nanotubes (CNTs) are a product of the nanotechnology revolution and show great promise in industrial applications. However, their relative toxicity is still not well understood, and their size and shape have drawn comparisons to asbestos fibers. In this study, a predictive Bayesian dose-response assessment was conducted with extremely limited initial dose-response data to compare the toxicity of long-fiber CNTs with that of crocidolite, an asbestos fiber associated with human mesothelioma. In the assessment, a new, theoretically derived emergent dose-response model was used and compared with the single-hit and multistage models. The multistage and emergent dose-response functions (DRFs) were selected for toxicity assessment based on two criteria: visual fit to several datasets, and a goodness-of-fit test using an available data-rich study with crocidolite. The predictive assessment supports previous concerns that long-fiber CNTs have toxicity comparable to crocidolite in intratracheal and intraperitoneal applications. Collection of further dose-response data on these materials is strongly recommended.

10.
In time series of count data, correlated measurements, clustering, and excessive zeros can occur simultaneously in biomedical applications, and ignoring such effects might lead to misleading treatment conclusions. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and is here extended to handle count data. These models are motivated by evaluating the trend in new tumour counts for bladder cancer patients and by identifying useful covariates that affect the count level. The models are fitted by Bayesian methods with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).

11.
12.
A Monte Carlo procedure is proposed for testing homogeneity of variances in linear models. The method is applicable to a variety of common experimental designs. It is valid when errors are independently normally distributed; under non-normality the test is expected to be robust in a similar fashion to Levene's test. Three examples are given to demonstrate the method.
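The general shape of such a Monte Carlo test can be sketched as follows. This is an illustrative stand-in, not the paper's procedure: it uses the max/min ratio of group variances as the statistic and simulates its null distribution from independent normal samples of the same group sizes, which is valid because the ratio is scale-invariant under the null.

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_var_homogeneity(groups, n_sim=2000):
    """Monte Carlo test of equal variances. The statistic is the ratio of
    the largest to the smallest group sample variance; its null distribution
    is simulated from standard-normal samples of the same group sizes."""
    sizes = [len(g) for g in groups]
    variances = [np.var(g, ddof=1) for g in groups]
    obs = max(variances) / min(variances)
    count = 0
    for _ in range(n_sim):
        sim = [np.var(rng.standard_normal(n), ddof=1) for n in sizes]
        if max(sim) / min(sim) >= obs:
            count += 1
    return (count + 1) / (n_sim + 1)     # Monte Carlo p-value

# usage: equal-variance groups should give a large p-value,
# strongly unequal variances a small one
same = [rng.standard_normal(30) for _ in range(3)]
diff = [rng.standard_normal(30), 5.0 * rng.standard_normal(30)]
p_same = mc_var_homogeneity(same)
p_diff = mc_var_homogeneity(diff)
```

The `+ 1` correction in the p-value keeps the test exact at finite `n_sim`.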

13.
Summary: In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when the predictors consist of multiple functions, some of which may be redundant. Both issues arise in a real data application in which we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that accounts for random batch effects and selects effective functions among multiple functional predictors. Fixed effects and predictors in non-functional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling we use a hybrid Metropolis–Hastings/Gibbs sampler, which suffers from slow mixing; an evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and the real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.

14.
This study aims to quantitatively assess the risk that pesticides used in Irish agriculture and their degradation products pose to groundwater and human health. The assessment uses a human-health Monte Carlo risk-based approach that combines the leached quantity with an exposure estimate and the No Observed Adverse Effect Level (NOAEL) as a toxicity ranking endpoint, resulting in a chemical intake toxicity ratio statistic (R) for each pesticide. A total of 34 active substances and their metabolites registered and used in agriculture were evaluated. MCPA obtained the highest rank (i.e., in order of decreasing human health risk), followed by desethyl-terbuthylazine and deethylatrazine (with risk ratio values of 1.1 × 10⁻⁵, 9.5 × 10⁻⁶, and 5.8 × 10⁻⁶, respectively). A sensitivity analysis revealed that the soil organic carbon content and the soil sorption coefficient were the parameters that most affected model predictions (correlation coefficients of −0.60 and −0.58, respectively), highlighting the influence of soil and pesticide properties on risk estimates. The analysis underscores the importance of taking a risk-based approach when assessing pesticide risk. The model can help to prioritize pesticides with potentially negative human health effects for monitoring programs, as opposed to traditional approaches based on pesticide leaching potential.

15.
Garcia LG, Araújo AF. Proteins. 2006;62(1):46-63.
Monte Carlo simulations of a hydrophobic protein model of 40 monomers on the cubic lattice are used to explore the effect of energetic frustration and interaction heterogeneity on its folding pathway. The folding pathway is described by the dependence of relevant conformational averages on an appropriate reaction coordinate, pfold, defined as the probability that a given conformation reaches the native structure before unfolding. We compare the energetically frustrated and heterogeneous hydrophobic potential, according to which individual monomers have a higher or lower tendency to form contacts unspecifically depending on their hydrophobicities, with an unfrustrated homogeneous Go-type potential with uniformly attractive native interactions and neutral non-native interactions (called Go1 in this study), and with an unfrustrated heterogeneous potential with neutral non-native interactions and native interactions having the same energies as the hydrophobic potential (called Go2 in this study). Folding kinetics slow down dramatically as energetic frustration increases, as expected and as previously observed in a two-dimensional model. Contrary to our previous results in two dimensions, however, the folding pathway and transition state ensemble can depend significantly on the energy function used to stabilize the native structure. The sequence of events along the reaction coordinate, i.e., the order in which different regions of the native conformation become structured, turns out to be similar for the hydrophobic and Go2 potentials, but with analogous events tending to occur at lower pfold values in the first case. In particular, the transition state obtained from the ensemble around pfold = 0.5 is more structured for the hydrophobic potential.
For Go1, not only the transition state ensemble but also the order of events itself is modified, suggesting that interaction heterogeneity, in addition to energetic frustration, can have significant effects on the folding mechanism, most likely by modifying the probability of different contacts in the unfolded state, the starting point for the folding reaction. Although based on a simple model, these results provide interesting insight into how sequence-dependent switching between folding pathways might occur in real proteins.

16.
An integrated mathematical model that incorporates scaffold proteins into a mitogen-activated protein kinase (MAPK) cascade is constructed. Using Monte Carlo simulation, the regulatory effect of the scaffold protein on the signaling ability of the cascade is investigated theoretically. It is found that (i) scaffold binding increases signal amplification if dephosphorylation is slow and decreases amplification if dephosphorylation is rapid; increasing the number of scaffolds also decreases amplification when dephosphorylation is slow. (ii) The scaffold number can control the timing of kinase activation, enhancing the time flexibility of signaling. (iii) For slow dephosphorylation, scaffolds decrease the sharpness of the dose–response curves, while for fast dephosphorylation, increasing the scaffold number decreases the height of the response but the graded shape of the response is sustained. Furthermore, the underlying mechanism and the correspondence of these results with real biological systems are clarified.

17.
Leeyoung Park, Ju H. Kim. Genetics. 2015;199(4):1007-1016.
Causal models including genetic factors are important for understanding the mechanisms underlying the presentation of complex diseases. Familial aggregation and segregation analyses based on polygenic threshold models have been the primary approach to fitting genetic models to family data on complex diseases. In the current study, an advanced approach to obtaining appropriate causal models for complex diseases, based on the sufficient component cause (SCC) model and combining traditional genetics principles, was proposed. The probabilities for the entire population, i.e., normal–normal, normal–disease, and disease–disease, were considered in each model for the appropriate handling of common complex diseases. The causal model in the current study included genetic effects from single genes involving epistasis, complementary gene interactions, gene–environment interactions, and environmental effects. Bayesian inference using a Markov chain Monte Carlo (MCMC) algorithm was used to assess the proportions of each component for a given population lifetime incidence. This approach is flexible, allowing both common and rare variants within a gene and across multiple genes. An application to schizophrenia data confirmed the complexity of the causal factors. An analysis of diabetes data demonstrated that environmental factors and gene–environment interactions are the main causal factors for type II diabetes. The proposed method is effective and useful for identifying causal models, which can accelerate the development of efficient strategies for identifying causal factors of complex diseases.

18.
Bayesian shrinkage analysis is arguably the state-of-the-art technique for large-scale multiple quantitative trait locus (QTL) mapping. However, when the shrinkage model does not involve indicator variables for marker inclusion, QTL detection remains heavily dependent on significance thresholds derived from phenotype permutation under the null hypothesis of no phenotype-to-genotype association. This approach is computationally intensive and, more importantly, the hypothetical data generation at the heart of the permutation-based method violates the Bayesian philosophy. Here we propose a fully Bayesian decision rule for QTL detection under the recently introduced extended Bayesian LASSO for QTL mapping. Our new decision rule is free of any hypothetical data generation and relies on the well-established Bayes factors for evaluating the evidence for QTL presence at any locus. Simulation results demonstrate the remarkable performance of our decision rule. An application to real-world data is considered as well.

19.
Müller BU, Stich B, Piepho HP. Heredity. 2011;106(5):825-831.
Control of the genome-wide type I error rate (GWER) is an important issue in association-mapping and linkage-mapping experiments. For the latter, approaches such as permutation procedures and Bonferroni correction have been proposed. The permutation test, however, cannot account for the population structure present in most association-mapping populations, which can lead to false positive associations. The Bonferroni correction is applicable but usually conservative, because correlations among tests cannot be exploited. We therefore propose a new approach that controls the genome-wide error rate while accounting for population structure. The approach is based on a simulation procedure that is equally applicable in linkage- and association-mapping contexts. Using the parameter settings of three real data sets, we show that the procedure controls the GWER and the generalized genome-wide type I error rate (GWERk).

20.
Count data sets are traditionally analyzed with the ordinary Poisson distribution. Such a model has limited applicability, however, because it can be too restrictive for specific data structures. Alternative models are then needed that accommodate, for example, (a) zero-modification (inflation or deflation of the frequency of zeros), (b) overdispersion, and (c) individual heterogeneity arising from clustering or repeated (correlated) measurements on the same subject. Cases (a)–(b) and (b)–(c) are often treated together in the statistical literature, with several practical applications, but models supporting all three at once are less common. This paper's primary goal is therefore to address these issues jointly by deriving a mixed-effects regression model based on the hurdle version of the Poisson–Lindley distribution. In this framework, zero-modification is incorporated by assuming that a binary probability model determines which outcomes are zero-valued and a zero-truncated process generates the positive observations. Approximate posterior inferences for the model parameters were obtained with a fully Bayesian approach based on the Adaptive Metropolis algorithm. Intensive Monte Carlo simulation studies were performed to assess the empirical properties of the Bayesian estimators. The proposed model was applied to a real data set, and its competitiveness with respect to some well-established mixed-effects models for count data was evaluated. A sensitivity analysis based on standard divergence measures was performed to detect observations that may affect parameter estimates. The Bayesian p-value and randomized quantile residuals were used for model diagnostics.
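The hurdle mechanism described above (a binary model deciding which outcomes are zero, plus a zero-truncated process for the positives) can be illustrated with a small sampler. For simplicity this sketch uses a plain Poisson in place of the Poisson–Lindley distribution, and all parameter values are arbitrary demonstration choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def rhurdle_poisson(n, p_zero, lam):
    """Sample from a hurdle Poisson model: with probability p_zero the
    outcome is 0; otherwise it is drawn from a zero-truncated Poisson(lam),
    implemented here by rejection (redraw until a nonzero value appears)."""
    out = np.zeros(n, dtype=int)
    for i in range(n):
        if rng.random() >= p_zero:       # the "hurdle" is cleared
            y = 0
            while y == 0:                # zero-truncation by rejection
                y = rng.poisson(lam)
            out[i] = y
    return out

# usage: the zero fraction is controlled directly by p_zero,
# independently of the count intensity lam
y = rhurdle_poisson(10_000, p_zero=0.4, lam=2.0)
zero_frac = (y == 0).mean()
```

This separation is what lets a hurdle model fit both zero-inflated and zero-deflated data: the zero probability is a free parameter rather than being tied to the Poisson mean.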


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号