Similar Documents
20 similar documents found.
1.
Abstract

A taboo-based Monte Carlo search, which restricts sampling of the region near an old configuration, is developed. In this procedure, Monte Carlo simulation and a random search method are combined to improve sampling efficiency. The feasibility of this method is tested on the global optimization of a continuous model function, the melting of 256 Lennard-Jones particles at T* = 0.680 and ρ* = 0.850, and polypeptides (alanine dipeptide and Met-enkephalin). From a comparison of the results for the model function between our method and other methods, we find an increased convergence rate and a high probability of escaping from local energy minima. The results for the Lennard-Jones solid and the polypeptides show that convergence to the equilibrium state is faster than with the other methods. No significant bias in the ensemble distribution is detected, even though the taboo-based Monte Carlo search does not formally sample the correct ensemble distribution owing to the restriction on sampling near an old configuration.
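A minimal sketch of the general idea, assuming a toy two-dimensional model function: Metropolis sampling in which proposals falling within a taboo radius of recently visited configurations are rejected outright. The toy energy, the function names, and all parameter values (`taboo_radius`, `taboo_len`, etc.) are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def energy(x):
    """Toy multi-minimum model function (not the one used in the paper)."""
    return 0.1 * np.sum(x**2) + np.sum(np.cos(3.0 * x))

def taboo_monte_carlo(n_steps=20000, dim=2, beta=2.0,
                      step=0.5, taboo_radius=0.3, taboo_len=50, seed=0):
    """Metropolis sampling with a taboo list: proposals that fall within
    `taboo_radius` of a recently visited configuration are rejected outright,
    pushing the walk away from regions it has just explored."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, dim)
    e = energy(x)
    taboo = [x.copy()]                      # recently visited configurations
    best_x, best_e = x.copy(), e
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step, dim)
        # Taboo restriction: skip proposals too close to recent configurations.
        if any(np.linalg.norm(x_new - t) < taboo_radius for t in taboo):
            continue
        e_new = energy(x_new)
        # Standard Metropolis acceptance on the non-taboo proposals.
        if e_new <= e or rng.random() < np.exp(-beta * (e_new - e)):
            x, e = x_new, e_new
            taboo.append(x.copy())
            taboo = taboo[-taboo_len:]      # keep only the most recent entries
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e

if __name__ == "__main__":
    print(taboo_monte_carlo())
```

As the abstract notes, rejecting proposals near recent configurations biases the sampled distribution, so a sketch like this is useful for optimization and equilibration rather than for strict ensemble averages.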

2.
G. H. Paine, H. A. Scheraga. Biopolymers, 1985, 24(8): 1391-1436
A new methodology for theoretically predicting the native, three-dimensional structure of a polypeptide is presented. Based on equilibrium statistical mechanics, an algorithm has been designed to determine the probable conformation of a polypeptide by calculating conditional free-energy maps for each residue of the macromolecule. The conditional free-energy map of each residue is computed from a set of probability integrals, obtained by summing over the interaction energies of all pairs of nonbonded atoms of the whole molecule. By locating the region(s) of lowest free energy on each map, the probable conformation of each residue can be identified. The native structure of the polypeptide is assumed to be the combination of the probable conformations of the individual residues. All multidimensional probability integrals are evaluated by an adaptive Monte Carlo algorithm (SMAPPS — Statistical-Mechanical Algorithm for Predicting Protein Structure). The Monte Carlo algorithm searches the entire conformational space, adjusting itself automatically to concentrate its sampling in regions where the magnitude of the integrand is largest ("importance sampling"). No assumptions are made about the native conformation. The only prior knowledge necessary for the prediction of the native conformation is the amino acid sequence of the polypeptide. To test the effectiveness of the algorithm, SMAPPS was applied to the prediction of the native conformation of the backbone of Met-enkephalin, a pentapeptide. In the calculations, only the backbone dihedral angles (φ and ψ) were allowed to vary; all side-chain (χ) and peptide-bond (ω) dihedral angles were kept fixed at the values corresponding to the alleged global minimum energy previously determined by direct energy minimization. For each conformation generated randomly by the Monte Carlo algorithm, the total conformational energy of the polypeptide was obtained from established empirical potential energy functions. Solvent effects were not included in the computations. With this initial application of SMAPPS, three distinct low-free-energy β-bend structures of Met-enkephalin were found. In particular, one of the structures has a conformation remarkably similar to the one associated with the previously alleged global minimum energy. The two additional structures of the pentapeptide have conformational energies lower than the previously computed low-energy structure. However, the Monte Carlo results are in agreement with an improved energy-minimization procedure. These initial results on the backbone structure of Met-enkephalin indicate that an equilibrium statistical-mechanical procedure, coupled with an adaptive Monte Carlo algorithm, can overcome many of the problems associated with the standard methods of direct energy minimization.
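The core statistical-mechanical step, turning sampled Boltzmann weights into a free-energy map, can be sketched in a few lines. This is a deliberately simplified, non-adaptive importance-sampling estimate over a single toy torsion angle, not SMAPPS itself: the toy potential, the uniform proposal, and the temperature value are assumptions for illustration.

```python
import numpy as np

KT = 0.6  # kcal/mol, roughly room temperature (assumed)

def torsion_energy(phi):
    """Toy torsional potential (radians); stand-in for ECEPP-type energies."""
    return 2.0 * (1.0 + np.cos(3.0 * phi)) + 0.5 * (1.0 - np.cos(phi))

def free_energy_profile(n_samples=200_000, n_bins=72, seed=0):
    rng = np.random.default_rng(seed)
    # Proposal: uniform over [-pi, pi); an adaptive scheme would instead
    # concentrate samples where the Boltzmann weight is largest.
    phi = rng.uniform(-np.pi, np.pi, n_samples)
    q = 1.0 / (2.0 * np.pi)                      # proposal density
    w = np.exp(-torsion_energy(phi) / KT) / q    # importance weights
    hist, edges = np.histogram(phi, bins=n_bins,
                               range=(-np.pi, np.pi), weights=w)
    p = hist / hist.sum()                        # estimated probability per bin
    with np.errstate(divide="ignore"):
        f = -KT * np.log(p)                      # free energy up to a constant
    return edges, f - np.nanmin(f)

edges, f = free_energy_profile()
i = np.nanargmin(f)
print("lowest-free-energy bin centre (deg):",
      np.degrees(0.5 * (edges[i] + edges[i + 1])))
```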

3.
Aim: In this study, we investigated the initial electron parameters of a Siemens Artiste linac with a 6 MV photon beam using the Monte Carlo method. Background: It is essential to define the characteristics of the initial electrons hitting the target, i.e. the mean energy and the full width at half maximum (FWHM) of the spatial intensity distribution, which are needed to run Monte Carlo simulations. The Monte Carlo method is the most accurate way to simulate radiotherapy treatments. Materials and methods: The linac head geometry was modeled using the BEAMnrc code. The phase space files were used as input to DOSXYZnrc simulations to determine the dose distribution in a water phantom. We obtained percent depth dose (PDD) curves and the lateral dose profile. All results were obtained at 100 cm SSD for a 10 × 10 cm2 field. Results: We found good agreement between the Monte Carlo simulation and the measurement data when we used an electron mean energy of 6.3 MeV and an FWHM of 0.30 cm as initial parameters. We observed that the FWHM value had very little effect on the PDD, while both the electron mean energy and the FWHM affected the lateral dose profile; however, these effects were within tolerance values. Conclusions: The initial parameters depend mainly on the components of the linac head. The phase space file obtained from the Monte Carlo simulation of a linac can be used for the calculation of scatter and MLC leakage, for comparing dose distributions in patients, and in various other studies.
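A sketch of the kind of parameter tuning described here, under the assumption that simulated PDD curves are available for a grid of candidate mean energies and FWHM values. The loader function is a placeholder, not the BEAMnrc/DOSXYZnrc interface, and the parameter ranges are illustrative.

```python
import numpy as np

def load_simulated_pdd(energy_mev, fwhm_cm):
    """Placeholder: in practice this would read the DOSXYZnrc dose output
    produced with the given initial-electron parameters."""
    raise NotImplementedError

def pdd_agreement(simulated, measured):
    """Mean absolute difference (%) between two normalised PDD curves
    sampled at the same depths."""
    s = 100.0 * simulated / simulated.max()
    m = 100.0 * measured / measured.max()
    return np.mean(np.abs(s - m))

def tune_initial_parameters(measured_pdd,
                            energies=np.arange(5.8, 6.9, 0.1),
                            fwhms=np.arange(0.10, 0.41, 0.05)):
    """Brute-force grid search over candidate mean energies (MeV) and
    spatial FWHM values (cm); returns the best-matching pair."""
    best = (None, None, np.inf)
    for e in energies:
        for f in fwhms:
            diff = pdd_agreement(load_simulated_pdd(e, f), measured_pdd)
            if diff < best[2]:
                best = (e, f, diff)
    return best
```

In practice the lateral profile would be included in the figure of merit as well, since the abstract notes that the PDD alone is insensitive to the FWHM.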

4.
Purpose: The main focus of the current paper is the clinical implementation of a Monte Carlo based platform for treatment plan validation for Tomotherapy and Cyberknife, without adding additional tasks to the dosimetry department. Methods: The Monte Carlo platform consists of C++ classes for the actual functionality and a web-based GUI that allows accessing the system using a web browser. Calculations are based on BEAMnrc/DOSXYZnrc and/or GATE and are performed automatically after exporting the DICOM data from the treatment planning system. For Cyberknife treatments of moving targets, the log files saved during treatment (robot position, internal fiducials, and external markers) can be used in combination with the 4D planning CT to reconstruct the actually delivered dose. The Monte Carlo platform is also used for calculations on MRI images, using pseudo-CT conversion. Results: For Tomotherapy treatments we obtain excellent agreement (within 2%) for almost all cases. However, we were able to detect a problem regarding the CT Hounsfield unit definition of the Toshiba Large Bore CT when using a large reconstruction diameter. For Cyberknife treatments we obtain excellent agreement with the Monte Carlo algorithm of the treatment planning system. For some extreme cases, when treating small lung lesions in low-density lung tissue, small differences are obtained due to the different cut-off energy of the secondary electrons. Conclusions: A Monte Carlo based treatment plan validation tool has been successfully implemented in clinical routine and is used to systematically validate all Cyberknife and Tomotherapy plans.

5.
Purpose: To elucidate the effects of multiple scattering and energy-loss straggling on electron beams slowing down in materials. Methods: EGSnrc Monte Carlo simulations are done using a purpose-written user code. Results: Plots are presented of the primary electron's energy as a function of pathlength for 20 MeV electrons incident on water and tantalum, as are plots of the overall distribution of pathlengths as the 20 MeV electrons slow down under various Monte Carlo scenarios in water and tantalum. The distributions range from 1% to 135% of the CSDA range in water and from 1% to 186% in tantalum. The effects of energy-loss straggling on energy spectra at depth and on electron fluence at depth are also presented. Conclusions: Energy-loss straggling and multiple scattering are shown to play a significant role in the range straggling that determines the dose fall-off region of electron-beam depth-dose curves, and a significant role in the energy distributions as a function of depth.

6.
Purpose: To demonstrate the feasibility of gold-specific spectral CT imaging for the detection of liver lesions in humans at low concentrations of gold as a targeted contrast agent. Methods: A Monte Carlo simulation study of spectral CT imaging with a photon-counting, energy-resolving detector (with 6 energy bins) was performed in a realistic phantom of the human abdomen. The detector energy thresholds were optimized for the detection of gold. The simulation results were reconstructed with a K-edge imaging algorithm; the reconstructed gold-specific images were filtered and evaluated with respect to signal-to-noise ratio and contrast-to-noise ratio (CNR). Results: The simulations demonstrate the feasibility of spectral CT, with CNRs of the specific gold signal between 2.7 and 4.8 after bilateral filtering. Using the optimized bin thresholds increases the CNRs of the lesions by up to 23% compared to the bin thresholds described in earlier studies. Conclusions: Gold is a promising new contrast agent for spectral CT in humans; minimum tissue mass fractions of 0.2 wt% of gold are required for sufficient image contrast.
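The CNR figure of merit used above can be computed directly from regions of interest in the reconstructed gold-specific image. A minimal sketch, with synthetic ROI values standing in for real voxel data (all numbers assumed):

```python
import numpy as np

def contrast_to_noise_ratio(lesion_roi, background_roi):
    """CNR as commonly defined for lesion detectability:
    |mean(lesion) - mean(background)| / std(background)."""
    lesion = np.asarray(lesion_roi, dtype=float)
    background = np.asarray(background_roi, dtype=float)
    return abs(lesion.mean() - background.mean()) / background.std()

# Example with synthetic ROI values (arbitrary units):
rng = np.random.default_rng(1)
lesion = rng.normal(12.0, 2.0, 500)       # gold-enhanced lesion voxels
background = rng.normal(4.0, 2.0, 5000)   # surrounding liver voxels
print(f"CNR = {contrast_to_noise_ratio(lesion, background):.1f}")
```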

7.
D. R. Ripoll, H. A. Scheraga. Biopolymers, 1988, 27(8): 1283-1303
A new approach to the multiple-minima problem in protein folding is presented. It is assumed that the molecule is driven toward the native structure by three types of mechanism. The first involves an optimization of the electrostatic interactions, whereby the molecule evolves toward conformations in which the charge distribution becomes energetically more favorable. The second mechanism involves a Monte Carlo–energy-minimization approach, and the third is a backtrack mechanism that acts in the opposite direction, increasing the energy; this third type of movement provides a means to perturb the molecule when it is trapped in a stable but energetically unfavorable local energy minimum. This paper describes the implementation of a model based on these mechanisms, and illustrates its effectiveness by computations on different arbitrary starting conformations of a terminally blocked 19-residue chain of poly(L-alanine), for which the global minimum apparently corresponds to the right-handed α-helix. In all cases, the global minimum was attained, even when the starting conformation was a left-handed α-helix. In the latter case, the trajectory of conformations passed through partially melted forms of the left-handed α-helix (because of electrostatic defects at the ends), and then through the formation of structures leading to the more stable right-handed α-helix.

8.
Purpose: This work describes the integration of the M6 Cyberknife in the Moderato Monte Carlo platform, and introduces a machine learning method to accelerate the modelling of a linac. Methods: The MLC-equipped M6 Cyberknife was modelled and integrated in Moderato, our in-house platform offering independent verification of radiotherapy dose distributions. The model was validated by comparing TPS dose distributions with Moderato calculations and by film measurements. Using this model, a machine learning algorithm was trained to find electron beam parameters for other M6 devices by simulating dose curves with varying spot size and energy. The algorithm was optimized using cross-validation and tested with measurements from other institutions equipped with an M6 Cyberknife. Results: Optimal agreement for the Monte Carlo model was reached for a monoenergetic electron beam of 6.75 MeV with a Gaussian spatial distribution of 2.4 mm FWHM. Clinical plan dose distributions from Moderato agreed within 2% with the TPS, and film measurements confirmed the accuracy of the model. Cross-validation of the prediction algorithm produced mean absolute errors of 0.1 MeV and 0.3 mm for beam energy and spot size, respectively. Prediction-based simulated dose curves for other centres agreed within 3% with measurements, except for one device where differences up to 6% were detected. Conclusions: The M6 Cyberknife was integrated in Moderato and validated through dose re-calculations and film measurements. The prediction algorithm was successfully applied to obtain electron beam parameters for other M6 devices. This method could prove useful for speeding up the modelling of new machines in Monte Carlo systems.
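A sketch of the kind of learning step described here: a regressor mapping dose-curve features to electron beam energy and spot size, evaluated by cross-validation. The feature construction, the synthetic mapping standing in for the Monte Carlo simulation, the parameter ranges, and the use of scikit-learn are all assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

def simulate_dose_features(energy_mev, fwhm_mm, rng):
    """Placeholder for Monte Carlo dose curves: returns a feature vector
    (e.g. PDD samples and a profile-width measure) for given beam parameters.
    A smooth synthetic mapping plus noise stands in for the simulation."""
    depths = np.linspace(0.5, 25.0, 30)
    pdd = np.exp(-depths / (2.0 + 0.6 * energy_mev))
    penumbra = 0.8 * fwhm_mm + 0.05 * energy_mev
    return np.concatenate([pdd, [penumbra]]) + rng.normal(0, 0.01, 31)

rng = np.random.default_rng(0)
energies = rng.uniform(6.0, 7.5, 200)          # candidate mean energies (MeV)
fwhms = rng.uniform(1.5, 3.5, 200)             # candidate spot sizes (mm)
X = np.array([simulate_dose_features(e, f, rng) for e, f in zip(energies, fwhms)])
y = np.column_stack([energies, fwhms])

model = RandomForestRegressor(n_estimators=200, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=5)  # 5-fold cross-validation
print("MAE energy (MeV):", mean_absolute_error(y[:, 0], y_pred[:, 0]))
print("MAE FWHM (mm):  ", mean_absolute_error(y[:, 1], y_pred[:, 1]))
```

Once trained on simulated curves, such a model can be applied to measured curves from another device to propose starting beam parameters, which are then refined by full Monte Carlo verification.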

9.
The relaxed potential energy surfaces of chitobiose were calculated with the MM3 force field by optimizing dimer structures on a 10° grid of the torsional angles about the glycosidic bonds (Φ,Ψ). Thirty-six conformations (the four combinations of hydroxymethyl group orientations coupled with the nine orientations of the secondary groups) were assumed for each (Φ,Ψ) conformation. The four conformations differing in the hydroxymethyl group orientations were considered for the whole (Φ,Ψ) space, and all 36 conformations for the restricted low-energy region. While the resulting energy map and the structures of the energy minima were similar in many respects to those proposed for cellobiose, a more restricted energy profile was suggested for the relaxed map of chitobiose, where the differences in energy level between the global minimum and the local minima were within 5.4 kcal/mol, compared with the equivalent value of 3.6 kcal/mol for cellobiose. Further depression of the global minimum occurred when the acidic residue was used. Monte Carlo samples of the chitosan chain were generated based on the relaxed map to predict the unperturbed coil dimension in solution. The chitosan chains showed Gaussian behavior at x = 500 (x, degree of polymerization) and gave a characteristic ratio Cx of about 70, which is much larger than the experimental values observed for chitosan and cellulosic chains. © 1994 John Wiley & Sons, Inc.

10.
Purpose: To develop a particle transport code to compute w-values and stopping powers of swift ions in liquid water and in gases of interest for reference dosimetry in hadrontherapy, and to analyze the relevance of the inelastic and post-collisional processes considered. Methods: The Monte Carlo code MDM was extended to the case of swift ion impact on liquid water (MDM-Ion). Relativistic corrections in the inelastic cross sections and post-collisional Auger emission were considered. The effects of introducing different electronic excitation cross sections were also studied. Results: The stopping powers of swift ions in liquid water calculated with MDM-Ion are in excellent agreement with recommended data. The w-values show a strong dependence on the electronic excitation cross sections and on the Auger electron emission. Comparisons with other Monte Carlo codes show the relevance both of the processes considered and of the cross sections employed. W and w-values for swift electrons, protons, and carbon ions calculated with the MDM and MDM-Ion codes are in very close agreement with each other and with the experimental value of 20.8 eV. Conclusion: We found that w-values in liquid water are independent of ion charge and energy, as assumed in reference dosimetry for hadrontherapy from sparse experimental results for electron and ion impact on gases. The excitation cross sections and Auger emission included in Monte Carlo codes are critical in w-value calculations. The computation of this physical parameter should be used as a benchmark for micro-dosimetry investigations, to assess the reliability of the cross sections employed.

11.
Purpose: Patient dose estimation in X-ray computed tomography (CT) is generally performed by Monte Carlo simulation of photon interactions within anthropomorphic or cylindrical phantoms. An accurate Monte Carlo simulation requires an understanding of the effects of the bow-tie filter installed in a CT scanner, i.e. the change of X-ray energy and air kerma along the fan-beam arc of the scanner. To measure the effective energy and air kerma distributions, we devised a pin-photodiode array utilizing eight channels of X-ray sensors arranged at regular intervals along the fan-beam arc of the CT scanner. Methods: Each X-ray sensor consisted of two plate-type pin silicon photodiodes in tandem (front and rear photodiodes) and a lead collimator, which only allowed X-rays to impinge perpendicularly onto the silicon surface of the photodiodes. The effective energy of the X-rays was calculated from the ratio of the output voltages of the two photodiodes, and the dose was calculated from the output voltage of the front photodiode, using the energy and dose calibration curves, respectively. Results: The pin-photodiode array allowed the calculation of X-ray effective energies and relative doses at eight points simultaneously along the fan-beam arc of a CT scanner during a single rotation of the scanner. Conclusions: The fan-beam energy and air kerma distributions of CT scanners can be effectively measured using this pin-photodiode array.
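A sketch of the calibration-curve lookup this describes: the front/rear output-voltage ratio is converted to effective energy by interpolating a measured calibration curve, and the front-photodiode voltage is converted to air kerma via an energy-dependent sensitivity. The calibration points below are illustrative placeholders, not measured values.

```python
import numpy as np

# Illustrative calibration data (NOT measured values): ratio of front to rear
# photodiode output voltages versus known effective energy (keV).
calib_ratio = np.array([9.0, 6.5, 4.8, 3.7, 3.0, 2.5])
calib_energy_kev = np.array([40.0, 50.0, 60.0, 70.0, 80.0, 90.0])

# Dose calibration: front-photodiode voltage per unit air kerma at each energy.
calib_volt_per_mgy = np.array([2.1, 2.0, 1.9, 1.85, 1.8, 1.78])

def effective_energy(v_front, v_rear):
    """Interpolate the energy calibration curve; np.interp needs an
    increasing x-axis, so the (decreasing) ratio axis is reversed."""
    ratio = v_front / v_rear
    return np.interp(ratio, calib_ratio[::-1], calib_energy_kev[::-1])

def air_kerma_mgy(v_front, energy_kev):
    sensitivity = np.interp(energy_kev, calib_energy_kev, calib_volt_per_mgy)
    return v_front / sensitivity

e = effective_energy(v_front=3.4, v_rear=0.8)
print(f"effective energy ~ {e:.1f} keV, air kerma ~ {air_kerma_mgy(3.4, e):.2f} mGy")
```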

12.
Introduction: We present a beam model for Monte Carlo simulations of the IBA pencil beam scanning dedicated nozzle installed at the Skandion Clinic. Within the nozzle, apart from the entrance and exit windows and the two ion chambers, the beam traverses vacuum, allowing for a beam that is convergent downstream of the nozzle exit. Materials and methods: We model the angular, spatial, and energy distributions of the beam phase space at the nozzle exit with single Gaussians, controlled by seven energy-dependent parameters. The parameters were determined from measured profiles and depth dose distributions. Verification of the beam model was done by comparing measured and GATE-acquired relative dose distributions, using plan-specific log files from the machine to specify beam spot positions and energies. Results: GATE-based simulations with the acquired beam model accurately reproduced the measured data. The gamma index analysis comparing simulated and measured dose distributions resulted in >95% global gamma index pass rates (3%/2 mm) for all depths. Conclusion: The developed beam model was found to be sufficiently accurate for use with GATE, e.g. for applications in quality assurance (QA) or patient motion studies with the IBA pencil beam scanning dedicated nozzles.
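A sketch of the single-Gaussian phase-space parameterisation described above, drawing particle positions, angles, and energies at the nozzle exit. The numerical values, the parameter names, and the simple position-angle coupling used to mimic a convergent beam are all assumptions; a real model would interpolate the seven fitted, energy-dependent parameters.

```python
import numpy as np

def sample_spot(n, nominal_energy_mev, params, rng=None):
    """Draw `n` particles from single-Gaussian spatial, angular and energy
    distributions at the nozzle exit. `params` maps the nominal energy to
    illustrative model parameters."""
    rng = rng or np.random.default_rng()
    p = params[nominal_energy_mev]
    x = rng.normal(0.0, p["sigma_x_mm"], n)          # lateral position (mm)
    y = rng.normal(0.0, p["sigma_y_mm"], n)
    # Negative position-angle correlation gives a convergent beam downstream.
    theta_x = -p["convergence_mrad_per_mm"] * x + rng.normal(0.0, p["sigma_theta_mrad"], n)
    theta_y = -p["convergence_mrad_per_mm"] * y + rng.normal(0.0, p["sigma_theta_mrad"], n)
    energy = rng.normal(nominal_energy_mev, p["sigma_e_mev"], n)
    return np.column_stack([x, y, theta_x, theta_y, energy])

# Illustrative parameter set for a single nominal energy (all values assumed).
params = {160.0: dict(sigma_x_mm=2.8, sigma_y_mm=2.8, sigma_theta_mrad=3.0,
                      convergence_mrad_per_mm=0.5, sigma_e_mev=0.7)}
phase_space = sample_spot(10_000, 160.0, params, np.random.default_rng(0))
print(phase_space.mean(axis=0), phase_space.std(axis=0))
```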

13.
Aim: The aim of this work was to develop multiple-source models for electron beams of the NEPTUN 10PC medical linear accelerator using the BEAMDP computer code. Background: One of the most accurate techniques for radiotherapy dose calculation is Monte Carlo (MC) simulation of radiation transport, which requires detailed information about the beam in the form of a phase-space file. The computing time required to simulate the beam data and obtain phase-space files from a clinical accelerator is significant. Calculating dose distributions using multiple-source models is an alternative to using phase-space data as direct input to the dose calculation system. Materials and methods: Monte Carlo simulation of the accelerator head was performed, in which a record was kept of the particle phase space, i.e. the details of each particle's history. Multiple-source models were built from the phase-space files of the Monte Carlo simulations. These simplified beam models were used to generate Monte Carlo dose calculations and to compare those calculations with the phase-space data for the electron beams. Results: Comparison of the measured and calculated dose distributions using the phase-space files and the multiple-source models for three electron beam energies showed that the measured and calculated values match each other well throughout the curves. Conclusion: Dose distributions calculated using the multiple-source models and the phase-space data agree within 1.3%, demonstrating that the models can be used for dosimetry research purposes and dose calculations in radiotherapy.

14.
Abstract

The Detailed Balance Energy-Scaled Displacement Monte Carlo method, which stems from the previously published Energy-Scaled Displacement Monte Carlo method, is presented. The results of tests performed on a dense Lennard-Jones liquid and on two particles in one dimension are reported.

15.
Abstract

We show that the classical Metropolis Monte Carlo (MMC) algorithm converges very slowly when applied to the primitive electrolyte environment of a high charge-density polyelectrolyte. This slowness of convergence, which is due to the large density inhomogeneity around the polyelectrolyte, produces noticeable errors in the ion distribution functions for MMC runs of 1.3 × 10⁶ trial steps started from nonequilibrium distributions. We report that an algorithm which we call DSMC (density-scaled Monte Carlo) overcomes this problem and provides relatively rapid convergence in this application. We suggest that DSMC should be well suited for other Monte Carlo simulations of physical systems in which large density inhomogeneities occur.

16.
Abstract

Pseudoexperimental data of high accuracy on the pressure and the internal energy of the Lennard-Jones fluid have been generated by both the Monte Carlo and molecular dynamics methods for five subcritical and three supercritical isotherms. Values of the chemical potential of the Lennard-Jones fluid, computed by a new version of the gradual insertion particle method for two isotherms up to very high densities, are also reported, discussed, and compared with existing data.
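For context, the Monte Carlo averages mentioned here (internal energy and virial pressure along an isotherm) can be accumulated with a compact NVT Metropolis scheme. This is a generic textbook-style sketch in reduced units with a truncated, unshifted potential and no long-range corrections, not the production method used to generate the reported data; all parameter values are assumed.

```python
import numpy as np

def lj_mc(n=64, rho=0.6, T=1.5, n_sweeps=2000, n_equil=500,
          max_disp=0.15, rcut=2.5, seed=0):
    """Minimal NVT Metropolis Monte Carlo for the Lennard-Jones fluid
    (reduced units). Returns average potential energy per particle and
    virial pressure."""
    rng = np.random.default_rng(seed)
    L = (n / rho) ** (1.0 / 3.0)
    pos = rng.uniform(0.0, L, (n, 3))

    def pair_terms(i, r_i, positions):
        d = positions - r_i
        d -= L * np.round(d / L)                 # minimum-image convention
        r2 = np.einsum("ij,ij->i", d, d)
        r2[i] = np.inf                           # exclude particle i itself
        mask = r2 < rcut * rcut
        inv6 = (1.0 / r2[mask]) ** 3
        e = np.sum(4.0 * (inv6 * inv6 - inv6))
        vir = np.sum(24.0 * (2.0 * inv6 * inv6 - inv6))   # -r*dU/dr per pair
        return e, vir

    e_acc = p_acc = samples = 0
    for sweep in range(n_sweeps):
        for i in range(n):
            e_old, _ = pair_terms(i, pos[i], pos)
            trial = (pos[i] + rng.uniform(-max_disp, max_disp, 3)) % L
            e_new, _ = pair_terms(i, trial, pos)
            if e_new <= e_old or rng.random() < np.exp(-(e_new - e_old) / T):
                pos[i] = trial
        if sweep >= n_equil:
            e_tot = vir_tot = 0.0
            for i in range(n):
                e_i, v_i = pair_terms(i, pos[i], pos)
                e_tot += 0.5 * e_i               # each pair counted twice
                vir_tot += 0.5 * v_i
            e_acc += e_tot / n
            p_acc += rho * T + vir_tot / (3.0 * L**3)
            samples += 1
    return e_acc / samples, p_acc / samples

print(lj_mc(n_sweeps=200, n_equil=100))          # short demo run
```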

17.
Introduction

The Monte Carlo technique is widely used and recommended for including uncertainties in LCA. Typically, 1000 or 10,000 runs are done, but a clear argument for that number is not available, and with the growing size of LCA databases, an excessively high number of runs can be time-consuming. We therefore investigate whether a large number of runs is useful, or whether it might be unnecessary or even harmful.

Probability theory

We review the standard theory of probability distributions for describing stochastic variables, including the combination of different stochastic variables in a calculation. We also review the standard theory of inferential statistics for estimating a probability distribution from a sample of values. For estimating the distribution of a function of probability distributions, two major techniques are available: analytical, applying probability theory, and numerical, using Monte Carlo simulation. Because the analytical technique is often unavailable, the obvious way out is Monte Carlo. However, we demonstrate and illustrate that it leads to overly precise conclusions on the values of estimated parameters, and to incorrect hypothesis tests.

Numerical illustration

We demonstrate the effect for two simple cases: one system in a stand-alone analysis, and a comparative analysis of two alternative systems. Both cases illustrate that statistical hypotheses that should not be rejected are in fact rejected in a highly convincing way, thus pointing out a fundamental flaw.

Discussion and conclusions

Apart from the obvious recommendation to use larger samples for estimating input distributions, we suggest restricting the number of Monte Carlo runs to a number no greater than the sample sizes used for the input parameters. As a final note, when the input parameters are not estimated from samples but through a procedure, such as the popular pedigree approach, the Monte Carlo approach should not be used at all.
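The effect warned about above is easy to reproduce: as the number of Monte Carlo runs grows, the standard error of the sample means shrinks, so a two-sample test between two nearly identical alternatives ends up rejecting the null hypothesis regardless of how little the underlying input data could actually support that distinction. A toy demonstration with assumed numbers (not the paper's case study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two alternative systems whose "true" impact scores barely differ.
mu_a, mu_b, sigma = 100.0, 100.5, 10.0

for n_runs in (100, 1_000, 100_000):
    a = rng.normal(mu_a, sigma, n_runs)     # Monte Carlo output, system A
    b = rng.normal(mu_b, sigma, n_runs)     # Monte Carlo output, system B
    t, p = stats.ttest_ind(a, b)
    print(f"{n_runs:>7} runs: p = {p:.3g}"
          f"  ->  {'reject' if p < 0.05 else 'do not reject'} H0 (equal means)")

# With enough runs the tiny 0.5% difference becomes 'highly significant',
# even though the input data could never support such a precise distinction.
```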


18.
G. H. Paine, H. A. Scheraga. Biopolymers, 1986, 25(8): 1547-1563
The average conformation of Met-enkephalin was determined by using an adaptive, importance-sampling Monte Carlo algorithm (SMAPPS — Statistical-Mechanical Algorithm for Predicting Protein Structure). In the calculation, only the backbone dihedral angles (φ and ψ) were allowed to vary; i.e., all side-chain (χ) and peptide-bond (ω) dihedral angles were kept fixed at the values corresponding to a low-energy structure of the pentapeptide. The total conformational energy for each randomly generated structure of the polypeptide was obtained by summing over the interaction energies of all pairs of nonbonded atoms of the whole molecule. The interaction energies were computed with the program ECEPP/2 (Empirical Conformational Energy Program for Peptides). Solvent effects were not included in the computation. The calculation was repeated until a total of 10 independent average conformations were established. The regions of conformational space occupied by the average structures were compared with the regions of low conditional free energy obtained by SMAPPS in the first paper of this series. Such a comparison provides an analysis of the capacity of SMAPPS to adjust the Monte Carlo search to regions of highest probability. The results demonstrate that the ability of SMAPPS to focus the Monte Carlo search is excellent. Finally, the 10 independent average conformations and the mean of the 10 average structures were used as the initial conformations for a direct energy minimization of the pentapeptide. Of the 11 final energy-minimized structures, three of the conformations were found to be equivalent to the conformation of lowest energy determined previously. In addition, all but two of the remaining energy-minimized structures were found to correspond to one of the two other conformations of high probability obtained in the first paper of this series. These results indicate that a set of independent average conformations can provide a rational, unbiased choice for the initial conformation to be used in a direct energy minimization of a polypeptide. The final energy-minimized structures consequently constitute a set of low-energy conformations, which include the global energy minimum.

19.
Purpose

California’s Central Valley produces more than 75% of global commercial almond supply, making the life cycle performance of almond production in California of global interest. This article describes the life cycle assessment of California almond production using a Scalable, Process-based, Agronomically Responsive Cropping System Life Cycle Assessment (SPARCS-LCA) model that includes crop responses to orchard management and modeling of California’s water supply and biomass energy infrastructure.

Methods

A spatially and temporally resolved LCA model was developed to reflect the regional climate, resource, and agronomic conditions across California's Central Valley by hydrologic subregion (San Joaquin Valley, Sacramento Valley, and Tulare Lake regions). The model couples an LCA framework with region-specific data, including water supply infrastructure and economics, crop productivity response models, and dynamic co-product markets, to characterize the environmental performance of California almonds. Previous LCAs of California almonds found that irrigation and the management of co-products were most influential in determining life cycle CO2eq emissions and the energy intensity of California almond production; both have undergone extensive changes since previous studies because of drought and changing regulatory conditions, making them a focus of the sensitivity and scenario analyses.

Results and discussion

Results using economic allocation show that 1 kg of hulled, brown-skin almond kernel at the post-harvest facility gate causes 1.92 kg CO2eq (GWP100), 50.9 MJ energy use, and 4820 L freshwater use, with regional ranges of 2.0–2.69 kg CO2eq, 42.7–59.4 MJ, and 4540–5150 L, respectively. With a substitution approach to co-product allocation, 1 kg of almond kernel results in 1.23 kg CO2eq, 18.05 MJ energy use, and 4804 L freshwater use, with regional ranges of 0.51–1.95 kg CO2eq, 3.68–36.5 MJ, and 4521–5140 L, respectively. Almond freshwater use is comparable with that of other nut crops in California and globally. Results showed significant variability across subregions. While the San Joaquin Valley performed best in most impact categories, the Tulare Lake region produced the lowest eutrophication impacts.

Conclusion

While the CO2eq and energy intensity of almond production increased over previous estimates, so too did the credits to the system for displacement of dairy feed. These changes result from a more comprehensive model scope and improved assumptions, as well as from drought-related increases in groundwater depth and associated energy demand, and decreased utilization of biomass residues for energy recovery due to the closure of bioenergy plants in California. The variation in impact categories between subregions and over time highlights the need for spatially and temporally resolved agricultural LCA.


20.
Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interactions with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components of loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain-growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DiSGro). With an energy function designed specifically for loops, our method can efficiently generate high-quality, low-energy loop conformations that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is Å, with a lowest-energy RMSD of Å and an average ensemble RMSD of Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about cpu minutes for 12-residue loops, compared to ca cpu minutes using the FALCm method. Test results on benchmark datasets show that DiSGro performs comparably to or better than previous successful methods, while requiring far less computing time. DiSGro is especially effective in modeling longer loops (– residues).
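The chain-growth sequential Monte Carlo idea can be illustrated with a Rosenbluth-style sketch on a two-dimensional toy chain: each residue's direction is drawn from a set of trial moves weighted by their Boltzmann factors, and a soft penalty on the gap that still has to be closed mimics distance guidance toward the loop anchor. The toy geometry, the closure energy, and all parameter names are assumptions; this is not the DiSGro energy function or sampler.

```python
import numpy as np

def grow_loop(n_res, start, target, bond=3.8, beta=1.0, k_close=0.2,
              n_trials=20, rng=None):
    """Rosenbluth-style chain growth in 2D (toy model): each step picks one of
    `n_trials` candidate directions with probability proportional to its
    Boltzmann factor; a harmonic term on the unreachable part of the remaining
    gap mimics distance guidance toward the loop anchor.
    Returns the positions and the Rosenbluth weight of the conformation."""
    rng = rng or np.random.default_rng()
    target = np.asarray(target, float)
    pos = [np.asarray(start, float)]
    log_w = 0.0
    for i in range(n_res):
        remaining = (n_res - i - 1) * bond          # reach left after this step
        angles = rng.uniform(0.0, 2.0 * np.pi, n_trials)
        cand = pos[-1] + bond * np.column_stack([np.cos(angles), np.sin(angles)])
        gap = np.linalg.norm(cand - target, axis=1)
        # Penalise candidates from which the anchor can no longer be reached.
        excess = np.maximum(gap - remaining, 0.0)
        boltz = np.exp(-beta * k_close * excess**2)
        if boltz.sum() == 0.0:
            return None, 0.0                        # dead end: zero weight
        pick = rng.choice(n_trials, p=boltz / boltz.sum())
        pos.append(cand[pick])
        log_w += np.log(boltz.sum() / n_trials)     # Rosenbluth weight update
    return np.array(pos), np.exp(log_w)

loop, w = grow_loop(12, start=(0.0, 0.0), target=(20.0, 5.0),
                    rng=np.random.default_rng(3))
print("end-to-anchor gap:", np.linalg.norm(loop[-1] - np.array([20.0, 5.0])),
      "weight:", w)
```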
