Similar Articles
20 similar articles found (search time: 31 ms)
1.
This article describes the construction of an inexpensive and reliable data acquisition system for a Varian E-line Century Series ESR spectrometer utilizing an Apple II Plus microcomputer. All necessary hardware is readily available and used without modification. A BASIC program for routine collection, display, plotting and disk storage of experimental data has been written and subsequently compiled into machine code for high speed operation. The interface offers distinct advantages in spectral resolution as well as instrument control. An example of signal enhancement via computer controlled time averaging is presented for a spin labeled DNA experiment. The technique has recently been applied to studies of relative binding affinities of gene-32 protein for various spin-labeled polynucleotides.
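The signal enhancement via computer-controlled time averaging mentioned in the abstract can be sketched numerically (a hypothetical illustration, not the paper's BASIC program): the coherent signal adds across repeated sweeps while random noise averages out, so residual noise falls roughly as 1/√N.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.exp(-((t - 0.5) ** 2) / 0.001)   # idealized ESR absorption line

def acquire(n_sweeps, noise_sd=1.0):
    """Average n_sweeps noisy sweeps: the signal adds coherently, noise cancels."""
    sweeps = signal + rng.normal(0.0, noise_sd, size=(n_sweeps, t.size))
    return sweeps.mean(axis=0)

# Residual noise falls roughly as 1/sqrt(N): 100 sweeps -> ~10x cleaner trace.
noisy = acquire(1) - signal
averaged = acquire(100) - signal
assert averaged.std() < noisy.std() / 3
```

The same principle underlies any signal-averaging acquisition loop, whatever the hardware driving it.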

2.
Server scalability is more important than ever in today's client/server-dominated network environments. Recently, researchers have begun to consider cluster-based computers using commodity hardware as an alternative to expensive specialized hardware for building scalable Web servers. In this paper, we present performance results comparing two cluster-based Web servers based on different server architectures: OSI layer-two dispatching (LSMAC) and OSI layer-three dispatching (LSNAT). Both cluster-based server systems were implemented as application-space programs running on commodity hardware, in contrast to other, similar solutions that require specialized hardware/software. We point out the advantages and disadvantages of both systems. We also identify when servers should be clustered and when clustering will not improve performance. This revised version was published online in July 2006 with corrections to the Cover Date.
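Independent of the dispatching layer, a cluster front end needs a backend-selection policy. The following is a conceptual round-robin sketch only (the paper's LSMAC/LSNAT systems rewrite layer-2 frames and layer-3 packets in flight, which is not shown here; all names are illustrative):

```python
from itertools import cycle

class Dispatcher:
    """Toy round-robin dispatcher: shows only the backend-selection policy,
    not the frame/packet rewriting that LSMAC/LSNAT perform."""

    def __init__(self, backends):
        self._ring = cycle(backends)

    def route(self, request):
        # The request itself is ignored by pure round-robin.
        return next(self._ring)

d = Dispatcher(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
assert [d.route(r) for r in range(6)] == ["10.0.0.1", "10.0.0.2", "10.0.0.3"] * 2
```

Round-robin spreads load evenly only when requests cost roughly the same; the paper's measurements identify when such dispatching helps and when it does not.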

3.
A wide variety of information or ‘metadata’ is required when undertaking dendrochronological sampling. Traditionally, researchers record observations and measurements in field notebooks and/or on paper recording forms, and use digital cameras and hand-held GPS devices to capture images and record locations. In the lab, field notes are often manually entered into spreadsheets or personal databases, which are then sometimes linked to images and GPS waypoints. This process is both time consuming and prone to human and instrument error. Specialised hardware technology exists to marry these data sources, but costs can be prohibitive for small-scale operations (>$2000 USD). Such systems often include proprietary software that is tailored to very specific needs and might require a high level of expertise to use. We report on the successful testing and deployment of a dendrochronological field data collection system utilising affordable off-the-shelf devices ($100–300 USD). The method builds upon established open source software that has been widely used in developing countries for public health projects as well as to assist in disaster recovery operations. It includes customisable forms for digital data entry in the field, and marries accurate GPS locations and geotagged photographs (with possible extensions to other measuring devices via Bluetooth) into structured data fields that are easy to learn and operate. Digital data collection is less prone to human error and efficiently captures a range of important metadata. In our experience, the hardware proved field-worthy in terms of size, ruggedness, and dependability (e.g., battery life). The system integrates directly with the Tellervo software to both create forms and populate the database, providing end users with the ability to tailor the solution to their particular field data collection needs.

4.
Advances in gel-based nonradioactive protein expression and PTM detection using fluorophores have served as the impetus for developing analytical instrumentation with improved imaging capabilities. We describe a CCD camera-based imaging instrument, equipped with both a high-pressure Xenon arc lamp and a UV transilluminator, which provides broad-band wavelength coverage (380-700 nm and UV). With six-position filter wheels, both excitation and emission wavelengths may be selected, providing optimal measurement and quantitation of virtually any dye and allowing excellent spectral resolution among different fluorophores. While the spatial resolution of conventional fixed CCD camera imaging systems is typically inferior to that of laser scanners, this problem is circumvented in the new instrument by mechanically scanning the CCD camera over the sample and collecting multiple images that are subsequently and automatically reconstructed into a complete high-resolution image. By acquiring images in succession, as many as four different fluorophores may be evaluated from a gel. The imaging platform is suitable for analysis of the wide range of dyes and tags commonly encountered in proteomics investigations. The instrument is unique in its capabilities of scanning large areas at high resolution and providing accurate selectable illumination over the UV/visible spectral range, thus maximizing the efficiency of dye multiplexing protocols.

5.
Nonstructured line scales (NLS) are widely used in sensory and consumer research, normally generating a large amount of data to be entered into computers for statistical analysis. This process can be greatly accelerated with the use of special hardware and software. Available systems are efficient but costly. To overcome this limitation, a standard mouse was modified to be used as a measuring instrument, and a simple QBASIC program was developed to write the measured data to an ASCII file. The cost of the modified mouse was $60, and data input was 5 times faster than measuring distances with a ruler. Experiments designed to test the mouse showed that measurement errors were small.
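The measuring idea can be sketched as follows (a hypothetical reconstruction, not the paper's QBASIC program; the counts-per-inch value is an assumption that would be calibrated per device). Note that the mouse resolution cancels when the mark is expressed relative to the measured length of the scale itself:

```python
CPI = 400  # counts per inch of the modified mouse (assumed; calibrate per device)

def counts_to_cm(counts, cpi=CPI):
    """Convert raw mouse movement counts to centimetres."""
    return counts / cpi * 2.54

def nls_score(mark_counts, scale_counts, scale_cm=10.0):
    """Position of a respondent's mark on a nonstructured line scale,
    expressed in scale units (0..scale_cm).  The CPI cancels in the ratio."""
    return counts_to_cm(mark_counts) / counts_to_cm(scale_counts) * scale_cm

# a mark three-quarters of the way along a 10 cm line
assert abs(nls_score(1200, 1600) - 7.5) < 1e-9
```

Because only the ratio of mark position to total scale length matters, an uncalibrated device still yields correct scores, which is what makes a $60 modified mouse viable.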

6.
ABSTRACT: BACKGROUND: Downstream applications in metabolomics, as well as mathematical modelling, require data in a quantitative format, which may also necessitate the automated and simultaneous quantification of numerous metabolites. Although numerous applications have been previously developed for metabolomics data handling, automated calibration and calculation of concentrations in terms of μmol have not been carried out. Moreover, most metabolomics applications are designed for GC-MS and would not be suitable for LC-MS, since in LC the deviation in retention time is not linear, which these applications do not take into account. In addition, only a few are web-based applications, which could improve on stand-alone software in terms of compatibility, sharing capabilities and hardware requirements, although sufficient bandwidth is required. Furthermore, none of these incorporate asynchronous communication to allow real-time interaction with pre-processed results. FINDINGS: Here, we present EasyLCMS (http://www.easylcms.es/), a new application for automated quantification, which was validated against manual operation using more than 1000 concentration comparisons in real samples. The results showed that only 1% of the quantifications presented a relative error higher than 15%. Using clustering analysis, the metabolites with the highest relative error distributions were identified and studied to solve recurrent mistakes. CONCLUSIONS: EasyLCMS is a new web application designed to quantify numerous metabolites simultaneously, integrating LC distortions and asynchronous web technology to present a visual interface with dynamic interaction, which allows checking and correction of LC-MS raw data pre-processing results. Moreover, quantified data obtained with EasyLCMS are fully compatible with numerous downstream applications, as well as with mathematical modelling in the systems biology field.
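The core of automated quantification, inverting a calibration curve fitted to standards, can be sketched as follows (an illustrative linear calibration with synthetic numbers; EasyLCMS itself additionally corrects nonlinear LC retention-time deviations, which is not shown):

```python
import numpy as np

# calibration standards: known concentrations (umol) vs. measured peak areas
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([105.0, 210.0, 525.0, 1050.0, 2100.0])  # synthetic, linear response

slope, intercept = np.polyfit(conc, area, 1)  # least-squares calibration line

def quantify(peak_area):
    """Invert the calibration line: peak area -> concentration in umol."""
    return (peak_area - intercept) / slope

assert abs(quantify(315.0) - 3.0) < 1e-6
```

In an automated pipeline this fit-and-invert step runs once per metabolite, which is what makes simultaneous quantification of many compounds practical.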

7.
The main problem facing three-dimensional protein structure superposition is that some amino acid residues of the target proteins are missing, while most multiple-structure superposition methods require complete amino acid sequences. The common workaround is to simply delete the missing residues, which leads to inaccurate superpositions. Because homologous proteins share structural similarity, a region missing from one protein structure may be present in another homologous structure. On this basis, this paper proposes a new, simple and effective method for protein structure superposition under missing data (ITEMDM). The method computes the superposition iteratively in the presence of missing data, using an optimized least-squares algorithm combined with matrix SVD decomposition to obtain the rotation matrices and translation vectors. The method successfully superposed proteins of the cytochrome c family and the proteins of the standard Fischer's database (67 protein pairs), and was compared with other methods. Numerical experiments show that the algorithm has the following advantages: (1) compared with the THESEUS algorithm, it runs faster and needs fewer iterations; (2) compared with the PSSM algorithm, it is more accurate and faster. The results show that the method superposes three-dimensional protein structures with missing data more effectively.
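The rigid-body step at the heart of such a method, a least-squares superposition via SVD restricted to the residues present in both structures, can be sketched as follows (an illustrative single fit on synthetic coordinates; the full iterative method additionally re-estimates the missing coordinates between fits):

```python
import numpy as np

def superpose(P, Q, mask):
    """Least-squares rigid superposition of Q onto P via SVD (Kabsch-style),
    using only the residues flagged present in both structures (mask).
    Returns rotation R and translation t such that Q @ R + t ~ P."""
    Pm, Qm = P[mask], Q[mask]
    Pc, Qc = Pm - Pm.mean(0), Qm - Qm.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = Pm.mean(0) - Qm.mean(0) @ R
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])     # rotated + translated copy
mask = np.array([True] * 6 + [False] * 2)         # pretend two residues are missing
R, t = superpose(P, Q, mask)
assert np.allclose(Q @ R + t, P, atol=1e-8)       # full structure recovered
```

Because the fit uses only the shared residues, the transform remains exact for the full structure when the data are truly rigid, which is why masking (rather than deleting) missing residues preserves accuracy.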

8.
9.
The relatively slow scanning speed of the galvanometer commonly used in a confocal laser scanning microscopy (CLSM) system can dramatically limit system performance in scanning speed and image quality if data collection is simply synchronized with the galvanometric scanning. Several algorithms for optimizing the performance of a galvanometric CLSM system are discussed in this work: hardware control techniques for image-distortion correction, such as pixel delay and interlaced line switching; increasing the signal-to-noise ratio with data binning; and enhancing imaging speed with region-of-interest imaging. Moreover, the pixel number can be effectively increased with Acquire-On-Fly scanning, which can be used for imaging a large field of view at high resolution.
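The data-binning idea is simple to illustrate (a hypothetical sketch, not the authors' implementation): summing k×k pixel blocks multiplies the coherent signal by k² while uncorrelated noise grows only by k, trading spatial resolution for signal-to-noise ratio.

```python
import numpy as np

def bin_pixels(image, k):
    """Sum k x k blocks of pixels (data binning).  Signal adds k^2-fold,
    uncorrelated noise only k-fold, so SNR improves k-fold at the cost of
    a k-fold loss in spatial resolution along each axis."""
    h, w = image.shape
    cropped = image[:h - h % k, :w - w % k]       # drop edge pixels that don't fit
    return cropped.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

img = np.ones((8, 8))
binned = bin_pixels(img, 2)
assert binned.shape == (4, 4)
assert binned[0, 0] == 4.0                        # each output pixel sums a 2x2 block
```

In an acquisition system the same operation can be done in hardware before readout, which also reduces the data rate the host must sustain.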

10.
Animal production systems convert plant protein into animal protein. Depending on animal species, ration and management, between 5% and 45% of the nitrogen (N) in plant protein is converted to and deposited in animal protein. The other 55%–95% is excreted via urine and feces, and can be used as a nutrient source for plant (often animal feed) production. The estimated global amount of N voided by animals ranges between 80 and 130 Tg N per year, as large as or larger than the global annual N fertilizer consumption. Cattle (60%), sheep (12%) and pigs (6%) have the largest shares in animal manure N production.

The conversion of plant N into animal N is on average more efficient in poultry and pork production than in dairy production, which in turn is more efficient than beef and sheep production. However, differences within a type of animal production system can be as large as differences between types, owing to the large effects of the genetic potential of the animals, the animal feed and management. The management of animals and animal feed, together with the genetic potential of the animals, are the key factors for a high efficiency of conversion of plant protein into animal protein.

The efficiency of the conversion of N from animal manure, following application to land, into plant protein ranges between 0 and 60%, while the estimated global mean is about 15%. The other 40%–100% is lost to the wider environment via NH3 volatilization, denitrification, leaching and run-off in pastures, or during storage and/or following application of the animal manure to land. On a global scale, only 40%–50% of the amount of N voided is collected in barns, stables and paddocks, and only half of this amount is recycled to crop land. The N losses from animal manure collected in barns, stables and paddocks depend on the animal manure management system. Relatively large losses occur in confined animal feeding operations, as these often lack the land base to utilize the N from animal manure effectively. Losses will be relatively low when all manure is collected rapidly in water-tight and covered basins, and when it is subsequently applied to the land in proper amounts, at the proper time and using the proper method (low-emission techniques).

There is opportunity for improving the N conversion in animal production systems by improving the genetic production potential of the herd, the composition of the animal feed, and the management of the animal manure. Coupling of crop and animal production systems, at least at a regional scale, is one way to achieve high N use efficiency in the whole system. Clustering of confined animal production systems with other intensive agricultural production systems, on the basis of concepts from industrial ecology with manure processing, is another possible way to improve N use efficiency.
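The global budget in the abstract implies a simple calculation (mid-range values assumed here for illustration): of the N voided by animals, only 40–50% is collected in barns, stables and paddocks, and only about half of that is recycled to crop land.

```python
def manure_n_recycled(n_voided_tg, collected_frac=0.45, recycled_frac=0.5):
    """Tg of excreted N that actually reaches crop land, using the abstract's
    global figures (mid-range values assumed for the default fractions)."""
    return n_voided_tg * collected_frac * recycled_frac

# of ~100 Tg N voided globally per year, only ~22.5 Tg returns to crop land
assert abs(manure_n_recycled(100.0) - 22.5) < 1e-9
```

Even before field-level losses, roughly three-quarters of excreted N never re-enters crop production, which is the leverage point the abstract's management recommendations target.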

11.
The UAH Logging, Trace Recording, and Analysis instrumentation (ULTRA) provides highly repeatable (0.0002% variation) application instruction counts for parallel programs which are invariant to the communication network used, the number of processors used, and the MPI communication library used. ULTRA, implemented as an MPI profiling wrapper, avoids the data-collection-system artifacts of time-based measurements by using instruction counts as the basic measure of work performed, and records the operation performed and the amount of data sent for each network operation. These measurements can be scaled appropriately for various target architectures. ULTRA's instrumentation overhead is minimized by using the Pentium II processor's performance monitoring hardware, allowing large, production-run applications to be quickly characterized. Traces of the NAS benchmarks representing 6.67 × 10¹² application instructions were generated by ULTRA. The application instructions executed per byte injected into the network and the instructions executed per message sent were computed from the traces. These values can be scaled by the expected processor performance to estimate the minimum network performance required to support the programs. Time-based measurements cannot be used for this purpose because of measurement artifacts caused by background processes and the communication network of the data collection system.

12.
Purpose

Spot-scanning proton beam therapy (PBT) can create good dose distributions for static targets. However, there is larger uncertainty for tumors that move with respiration, bowel gas or other internal circumstances within the patient. In the late 1990s we introduced a real-time tumor-tracking radiation therapy (RTRT) system that gates an X-ray linear accelerator to the motion of internal fiducial markers. Drawing on more than 10 years of clinical experience and extensive log data, we established a real-time image-gated proton beam therapy system dedicated to spot scanning.

Materials and methods

Using log data and clinical outcomes derived from clinical usage of the RTRT system since 1999, we established a library to be used for in-house simulation of tumor targeting and evaluation. Factors considered to be the dominant causes of interplay effects in the spot-scanning-dedicated proton therapy system are listed and discussed.

Results/conclusions

Total facility design, synchrotron operation cycle, and gating windows were identified as important factors in the interplay effects contributing to irradiation time and motion-induced dose error. The fiducial markers that we developed and used for RTRT in X-ray therapy were suggested to have the capacity to improve dose distribution. Accumulated internal motion data in the RTRT system enable us to improve the operation and function of a spot-scanning proton beam therapy (SSPT) system. A real-time image-gated SSPT system can increase accuracy in treating moving tumors. The system will start clinical service in early 2014.

13.
Modern automated microsystems based on microhydrodynamic (microfluidic) technologies, so-called labs-on-chips, make it possible to solve various basic and applied research problems. Over the last 15 years these approaches have been applied to the problems of modern quantitative (systems) developmental biology. In this field, high-throughput experiments aimed at accumulating ample quantitative data for subsequent computer analysis are important. In this review, the main directions in the development and application of microfluidic approaches to the problems of modern developmental biology are discussed, using the classical model object, the Drosophila embryo, as an example. Microfluidic systems provide an opportunity to perform experiments that can hardly be performed using other approaches. These systems allow automated, rapid, reliable and proper placement of many live embryos on a substrate for their simultaneous confocal scanning, sorting them, or injecting them with various agents. Such systems make it possible, in particular, to create controlled gradients of microenvironmental parameters along a series of developing embryos, or even to introduce a discontinuity in parameters within the microenvironment of a single embryo, so that the head half is under different conditions than the tail half (with continuous scanning). These approaches are used both in basic research on the functions of the gene ensembles that control early development, including the problem of the robustness of early patterns to disturbances, and in test systems for screening chemical agents on developing embryos. The problems of integrating microfluidic devices into systems for automated experiments on many developing embryos simultaneously, under continuous scanning with modern fluorescence microscopy instruments, are also discussed. The methods and approaches developed for Drosophila are also applicable to other model objects, even mammalian embryos.

14.
Predicting protein functional classes such as localization sites and modifications plays a crucial role in function annotation. Given the tremendous amount of sequence data yielded by high-throughput sequencing experiments, the need for efficient and interpretable prediction strategies has rapidly grown. Our previous approach for subcellular localization prediction, PSLDoc, achieves high overall accuracy for Gram-negative bacteria. However, PSLDoc is computationally intensive due to the incorporation of homology extension in feature extraction and probabilistic latent semantic analysis in feature reduction. In addition, prediction results generated by support vector machines are accurate but generally difficult to interpret. In this work, we incorporate three new techniques to improve efficiency and interpretability. First, homology extension is performed against a compact non-redundant database using a fast search model to reduce running time. Second, correspondence analysis (CA) is incorporated as an efficient feature reduction that generates a clear visual separation of different protein classes. Finally, functional classes are predicted by a combination of an accurate compact set (CS) relation and an interpretable one-nearest-neighbor (1-NN) algorithm. Besides localization data sets, we also apply a human protein kinase set to validate the generality of our proposed method. Experimental results demonstrate that our method makes accurate predictions in a more efficient and interpretable manner. First, homology extension using a fast search on a compact database accelerates the traditional approach by up to twenty-five times without sacrificing prediction performance. This suggests that the computational costs of many other predictors that also incorporate homology information can be largely reduced. In addition, CA can not only efficiently identify discriminative features but also provide a clear visualization of different functional classes. Moreover, predictions based on CS achieve 100% precision. When combined with 1-NN on targets left unpredicted by CS, our method attains slightly better or comparable performance compared with state-of-the-art systems.

15.
Progress in analytical ultracentrifugation (AUC) has been hindered by obstructions to hardware innovation and by software incompatibility. In this paper, we announce and outline the Open AUC Project. The goals of the Open AUC Project are to stimulate AUC innovation by improving instrumentation, detectors, acquisition and analysis software, and collaborative tools. These improvements are needed for the next generation of AUC-based research. The Open AUC Project combines ongoing work from several different groups. A new base instrument is described, one that is designed from the ground up to be an analytical ultracentrifuge. This machine offers an open architecture, hardware standards, and application programming interfaces for detector developers. All software will use the GNU General Public License to assure that intellectual property is available in open source format. The Open AUC strategy facilitates collaborations, encourages sharing, and eliminates the chronic impediments that have plagued AUC innovation for the last 20 years. This ultracentrifuge will be equipped with multiple and interchangeable optical tracks so that state-of-the-art electronics and improved detectors will be available for a variety of optical systems. The instrument will be complemented by a new rotor, enhanced data acquisition and analysis software, as well as collaboration software. Described here are the instrument, the modular software components, and a standardized database that will encourage and ease integration of data analysis and interpretation software.

16.

Background

Protein denaturation is often studied using differential scanning calorimetry (DSC). However, conventional instruments are limited in the available temperature scanning rate. Fast scanning calorimetry (FSC) provides the ability to study processes at much higher rates while using extremely small sample masses (nanograms). This makes it a very interesting technique for protein investigation.

Methods

A combination of conventional DSC and fast scanning calorimeters was used to study denaturation of lysozyme dissolved in glycerol. Glycerol was chosen as a solvent to prevent evaporation from the micro-sized samples of the fast scanning calorimeter.

Results

The lysozyme denaturation temperatures in the range of scanning rates from 5 K/min to ca. 500,000 K/min follow the Arrhenius law. The experimental results for FSC and conventional DSC fall into two distinct clusters in a Kissinger plot, which are well approximated by two parallel straight lines.

Conclusions

The transition temperatures for the unfolding process measured on the fast scanning calorimetry sensor are significantly lower than would be expected from the results of conventional DSC extrapolated to high scanning rates. Evidence for an influence of the relative surface area on the unfolding temperature was found.

General significance

For the first time, fast scanning calorimetry was employed to study protein denaturation over a range of temperature scanning rates spanning 5 orders of magnitude. The decreased thermal stability of the micro-sized samples on the fast scanning calorimeter raises caution over using bulk-solution thermal stability data of proteins for applications where micro-sized dispersed protein solutions are used, e.g., spray drying.
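The Kissinger analysis behind the parallel straight lines can be sketched as follows (a hypothetical illustration with synthetic data; the linearization plots ln(β/Tp²) against 1/Tp, whose slope is −Ea/R):

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def kissinger_ea(rates, peak_temps_K):
    """Kissinger analysis: fit ln(beta / Tp^2) vs 1/Tp; the slope is -Ea/R,
    so the apparent activation energy is recovered from the line."""
    x = 1.0 / np.asarray(peak_temps_K)
    y = np.log(np.asarray(rates) / np.asarray(peak_temps_K) ** 2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R_GAS

# synthetic peak temperatures generated with a known Ea of 300 kJ/mol
Ea_true = 300e3
Tp = np.array([340.0, 350.0, 360.0, 370.0])
beta = Tp ** 2 * np.exp(-Ea_true / (R_GAS * Tp))  # rates consistent with Ea_true

assert abs(kissinger_ea(beta, Tp) - Ea_true) < 10.0
```

Two data sets with the same activation energy but different prefactors produce exactly the parallel straight lines the abstract describes: equal slopes, shifted intercepts.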

17.

Background

Proteins are important molecules that perform a wide range of functions in biological systems. Protein folding has attracted much attention recently, since a protein's function can generally be derived from its molecular structure. The GOR algorithm is one of the most successful computational methods and has been widely used as an efficient analysis tool to predict secondary structure from protein sequence. However, its execution time is becoming intolerable with the steep growth of protein databases. Recently, FPGA chips have emerged as a promising application accelerator for bioinformatics algorithms by exploiting fine-grained custom design.

Results

In this paper, we propose a complete fine-grained parallel hardware implementation on FPGA to accelerate the GOR-IV package for 2D protein structure prediction. To improve computing efficiency, we partition the parameter table into small segments and access them in parallel. We aggressively exploit data reuse schemes to minimize the need for loading data from external memory. The whole computation structure is carefully pipelined to overlap the sequence loading, computing and back-writing operations as much as possible. We implemented a complete GOR desktop system based on an FPGA chip XC5VLX330.

Conclusions

The experimental results show a speedup factor of more than 430x over the original GOR-IV version, and a 110x speedup over an optimized multi-threaded SIMD implementation running on a PC platform with an AMD Phenom 9650 quad-core CPU, for 2D protein structure prediction. Moreover, the power consumption is only about 30% of that of current general-purpose CPUs.

18.
We have devised a method of temperature scanning with a vibrating-U-tube density meter in which temperature fluctuations are much reduced compared to those using a constant or programmable thermostat. The standard error of a density measurement is 5 × 10⁻⁷ g/ml. Volume changes associated with conformational changes of macromolecular systems can be precisely measured. Using this instrument the volume expansion-melting curves of lipid dispersions have been obtained. The curves are similar in shape and resolution to the excess heat-capacity curves derived from differential scanning calorimetry performed on the same sample. Temperature scanning allows measurements of expansivity as well as apparent volume throughout a temperature range of interest.

19.
Differential scanning fluorimetry (DSF) is a rapid and inexpensive screening method to identify low-molecular-weight ligands that bind and stabilize purified proteins. The temperature at which a protein unfolds is measured by an increase in the fluorescence of a dye with affinity for hydrophobic parts of the protein, which are exposed as the protein unfolds. A simple fitting procedure allows quick calculation of the transition midpoint; the difference in the temperature of this midpoint in the presence and absence of ligand is related to the binding affinity of the small molecule, which can be a low-molecular-weight compound, a peptide or a nucleic acid. DSF is best performed using a conventional real-time PCR instrument. Ligand solutions from a storage plate are added to a solution of protein and dye, distributed into the wells of the PCR plate, and fluorescence intensity is measured as the temperature is raised gradually. Results can be obtained in a single day.
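A minimal stand-in for the fitting step can be sketched as follows (a hypothetical illustration that takes the maximum of the first derivative rather than performing the protocol's sigmoid fit; all data are synthetic):

```python
import numpy as np

def melting_midpoint(temps, fluorescence):
    """Estimate the transition midpoint Tm as the temperature where the
    fluorescence rises fastest (maximum of the first derivative)."""
    dF = np.gradient(fluorescence, temps)
    return temps[np.argmax(dF)]

temps = np.linspace(25.0, 95.0, 141)              # 0.5 C steps
Tm_true = 62.0
melt = 1.0 / (1.0 + np.exp(-(temps - Tm_true) / 1.5))   # idealized melt curve
tm = melting_midpoint(temps, melt)
assert abs(tm - Tm_true) <= 0.5

# ligand stabilization shows up as a positive shift of the midpoint
shifted = 1.0 / (1.0 + np.exp(-(temps - (Tm_true + 4.0)) / 1.5))
assert melting_midpoint(temps, shifted) - tm >= 3.5
```

The derivative-maximum estimate is what many real-time PCR instruments report by default; a full sigmoid fit refines it and also yields the transition steepness.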

20.
Semi-supervised protein classification using cluster kernels
MOTIVATION: Building an accurate protein classification system depends critically upon choosing a good representation of the input sequences of amino acids. Recent work using string kernels for protein data has achieved state-of-the-art classification performance. However, such representations are based only on labeled data (examples with known 3D structures, organized into structural classes), whereas in practice, unlabeled data are far more plentiful. RESULTS: In this work, we develop simple and scalable cluster kernel techniques for incorporating unlabeled data into the representation of protein sequences. We show that our methods greatly improve the classification performance of string kernels and outperform standard approaches for using unlabeled data, such as adding close homologs of the positive examples to the training data. We achieve equal or superior performance to previously presented cluster kernel methods, while at the same time achieving far greater computational efficiency. AVAILABILITY: Source code is available at www.kyb.tuebingen.mpg.de/bs/people/weston/semiprot. The Spider MATLAB package is available at www.kyb.tuebingen.mpg.de/bs/people/spider. SUPPLEMENTARY INFORMATION: www.kyb.tuebingen.mpg.de/bs/people/weston/semiprot.
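A toy variant of the cluster-kernel idea can be sketched as follows (an illustrative additive kernel; the paper's neighborhood and bagged kernels are more refined): clustering labeled and unlabeled examples together, then boosting within-cluster similarity on top of a base kernel, lets unlabeled data reshape the representation.

```python
import numpy as np

def cluster_kernel(K, labels, lam=1.0):
    """Augment a base kernel matrix K with cluster membership of (possibly
    unlabeled) examples: K'(i, j) = K(i, j) + lam * [cluster_i == cluster_j].
    The indicator matrix is positive semidefinite, so K' remains a valid
    kernel.  A toy variant of the idea only, not the paper's construction."""
    labels = np.asarray(labels)
    same = (labels[:, None] == labels[None, :]).astype(float)
    return K + lam * same

K = np.eye(3)                                     # trivial base kernel
Kc = cluster_kernel(K, labels=[0, 0, 1], lam=0.5)
assert Kc[0, 1] == 0.5                            # same cluster: similarity boosted
assert Kc[0, 2] == 0.0                            # different clusters: unchanged
```

Because the cluster assignment can be computed on all sequences, labeled or not, the unlabeled data influence every pairwise similarity an SVM later sees.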
