Similar Literature
20 similar articles retrieved.
1.
2.
Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and in the timing of reaction events between genetically identical cells. Mathematical models are indispensable for the study of phenotypic stochasticity in cellular decision-making and cell survival. There is a demand for versatile stochastic modeling environments with extensive, preprogrammed statistics functions and plotting capabilities that hide the mathematics from novice users and offer low-level programming access to experienced users. Here we present StochPy (Stochastic modeling in Python), a flexible software tool for stochastic simulation in cell biology. It provides various stochastic simulation algorithms, SBML support, analyses of the probability distributions of molecule copy numbers and event waiting times, analyses of stochastic time series, and a range of additional statistical functions and plotting facilities for stochastic simulations. We illustrate the functionality of StochPy with stochastic models of gene expression, cell division, and single-molecule enzyme kinetics. StochPy has been successfully tested against the SBML stochastic test suite, passing all tests. StochPy is a comprehensive software package for stochastic simulation of the molecular control networks of living cells. It allows novice and experienced users to study stochastic phenomena in cell biology, and its integration with other Python software makes StochPy both user-friendly and easily extensible.
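StochPy's own API is not reproduced in this summary; as a generic illustration of the class of algorithms such a tool provides, a minimal sketch of Gillespie's direct-method stochastic simulation algorithm (SSA) for a hypothetical immigration-death model is shown below. The rate constants and model are illustrative only.

```python
# Minimal Gillespie direct-method SSA for an immigration-death model
# (hypothetical rates; illustrates the class of algorithms StochPy provides,
# not StochPy's actual API).
import numpy as np

def gillespie_immigration_death(k_in=10.0, k_out=0.2, x0=0, t_end=50.0, seed=1):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1, a2 = k_in, k_out * x                  # propensities: birth, death
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)            # waiting time to next reaction
        x += 1 if rng.random() < a1 / a0 else -1  # choose which reaction fires
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

t, x = gillespie_immigration_death()
print("final copy number:", x[-1], "mean copy number:", round(x.mean(), 1))
```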

3.
Recent developments have led to an enormous increase in publicly available large genomic data sets, including complete genomes. The 1000 Genomes Project was a major contributor, releasing the results of sequencing a large number of individual genomes and enabling a myriad of large-scale studies of human genetic variation. However, the tools currently available are insufficient for some analyses of data sets encompassing more than hundreds of base pairs, and for analyses of haplotype sequences of single nucleotide polymorphisms (SNPs). Here, we present a new and powerful tool for dealing with large data sets that computes a variety of population genetic summary statistics and increases the speed of data analysis.
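The abstract does not specify the tool's interface; as a sketch of the kind of summary statistic such software computes from SNP haplotype data, nucleotide diversity (pi) over a 0/1-encoded haplotype matrix can be calculated as follows. The toy matrix and sequence length are hypothetical.

```python
# Nucleotide diversity (pi): average number of pairwise differences per site,
# computed from a haplotype matrix of 0/1 SNP alleles (rows = haplotypes,
# columns = SNP sites). Hypothetical toy data; not the tool's own interface.
import numpy as np

def nucleotide_diversity(haplotypes: np.ndarray, seq_length: int) -> float:
    n = haplotypes.shape[0]
    derived = haplotypes.sum(axis=0)                 # per-site derived-allele counts
    pairwise_diffs = (derived * (n - derived)).sum() # differences summed over all pairs
    n_pairs = n * (n - 1) / 2
    return pairwise_diffs / n_pairs / seq_length

haps = np.array([[0, 1, 0, 1],
                 [0, 1, 1, 0],
                 [1, 0, 1, 0]])
print(nucleotide_diversity(haps, seq_length=1000))   # ~0.00267
```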

4.
5.
The nematode Caenorhabditis elegans has been employed as a model organism to study human obesity because the pathways that regulate energy metabolism are conserved. To assay fat storage in C. elegans, a number of fat-soluble dyes have been employed, including BODIPY, Nile Red, Oil Red O, and Sudan Black. However, dye-labeled assays often produce results that do not correlate with fat stores in C. elegans. An alternative, label-free approach to analyzing fat storage in C. elegans has recently been described using coherent anti-Stokes Raman scattering (CARS) microscopy. Here, we compare the performance of CARS microscopy with standard dye-labeled techniques and biochemical quantification for analyzing fat storage in wild-type C. elegans and in mutants of the insulin/IGF-1 signaling pathway, including the genes daf-2 (insulin/IGF-1 receptor), rict-1 (rictor), and sgk-1 (serum glucocorticoid kinase). CARS imaging provides a direct measure of fat storage with unprecedented detail, including total fat stores as well as the size, number, and lipid-chain unsaturation of individual lipid droplets. In addition, CARS/TPEF imaging reveals a neutral lipid species that resides in both the hypodermis and the intestinal cells, and an autofluorescent organelle that resides exclusively in the intestinal cells. Importantly, coherent addition of the CARS fields from the C-H-abundant neutral lipid permits selective CARS imaging of the fat stores, and further coupling with spontaneous Raman analysis yields detailed information such as the lipid-chain unsaturation of individual lipid droplets. We observe that although the daf-2, rict-1, and sgk-1 mutants all affect insulin/IGF-1 signaling, they exhibit vastly different phenotypes in terms of neutral lipid and autofluorescent species. We find that CARS imaging gives quantification similar to standard biochemical triglyceride quantification. Further, we independently confirm that feeding worms with vital dyes does not lead to staining of the fat stores but rather to sequestration of the dyes in lysosome-related organelles. In contrast, fixative staining methods provide reproducible data but are prone to errors due to interference from autofluorescent species and non-specific staining of cellular structures other than fat stores. Importantly, both growth conditions and developmental stage should be considered when comparing methods for assessing C. elegans lipid storage. Taken together, we confirm that CARS microscopy provides a direct, non-invasive, and label-free means to quantitatively analyze fat storage in living C. elegans.

6.
Ochratoxin A (OTA) is one of the predominant contaminating mycotoxins in a wide variety of food commodities. To reduce the risk of OTA consumption, the detection and quantitation of OTA levels are of great significance. Based on the ability of a ssDNA aptamer to form a double-stranded structure with its complementary sequence, a simple and rapid aptamer-based, label-free approach for highly sensitive and selective fluorescence detection of OTA was developed using PicoGreen, an ultra-sensitive double-stranded-DNA-specific dye. The results showed that as little as 1 ng/mL of OTA could be detected, with a dynamic range of more than 5 orders of magnitude, which satisfies the maximum residue limits for OTA in various foods regulated by the European Commission. Owing to the specificity of the aptamer, the assay exhibited high selectivity for OTA against two other analogues (N-acetyl-L-phenylalanine and zearalenone). We also tested the practicability of the aptasensor using a real sample of 1% beer spiked with a series of OTA concentrations, and the results showed good tolerance to matrix effects. All detections could be completed in less than 30 min, providing a simple, quick, and sensitive detection method for OTA screening in food safety that could easily be extended to the detection of other small-molecule compounds for which aptamers have been selected.
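As a hedged sketch of how a detection limit and linear dynamic range are typically estimated for a signal-off fluorescence assay of this kind (the calibration values below are illustrative, not data from this study), one can fit the response against log-concentration and locate the concentration at which the signal falls three standard deviations below the blank:

```python
# Hypothetical calibration of a "signal-off" fluorescence aptamer assay:
# fit normalized fluorescence against log10(OTA concentration) and estimate the
# detection limit as the concentration where the fitted signal drops 3 SD below
# the blank. Numbers are illustrative only.
import numpy as np

conc_ng_ml = np.array([1, 10, 100, 1e3, 1e4, 1e5])            # OTA standards, ng/mL
signal     = np.array([0.95, 0.82, 0.66, 0.51, 0.37, 0.22])   # normalized fluorescence
blank_signal, blank_sd = 0.98, 0.01                           # blank mean and SD

slope, intercept = np.polyfit(np.log10(conc_ng_ml), signal, 1)
lod = 10 ** ((blank_signal - 3 * blank_sd - intercept) / slope)
print(f"slope per decade: {slope:.3f}; estimated LOD ~ {lod:.1f} ng/mL")
```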

7.
8.

Background and Aims

Early detection of fibrosis is important in identifying individuals at risk for advanced liver disease in non-alcoholic fatty liver disease (NAFLD). We tested whether second-harmonic generation (SHG) and coherent anti-Stokes Raman scattering (CARS) microscopy, detecting fibrillar collagen and fat in a label-free manner, might allow automated and sensitive quantification of early fibrosis in NAFLD.

Methods

We analyzed 32 surgical biopsies from patients covering histological fibrosis stages 0–4, using multimodal label-free microscopy. Native samples were visualized by SHG and CARS imaging for detecting fibrillar collagen and fat. Furthermore, we developed a method for quantitative assessment of early fibrosis using automated analysis of SHG signals.
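The exact image-analysis pipeline is not described in this summary; a minimal sketch of the kind of automated SHG quantification referred to above (threshold the SHG channel, then report the collagen-positive area fraction and the mean signal intensity) might look like the following, assuming the image is available as a 2-D NumPy array. The threshold and test image are hypothetical.

```python
# Generic automated quantification of an SHG image: threshold the fibrillar
# collagen signal and report area fraction and mean intensities. This is a
# sketch, not the authors' published pipeline; 'shg' is assumed to be a
# 2-D NumPy array of pixel intensities.
import numpy as np

def quantify_shg(shg: np.ndarray, threshold: float):
    mask = shg > threshold                      # pixels classified as collagen
    area_fraction = mask.mean()                 # collagen-positive area / total area
    mean_intensity = shg.mean()                 # mean SHG signal over the field
    mean_fiber_intensity = shg[mask].mean() if mask.any() else 0.0
    return area_fraction, mean_intensity, mean_fiber_intensity

shg = np.random.default_rng(0).gamma(shape=1.5, scale=20.0, size=(512, 512))
print(quantify_shg(shg, threshold=60.0))
```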

Results

We found that the mean SHG signal intensity correlated well with fibrosis stage and the mean CARS signal intensity with liver fat. Little overlap in SHG signal intensities was observed between fibrosis stages 0 and 1. A specific fibrillar SHG signal was detected in the liver parenchyma outside portal areas in all samples histologically classified as having no fibrosis. This signal correlated with the immunohistochemical localization of fibrillar collagens I and III.

Conclusions

This study demonstrates that label-free SHG imaging detects fibrillar collagen deposition in NAFLD more sensitively than routine histological staging and enables observer-independent quantification of early fibrosis in NAFLD with continuous grading.

9.
The quantification of bolus-tracking MRI techniques remains challenging. The acquisition usually relies on one contrast, and the analysis on a simplified model of the various phenomena that arise within a voxel, leading to inaccurate perfusion estimates. To evaluate how simplifications in the interstitial model impact perfusion estimates, we propose a numerical tool to simulate the MR signal provided by a dynamic contrast enhanced (DCE) MRI experiment. Our model encompasses the intrinsic T1 and T2 relaxations, the magnetic field perturbations induced by susceptibility interfaces (vessels and cells), the diffusion of water protons, the blood flow, the permeability of the vessel wall to the contrast agent (CA), and the constrained diffusion of the CA within the voxel. The blood compartment is modeled as a uniform compartment. The different blocks of the simulation are validated and compared to classical models. The impact of the CA diffusivity on the permeability and blood volume estimates is evaluated. Simulations demonstrate that the CA diffusivity only slightly impacts the permeability estimates for classical blood flow and CA diffusion values. The effect of long echo times is investigated. Simulations show that DCE-MRI performed with a long echo time may already lead to significant underestimation of the blood volume (up to 30% lower for brain tumor permeability values). The potential and versatility of the proposed implementation are evaluated by running the simulation with realistic vascular geometry obtained from two-photon microscopy and with impermeable cells in the extravascular environment. In conclusion, the proposed simulation tool describes DCE-MRI experiments and may be used to evaluate and optimize acquisition and processing strategies.
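For context, a widely used simplified pharmacokinetic (interstitial) model of the kind such simulations are designed to test (not necessarily the exact model evaluated in this work) is the extended Tofts model, which relates the tissue CA concentration to the arterial plasma concentration:

```latex
% Extended Tofts model (standard simplified DCE-MRI pharmacokinetic model,
% shown for context only; not necessarily the model assessed in this study).
C_t(t) = v_p\,C_p(t) + K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\, e^{-k_{ep}(t-\tau)}\,\mathrm{d}\tau,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}
```

Here C_t is the tissue CA concentration, C_p the plasma concentration, K^trans the transfer constant related to vessel-wall permeability, v_p the plasma volume fraction, and v_e the extravascular extracellular volume fraction.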

10.
Metabolomics: A Primer

11.
We report herein the G-quadruplex-selective property of a luminescent cyclometallated iridium(III) complex for the detection of adenosine-5′-triphosphate (ATP) in aqueous solution. The ATP-binding aptamer was employed as the ATP recognition unit, while the iridium(III) complex was used to monitor the formation of the G-quadruplex structure induced by ATP. The sensitivity and fold enhancement of the assay were higher than those of the previously reported assay using the organic dye crystal violet as a fluorescent probe. This label-free luminescent switch-on assay exhibits high sensitivity and selectivity towards ATP with a limit of detection of 2.5 µM.

12.
In this study, we present a fully automated tool, called IDEAL-Q, for label-free quantitation analysis. It accepts raw data in the standard mzXML format as well as search results from major search engines, including Mascot, SEQUEST, and X!Tandem, as input. To quantify as many identified peptides as possible, IDEAL-Q uses an efficient algorithm to predict the elution time of a peptide that is unidentified in a specific LC-MS/MS run but identified in other runs. The predicted elution time is then used to detect peak clusters of the assigned peptide. Detected peptide peaks are processed by statistical and computational methods and further validated against signal-to-noise ratio, charge state, and isotopic distribution criteria (SCI validation) to filter out noisy data. The performance of IDEAL-Q has been evaluated in several experiments. First, a serially diluted protein mixed with an Escherichia coli lysate showed a high correlation with expected ratios and demonstrated good linearity (R2 = 0.996). Second, in a biological replicate experiment on the THP-1 cell lysate, IDEAL-Q quantified 87% (1,672 peptides) of all identified peptides, surpassing the 45.7% (909 peptides) achieved by the conventional identity-based approach, which only quantifies peptides identified in all LC-MS/MS runs. Manual validation of all 11,940 peptide ions in six replicate LC-MS/MS runs revealed that 97.8% of the peptide ions were correctly aligned and 93.3% were correctly validated by SCI. The mean protein ratio of 1.00 ± 0.05 thus demonstrates the high accuracy of IDEAL-Q without human intervention. Finally, IDEAL-Q was applied again to the biological replicate experiment, but with an additional SDS-PAGE step, to show its compatibility with label-free experiments that include fractionation. For flexible workflow design, IDEAL-Q supports different fractionation strategies and various normalization schemes, including multiple spiked internal standards. User-friendly interfaces are provided to facilitate convenient inspection, validation, and modification of quantitation results. In summary, IDEAL-Q is an efficient, user-friendly, and robust quantitation tool, and it is available for download.

Quantitative analysis of protein expression promises to provide a fundamental understanding of biological changes and to support biomarker discovery in clinical applications. In recent years, various stable isotope labeling techniques, e.g. ICAT (1), enzymatic labeling using 18O/16O (2, 3), stable isotope labeling by amino acids in cell culture (4), and isobaric tagging for relative and absolute quantitation (2, 5), coupled with LC-MS/MS, have been widely used for large-scale quantitative proteomics. However, several factors, such as the limited number of samples, the complexity of the procedures in isotopic labeling experiments, and the high cost of reagents, limit the applicability of isotopic labeling techniques to high-throughput analysis. Unlike the labeling approaches, the label-free quantitation approach quantifies protein expression across multiple LC-MS/MS analyses directly, without using any labeling technique (7–9). Thus, it is particularly useful for analyzing clinical specimens in highly multiplexed quantitation (10, 11); theoretically, it can be used to compare any number of samples. Despite these significant advantages, data analysis in label-free experiments is an intractable problem because of the experimental procedures.
First, although high reproducibility in LC is considered a critical prerequisite, variations including the aging of separation columns, changes in sample buffers, and fluctuations in temperature will cause chromatographic shifts in the retention times of analytes in different LC-MS/MS runs and thus complicate the analysis. In addition, under the label-free approach, many technical replicate analyses across a large number of samples are often acquired; however, comparing a large number of data files further complicates data analysis and yields lower quantitation accuracy than that obtained by labeling methods. Hence, an accurate, automated computation tool is required to effectively solve the problem of chromatographic shift, analyze a large amount of experimental data, and provide convenient user interfaces for manual validation of quantitation results.

The rapid emergence of new label-free techniques for biomarker discovery has inspired the development of a number of bioinformatics tools in recent years. For example, Scaffold (Proteome Software) and Census (12) process PepXML search results to quantify relative protein expression based on spectral counting (13–15), which uses the number of MS/MS spectra assigned to a protein to determine the relative protein amount. Spectral counting has demonstrated a high correlation with protein abundance; however, to achieve good quantitation accuracy with this technique, high-speed MS/MS data acquisition is required. Moreover, manipulations of the exclusion/inclusion strategy also significantly affect the accuracy of spectral counting. Because peptide-level quantitation is also important for post-translational modification studies, the accuracy of spectral counting at the peptide level deserves further study.

Another type of quantitation analysis determines peptide abundance from MS1 peak signals. According to some studies, MS1 peak signals across different LC-MS/MS runs can be highly reproducible and correlate well with protein abundance in complex biological samples (7–9). Quantitation analysis methods based on MS1 peak signals can be classified into three categories: identity-based, pattern-based, and hybrid-based methods (16). Identity-based methods (7–9) depend on the results of MS/MS sequencing to identify and detect peptide signals in MS1 data. However, because the data acquisition speed of MS scanning is insufficient, a considerable number of low-abundance peptides may not be selected for the limited MS/MS sequencing. Only a few peptides can be repeatedly identified in all LC-MS/MS runs and subsequently quantified; thus, only a small fraction of identified peptides are quantified, resulting in a small number of quantifiable peptides/proteins.

In contrast to identity-based methods, pattern-based methods (17–23), including the publicly available MSight (20), MZmine (21, 22), and msInspect (23), tend to quantify all peptide peaks in MS1 data to increase the number of quantifiable peptides. These methods first detect all peaks in each MS1 data set and then align the detected peaks across different LC-MS/MS runs. However, in pattern-based methods, efficient detection and alignment of the peaks between each pair of LC-MS/MS runs are a major challenge. To align the peaks, several methods based on dynamic programming or image pattern recognition have been proposed (24–26).
The algorithms applied in these methods require intensive computation, and their computation time increases dramatically as the number of compared samples increases, because all the LC-MS/MS runs must be processed. Therefore, pattern-based approaches are infeasible for processing a large number of samples. Furthermore, pattern recognition algorithms may fail on data containing noise or overlapping peptide signals (i.e. co-eluting peptides). The hybrid-based quantitation approach (16, 27–30) combines a pattern recognition algorithm with peptide identification results to align shifted peptides for quantitation. The pioneering accurate mass and time tag strategy (27) takes advantage of very sensitive, highly accurate mass measurement instruments with a wide dynamic range, e.g. FTICR-MS and TOF-MS, for quantitation analysis. PEPPeR (16) and SuperHirn (28) apply pattern recognition algorithms to align peaks and use the peptide identification results as landmarks to improve the alignment. However, because these methods still align all peaks in MS1 data, they suffer from the same computation-time problem as pattern-based methods.

To resolve the computation-intensive problem of the hybrid approach, we present a fully automated software system, called IDEAL-Q, for label-free quantitation, including differential protein expression and protein modification analysis. Instead of using computation-intensive pattern recognition methods, IDEAL-Q uses a computation-efficient fragmental regression method for identity-based alignment of all confidently identified peptides in a local elution-time domain. It then performs peptide cross-assignment by mapping predicted elution-time profiles across multiple LC-MS experiments. To improve the quantitation accuracy, IDEAL-Q applies three validation criteria to the detected peptide peak clusters to filter out noisy signals, false peptide peak clusters, and co-eluting peaks. Because of these key features, i.e. fragmental regression and stringent validation, IDEAL-Q can substantially increase the number of quantifiable proteins as well as the quantitation accuracy compared with other extracted ion chromatogram (XIC)-based tools. Notably, to accommodate different designs, IDEAL-Q supports various built-in normalization procedures, including normalization based on multiple internal standards, to eliminate systematic biases. It also adapts to different fractionation strategies for in-depth proteomics profiling.

We evaluated the performance of IDEAL-Q on three levels: 1) quantitation of a standard protein mixture, 2) large-scale proteome quantitation using replicate cell lysates, and 3) proteome-scale quantitative analysis of protein expression incorporating an additional fractionation step. We demonstrated that IDEAL-Q can quantify up to 89% of identified proteins (703 proteins) in the replicate THP-1 cell lysate. Moreover, by manual validation of the entire set of 11,940 peptide ions corresponding to 1,990 identified peptides, 93% of peptide ions were accurately quantified. In another experiment, on replicate data containing huge chromatographic shifts obtained from two independent LC-MS/MS instruments, IDEAL-Q demonstrated robust quantitation and the ability to rectify such shifts. Finally, we applied IDEAL-Q to the THP-1 replicate experiment with an additional SDS-PAGE fractionation step.
Equipped with user-friendly visualization interfaces and convenient data output for publication, IDEAL-Q represents a generic, robust, and comprehensive tool for label-free quantitative proteomics.
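IDEAL-Q's actual implementation is not given in this summary; as a rough illustration of identity-based, piecewise ("fragmental") regression for retention-time alignment, the sketch below fits local linear models between the retention times of peptides confidently identified in two runs and uses them to predict the elution time of a peptide missing from one run. The anchor data, segment count, and function names are hypothetical.

```python
# Piecewise ("fragmental") linear regression for retention-time alignment
# between two LC-MS/MS runs, using peptides identified in both runs as anchors.
# Illustrative only; not IDEAL-Q's actual implementation.
import numpy as np

def fragmental_regression(rt_ref, rt_target, n_segments=4):
    """Fit one linear model per local retention-time segment; return a predictor."""
    rt_ref, rt_target = np.asarray(rt_ref, float), np.asarray(rt_target, float)
    edges = np.quantile(rt_ref, np.linspace(0, 1, n_segments + 1))
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (rt_ref >= lo) & (rt_ref <= hi)
        slope, intercept = np.polyfit(rt_ref[sel], rt_target[sel], 1)
        models.append((lo, hi, slope, intercept))

    def predict(rt):
        for lo, hi, slope, intercept in models:
            if lo <= rt <= hi:
                return slope * rt + intercept
        # Outside the anchored range: fall back to the nearest segment.
        lo, hi, slope, intercept = models[0] if rt < models[0][0] else models[-1]
        return slope * rt + intercept
    return predict

# Anchor peptides identified in both runs (minutes), with a mild non-linear shift.
rt_run_a = np.linspace(10, 90, 40)
rt_run_b = rt_run_a * 1.02 + 0.5 + 0.3 * np.sin(rt_run_a / 10.0)
predict = fragmental_regression(rt_run_a, rt_run_b)
print("predicted RT in run B for a 47.3 min peptide in run A:", round(predict(47.3), 2))
```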

13.
14.
Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances, including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical foundation and exceptional flexibility. However, the limited availability of user-friendly programs that perform ABC analysis makes it difficult to implement, and hence programming skills are frequently required. In addition, few programs are able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although it provides specific features that improve inference from data sets with heterochronous data, BaySICS also has several capabilities that make it a suitable tool for analysing contemporary genetic data sets. These capabilities include the joint analysis of independent tables, a graphical interface, and the implementation of Markov chain Monte Carlo without likelihoods.
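BaySICS itself is a GUI-driven program built on coalescent simulations; as a schematic of the ABC rejection principle it relies on (draw parameters from the prior, simulate data, and keep the draws whose summary statistics fall close to the observed value), a toy example estimating a Poisson rate is shown below. All names and numbers are illustrative.

```python
# Toy ABC rejection sampler: estimate the rate of a Poisson process from an
# observed mean, by simulating data under prior draws and keeping parameters
# whose simulated summary statistic is close to the observed one.
# Purely illustrative; BaySICS uses coalescent simulations of DNA sequences.
import numpy as np

rng = np.random.default_rng(42)
observed = rng.poisson(lam=4.0, size=50)       # pretend these are the observed data
obs_stat = observed.mean()                     # summary statistic

n_draws, tolerance = 100_000, 0.1
prior_draws = rng.uniform(0.0, 10.0, size=n_draws)                 # flat prior on the rate
sim_stats = rng.poisson(lam=prior_draws, size=(50, n_draws)).mean(axis=0)
accepted = prior_draws[np.abs(sim_stats - obs_stat) < tolerance]   # rejection step

print(f"accepted {accepted.size} draws; approximate posterior mean = {accepted.mean():.2f}")
```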

15.
Despite their strategic potential, tool management issues in flexible manufacturing systems (FMSs) have received little attention in the literature. Nonavailability of tools in FMSs cuts at the very root of the strategic goals for which such systems are designed. Specifically, the capability of FMSs to economically produce customized products (flexibility of scope) in varying batch sizes (flexibility of volume) and to deliver them on an accelerated schedule (market response time) is seriously hampered when required tools are not available at the time needed. On the other hand, excess inventory of tools in such systems represents a significant cost due to the expensive nature of FMS tooling. This article constructs a dynamic tool requirement planning (DTRP) model for an FMS tool planning operation that allows dynamic determination of the optimal tool replenishments at the beginning of each arbitrary, managerially convenient, discrete time period. The analysis presented in the article consists of two distinct phases: in the first phase, tool demand distributions are obtained using information from manufacturing production plans (such as the master production schedule (MPS) and material requirement plans (MRP)) and general tool-life distributions fitted to actual time-to-failure data. Significant computational reductions are obtained if the tool failure data follow a Weibull or Gamma distribution. In the second phase, results from classical dynamic inventory models are modified to obtain optimal tool replenishment policies that permit compliance with such FMS-specific constraints as limited tool storage capacity and part/tool service levels. An implementation plan is included.
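The paper's full DTRP formulation is not reproduced in this abstract; as a small sketch of the first phase it describes (fitting a tool-life distribution to time-to-failure data and deriving a per-period tool demand), one might fit a Weibull model and approximate expected tool consumption from the planned machining load, as below. The failure data and planned hours are hypothetical.

```python
# Phase-1 sketch: fit a Weibull tool-life distribution to time-to-failure data
# and estimate the expected number of tools consumed in a period, given the
# planned machining hours from the production schedule (MPS/MRP). Hypothetical
# numbers; the paper's full DTRP optimization is not reproduced here.
import numpy as np
from math import gamma
from scipy import stats

failures_h = np.array([3.2, 4.1, 2.8, 5.0, 3.7, 4.4, 3.9, 2.5, 4.8, 3.3])  # tool lives (hours)
shape, loc, scale = stats.weibull_min.fit(failures_h, floc=0)               # fix location at 0

mean_life = scale * gamma(1 + 1 / shape)        # mean of the fitted Weibull
planned_hours = 120.0                           # machining load this period
expected_tools = planned_hours / mean_life      # renewal-theory approximation

print(f"Weibull shape={shape:.2f}, scale={scale:.2f} h, mean life={mean_life:.2f} h")
print(f"expected tool consumption this period = {expected_tools:.1f} tools")
```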

16.
In this report we describe a novel graphically oriented method for pathway modeling and a software package that allows for both modeling and visualization of biological networks in a user-friendly format. The Visinets mathematical approach is based on causal mapping (CMAP), which has been fully integrated with the graphical interface. Such integration allows for a fully graphical and interactive modeling process, from building the network to simulation of the finished model. To test the performance of the Visinets software we have applied it to: a) create an executable EGFR-MAPK pathway model using an intuitive, graphical, biology-driven way of modeling, and b) translate an existing ordinary differential equation (ODE) based insulin signaling model into the CMAP formalism and compare the results. Our testing fully confirmed the potential of the CMAP method for broad application to pathway modeling and visualization and, additionally, showed a significant advantage in computational efficiency. Furthermore, we showed that the Visinets web-based graphical platform, along with a standardized method of pathway analysis, may offer a novel and attractive alternative for dynamic simulation in real time for broader use in biomedical research. Since Visinets uses graphical elements with the mathematical formulas hidden from the users, we believe that this tool may be particularly suited for those who are new to pathway modeling and who lack the in-depth modeling skills often required when using other software packages.
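As a point of reference for the ODE formalism the authors compare against (the CMAP update rules themselves are not detailed in this summary), a minimal mass-action phosphorylation/dephosphorylation cycle integrated with SciPy is sketched below; the rate constants are hypothetical.

```python
# Minimal ODE model of a single phosphorylation/dephosphorylation cycle, as an
# example of the ODE formalism that Visinets compares its causal-mapping (CMAP)
# models against. Rate constants are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def cycle(t, y, k_phos=1.0, k_dephos=0.5, signal=1.0):
    x, xp = y                                     # unphosphorylated / phosphorylated
    rate = k_phos * signal * x - k_dephos * xp    # net phosphorylation rate
    return [-rate, rate]

sol = solve_ivp(cycle, t_span=(0, 10), y0=[1.0, 0.0], dense_output=True)
print("steady-state phosphorylated fraction =", round(sol.y[1, -1], 3))  # ~0.667
```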

17.
Coumaphos is a common organophosphorus pesticide used in agricultural products. It is harmful to human health and has a strictly stipulated maximum residue limit (MRL) on fruits and vegetables. Currently existing detection methods are complex to execute, require expensive instruments, and are time-consuming and labor-intensive. Surface plasmon resonance has been widely used in biomedicine and many other fields. This study discusses the detection of organophosphorus pesticide residues based on surface plasmon resonance and, as an alternative solution, proposes a method to detect Coumaphos. The method, which is based on surface plasmon resonance (SPR) and an immune reaction, belongs to the suppression (inhibition) format. A group of Coumaphos samples was tested with this method, at concentrations of 0 µg/L, 50 µg/L, 100 µg/L, 300 µg/L, 500 µg/L, 1000 µg/L, 3000 µg/L, and 5000 µg/L. By testing this group of samples, the kinetics of the reaction were analyzed and the corresponding standard curve was obtained. The detection sensitivity is below 25 µg/L, conforming to the MRL for Coumaphos stipulated by China. The method is label-free, uses only a single unpurified antibody, and can test at least 80 groups of samples continuously. It has high sensitivity and specificity, and the required equipment is simple, environmentally friendly, and easy to control. This method therefore holds promise for rapid on-site screening of large numbers of samples.
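A standard curve for such a suppression (inhibition) format is commonly fitted with a four-parameter logistic (4PL) model; the sketch below, using hypothetical SPR responses at the non-zero concentrations listed above, shows how an IC50 could be obtained with SciPy. This is not the authors' own fitting procedure.

```python
# Four-parameter logistic (4PL) fit of a hypothetical inhibition standard curve
# for a suppression-format SPR immunoassay. Responses are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, top, bottom, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** slope)

conc = np.array([50, 100, 300, 500, 1000, 3000, 5000], dtype=float)   # Coumaphos, µg/L
response = np.array([95, 88, 70, 58, 40, 22, 15], dtype=float)        # SPR response units

params, _ = curve_fit(four_pl, conc, response, p0=[100, 10, 500, 1.0])
top, bottom, ic50, slope = params
print(f"fitted IC50 = {ic50:.0f} µg/L")
```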

18.
The taxonomic classification of the genus Vernonia Schreb. is complex and, as yet, unclear. Here we report the use of untargeted metabolomics approaches, followed by multivariate analysis methods, for the phytochemical characterization of ten Vernonia species. Metabolic fingerprints were obtained by accurate mass measurements and used to determine the phytochemical similarities and differences between species through multivariate analyses. Principal component analysis based on the relative levels of 528 metabolites indicated that the ten species could be clustered into four groups. V. polyanthes was the only species in which the flavones chrysoeriol-7-O-glycuronyl and acacetin-7-O-glycuronyl and the sesquiterpene lactones piptocarphin A and piptocarphin B were present, while glaucolide A was detected in both V. brasiliana and V. polyanthes, separating these species from the two other species of the Vernonanthura group. Species from the Lessingianthus group were unique in showing a positive response in the foam test, suggesting the presence of saponins, which could be confirmed by metabolite annotation. V. rufogrisea showed a great variety of sesquiterpene lactones, placing this species in a separate group. Species within the Chrysolaena group were unique in accumulating clovamide. Our LC-MS-based profiling results, combined with multivariate analyses, suggest that metabolomics approaches such as untargeted LC-MS may be used as a large-scale chemotaxonomical tool, in addition to classical morphological and cytotaxonomical approaches, to facilitate taxonomic classification.
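As a sketch of the multivariate step described here (PCA on relative metabolite levels, followed by visual clustering of the species), scikit-learn can be used as follows; the random matrix is a hypothetical stand-in for the 528-metabolite table.

```python
# PCA on a (samples x metabolites) matrix of relative intensities, as used to
# cluster species by their metabolic fingerprints. The random matrix stands in
# for the real 528-metabolite data set, which is not reproduced here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(30, 528))   # 30 samples, 528 metabolites

scaled = StandardScaler().fit_transform(np.log1p(X))     # log-transform, then autoscale
scores = PCA(n_components=2).fit_transform(scaled)       # PC1/PC2 scores for plotting
print("PC1/PC2 scores for the first three samples:\n", scores[:3].round(2))
```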

19.
Global demand for soybean and its products has stimulated research into the production of novel genotypes with higher yields, greater drought and disease tolerance, and shorter growth times. Genetic research may be the most effective way to continue developing high-performing cultivars with desirable agronomic features and improved nutritional content and seed performance. Metabolomics, which identifies metabolic markers of plant performance under stressful conditions, is rapidly gaining interest in plant breeding and has emerged as a powerful tool for driving crop improvement. The development of increasingly sensitive, automated, and high-throughput analytical technologies, paired with improved bioinformatics and other omics techniques, has paved the way for the broad characterization of genetic traits for crop improvement. The combination of chromatography (liquid- and gas-based) with mass spectrometry has also proven to be an indisputably efficient platform for metabolomic studies, notably plant metabolic fingerprinting investigations. Nevertheless, there has been significant progress in the use of nuclear magnetic resonance (NMR), capillary electrophoresis, and Fourier-transform infrared spectroscopy (FTIR), each with its own set of benefits and drawbacks. Furthermore, using multivariate analyses such as principal component analysis (PCA), discriminant analysis, and projection to latent structures (PLS), it is possible to identify and differentiate various groups. The studied soybean varieties can be correctly classified using PCA and PLS multivariate analyses. As metabolomics is an effective method for evaluating and selecting wild specimens with desirable features for the breeding of improved new cultivars, plant breeders can benefit from the identification of metabolite biomarkers and key metabolic pathways to develop new genotypes with value-added features.

20.

Background

The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volumes of biological data call for extreme computational power and special computing facilities (i.e. supercomputers). An inexpensive alternative, such as General-Purpose computation on Graphics Processing Units (GPGPU), can be adapted to tackle this challenge, but the limited internal memory of the device poses a new problem of scalability. Efficient data and computational parallelism with partitioning is therefore required to provide a fast and scalable solution to this problem.

Results

We propose an efficient parallel formulation of the k-Nearest Neighbour (kNN) search problem, a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning, and bioinformatics. Although very simple and straightforward, kNN search degrades dramatically in performance for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool, GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour), for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50–60 times compared with a CPU implementation on a well-known breast microarray study and its associated data sets.
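GPU-FS-kNN itself is written in CUDA; the partitioning idea it relies on (process the data in chunks so the distance computation never exceeds a fixed memory budget) can be sketched in NumPy as follows. The chunk size and test data are illustrative, and the CUDA implementation is not reproduced here.

```python
# Chunked brute-force kNN: process query points in blocks so that the distance
# matrix never exceeds a fixed memory budget -- the same partitioning idea that
# GPU-FS-kNN uses to fit large problems into limited GPU memory.
import numpy as np

def knn_chunked(data: np.ndarray, k: int, chunk: int = 1024):
    n = data.shape[0]
    neighbours = np.empty((n, k), dtype=np.int64)
    sq_norms = (data ** 2).sum(axis=1)
    for start in range(0, n, chunk):
        block = data[start:start + chunk]
        # Squared Euclidean distances of this block against all points.
        d2 = sq_norms[start:start + chunk, None] - 2.0 * block @ data.T + sq_norms[None, :]
        # Exclude each point from being its own neighbour.
        d2[np.arange(block.shape[0]), np.arange(start, start + block.shape[0])] = np.inf
        neighbours[start:start + chunk] = np.argpartition(d2, k, axis=1)[:, :k]
    return neighbours

X = np.random.default_rng(0).normal(size=(5000, 32))
print(knn_chunked(X, k=10).shape)   # (5000, 10)
```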

Conclusion

Our GPU-based Fast and Scalable k-Nearest Neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest-neighbour computation in large-scale networks. The source code and the software tool are available under the GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/.
