Similar Articles
20 similar articles found (search time: 31 ms)
1.
The pre-Bötzinger complex (PBC) in the rostral ventrolateral medulla contains a kernel involved in respiratory rhythm generation. So far, its respiratory activity has been analyzed predominantly by electrophysiological approaches. Recent advances in fluorescence imaging now allow for the visualization of neuronal population activity in rhythmogenic networks. In the respiratory network, voltage-sensitive dyes have mainly been used so far, but their low sensitivity prevents an analysis of the activity patterns of single neurons during rhythmogenesis. We have now succeeded in using more sensitive Ca2+ imaging to study respiratory neurons in rhythmically active brain stem slices of neonatal rats. For the visualization of neuronal activity, fluo-3 was best suited in terms of neuronal specificity, minimized background fluorescence, and response magnitude. The tissue penetration of fluo-3 was improved by hyperosmolar treatment (100 mM mannitol) during dye loading. Rhythmic population activity was imaged with single-cell resolution using a sensitive charge-coupled device camera and a ×20 objective, and it was correlated with the extracellularly recorded mass activity of the contralateral PBC. Correlated optical neuronal activity was obvious online in 29% of slices. Rhythmic neurons located deeper became detectable during offline image processing. Based on their activity patterns, 74% of rhythmic neurons were classified as inspiratory and 26% as expiratory neurons. Our approach is well suited to visualize and correlate the activity of several single cells with respiratory network activity. We demonstrate that neuronal synchronization and possibly even network configurations can be analyzed in a noninvasive approach with single-cell resolution and at frame rates currently not reached by most scanning-based imaging techniques.
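To make the offline classification step concrete, here is a minimal numpy sketch (illustrative only: the synthetic traces, the percentile baseline and the zero-lag correlation rule are assumptions, not the authors' protocol) of computing ΔF/F from a fluo-3 intensity trace and correlating it with the rectified mass activity of the contralateral PBC to label a neuron inspiratory or expiratory.

```python
import numpy as np

def dff(f, baseline_percentile=20):
    """Delta-F/F with a percentile baseline (robust to rhythmic bursts)."""
    f0 = np.percentile(f, baseline_percentile)
    return (f - f0) / f0

def phase_correlation(cell_trace, mass_trace):
    """Normalized zero-lag correlation between a single-cell Ca trace
    and the rectified extracellular mass activity of the contralateral PBC."""
    c = (cell_trace - cell_trace.mean()) / cell_trace.std()
    m = (mass_trace - mass_trace.mean()) / mass_trace.std()
    return float(np.mean(c * m))

# Synthetic example: an inspiratory cell bursts in phase with mass activity.
t = np.linspace(0, 60, 6000)                              # 60 s at 100 Hz
mass = np.clip(np.sin(2 * np.pi * 0.2 * t), 0, None)      # ~12 bursts/min rhythm
cell = 1.0 + 0.3 * mass + 0.05 * np.random.randn(t.size)  # fluo-3 intensity

r = phase_correlation(dff(cell), mass)
print("inspiratory" if r > 0 else "expiratory", f"(r = {r:.2f})")
```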

2.
Voltage imaging enables monitoring neural activity at sub-millisecond and sub-cellular scale, unlocking the study of subthreshold activity, synchrony, and network dynamics with unprecedented spatio-temporal resolution. However, high data rates (>800 MB/s) and low signal-to-noise ratios create bottlenecks for analyzing such datasets. Here we present VolPy, an automated and scalable pipeline to pre-process voltage imaging datasets. VolPy features motion correction, memory mapping, automated segmentation, denoising and spike extraction, all built on a highly parallelizable, modular, and extensible framework optimized for memory and speed. To aid automated segmentation, we introduce a corpus of 24 manually annotated datasets from different preparations, brain areas and voltage indicators. We benchmark VolPy against ground truth segmentation, simulations and electrophysiology recordings, and we compare its performance with existing algorithms in detecting spikes. Our results indicate that VolPy’s performance in spike extraction and scalability are state-of-the-art.
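For intuition about what the spike-extraction stage of such a pipeline must accomplish, here is a generic sketch (not VolPy's actual algorithm; filter order, cutoff and threshold are illustrative assumptions): high-pass filter a single ROI trace to remove drift and photobleaching, estimate the noise floor robustly, and take upward threshold crossings as spikes.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_spikes(trace, fs, hp_cutoff=30.0, thresh_sd=4.0):
    """Toy spike extraction for a voltage-imaging ROI trace.

    A high-pass filter removes slow drift/bleaching; spikes are samples
    crossing thresh_sd robust standard deviations (MAD-based estimate).
    """
    b, a = butter(2, hp_cutoff / (fs / 2), btype="high")
    hp = filtfilt(b, a, trace)
    sigma = np.median(np.abs(hp)) / 0.6745  # robust noise level
    above = hp > thresh_sd * sigma
    # Spike times = rising edges of the supra-threshold mask.
    return np.flatnonzero(np.diff(above.astype(int)) == 1)

fs = 1000.0                                 # 1 kHz voltage imaging
t = np.arange(0, 2, 1 / fs)
trace = 0.01 * np.random.randn(t.size)
trace[::200] += 0.2                         # inject sparse spikes
print(extract_spikes(trace, fs))
```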

3.
Harmonic analysis on manifolds and graphs has recently led to mathematical developments in the field of data analysis. The resulting new tools can be used to compress and analyze large and complex data sets, such as those derived from sensor networks or neuronal activity datasets, obtained in the laboratory or through computer modeling. The nature of the algorithms (based on diffusion maps and connectivity strengths on graphs) possesses a certain analogy with neural information processing, and has the potential to provide inspiration for modeling and understanding biological organization in perception and memory formation.
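A compact sketch of the diffusion-map construction underlying these tools (standard recipe with illustrative data and bandwidth: Gaussian affinities, row-normalization into a Markov transition matrix, then embedding with the leading non-trivial eigenvectors):

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Diffusion-map embedding of the rows of X (n_samples x n_features)."""
    # Pairwise squared distances and Gaussian kernel.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)    # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)          # sort by decreasing eigenvalue
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t

X = np.random.rand(100, 5)                  # e.g. 100 sensors, 5 readings each
emb = diffusion_map(X, eps=0.5)
print(emb.shape)                            # (100, 2)
```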

4.

Background

Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive and requires complex numerical operations and large memory resources. Substantial hardware resources are therefore needed for hardware implementations of PCA. The generalized Hebbian algorithm (GHA) was proposed for calculating PCs of neuronal spikes in our previous work; it eliminates the need for the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. Such large memories consume substantial hardware resources and contribute significant power dissipation, making GHA difficult to implement in portable or implantable multi-channel recording micro-systems.

Method

In this paper, we present a new algorithm for PCA-based spike sorting built on GHA, the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while preserving spike-sorting accuracy by exploiting the pseudo-stationarity of neuronal spikes. By removing these large storage requirements, the proposed algorithm enables hardware implementations with ultra-low resource usage and power consumption, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording datasets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The spike-sorting performance and computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reductions in both power consumption and hardware memory achieved by streaming computation.
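For reference, the core update the eigenfilter streams is the generalized Hebbian (Sanger) rule; a minimal numpy sketch follows (textbook GHA on synthetic spike windows, not the authors' FPGA implementation; learning rate and sizes are illustrative). Each spike updates the PC estimates in place, so no spike buffer or covariance matrix is ever stored.

```python
import numpy as np

def gha_update(W, x, lr=1e-3):
    """One generalized Hebbian (Sanger's rule) step on a single spike x.

    W: (n_components, n_samples) current PC estimates.
    Streaming: no covariance matrix and no stored spike buffer, which
    is the memory property the stream-based eigenfilter exploits.
    """
    y = W @ x                                # component outputs
    # Sanger's rule: W += lr * (y x^T - lower_triangular(y y^T) W)
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
n_samples, n_pcs = 32, 3                     # 32-sample aligned spike windows
W = rng.standard_normal((n_pcs, n_samples)) * 0.01
for _ in range(10000):                       # spikes arriving as a stream
    x = rng.standard_normal(n_samples)       # stand-in for an aligned spike
    W = gha_update(W, x)
print(np.round(np.linalg.norm(W, axis=1), 2))
```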

Results and discussion

Results demonstrate that the stream-based eigenfilter achieves the same accuracy as conventional PCA algorithms while being 10 times more computationally efficient. Hardware evaluations show that the stream-based eigenfilter reduces logic resources by 90.3%, power consumption by 95.1% and computing latency by 86.8% compared with PCA hardware. By utilizing the streaming method, 92% of memory resources and 67% of power consumption can be saved compared with a direct implementation of GHA.

Conclusion

Stream-based Hebbian eigenfilter presents a novel approach to enable real-time spike sorting with reduced computational complexity and hardware costs. This new design can be further utilized for multi-channel neuro-physiological experiments or chronic implants.

5.

Background

Since both the number of SNPs (single nucleotide polymorphisms) used in genomic prediction and the number of individuals used in training datasets are rapidly increasing, there is a growing need to improve the efficiency of genomic prediction models in terms of the computing time and memory (RAM) required.

Methods

In this paper, two alternative algorithms for genomic prediction are presented that replace the originally suggested residual updating algorithm without affecting the estimates. The first alternative algorithm continues to use residual updating but takes advantage of the characteristic that the predictor variables in the model (i.e. the SNP genotypes) take only three different values, and is therefore termed “improved residual updating”. The second alternative algorithm, here termed “right-hand-side updating” (RHS-updating), extends the idea of improved residual updating across multiple SNPs. The alternative algorithms can be implemented for a range of different genomic prediction models, including random regression BLUP (best linear unbiased prediction) and most Bayesian genomic prediction models. To test the required computing time and RAM, both alternative algorithms were implemented in a Bayesian stochastic search variable selection model.
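A minimal sketch of plain residual updating for ridge-type SNP effect estimation may help fix ideas (illustrative only; the improved and RHS-updating variants and the Bayesian variable-selection setting of the paper are not reproduced). The comment marks the dot product that the improved variant replaces with three per-genotype group sums.

```python
import numpy as np

def gauss_seidel_residual_update(X, y, lam=10.0, n_iter=20):
    """Ridge-type SNP effect estimation with residual updating.

    X: (n_individuals, n_snps) genotype matrix coded 0/1/2.
    The residual vector e is kept current, so each SNP update costs
    O(n_individuals) instead of refitting the whole model.
    """
    n, p = X.shape
    b = np.zeros(p)
    e = y.copy()                             # residuals for b == 0
    xtx = (X ** 2).sum(axis=0)               # precomputed diagonal
    for _ in range(n_iter):
        for j in range(p):
            xj = X[:, j]
            # The "improved" variant would replace xj @ e by three group
            # sums, since xj takes only the values 0, 1 and 2.
            rhs = xj @ e + xtx[j] * b[j]
            b_new = rhs / (xtx[j] + lam)
            e -= xj * (b_new - b[j])         # keep residuals in sync
            b[j] = b_new
    return b

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(500, 200)).astype(float)
y = X @ (rng.standard_normal(200) * 0.05) + rng.standard_normal(500)
print(gauss_seidel_residual_update(X, y)[:5])
```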

Results

Compared to the original algorithm, the improved residual updating algorithm reduced CPU time by 35.3 to 43.3%, without changing memory requirements. The RHS-updating algorithm reduced CPU time by 74.5 to 93.0% and memory requirements by 13.1 to 66.4% compared to the original algorithm.

Conclusions

The presented RHS-updating algorithm provides an interesting alternative to reduce both computing time and memory requirements for a range of genomic prediction models.

6.
Increasingly, applications need to be able to self-reconfigure in response to changing requirements and environmental conditions. Autonomic computing has been proposed as a means for automating software maintenance tasks. As the complexity of adaptive and autonomic systems grows, designing and managing the set of reconfiguration rules becomes increasingly challenging and may produce inconsistencies. This paper proposes an approach to leverage genetic algorithms in the decision-making process of an autonomic system. This approach enables a system to dynamically evolve target reconfigurations at run time that balance tradeoffs between functional and non-functional requirements in response to changing requirements and environmental conditions. A key feature of this approach is incorporating system and environmental monitoring information into the genetic algorithm such that specific changes in the environment automatically drive the evolutionary process towards new viable solutions. We have applied this genetic-algorithm based approach to the dynamic reconfiguration of a collection of remote data mirrors, demonstrating an effective decision-making method for diffusing data and minimizing operational costs while maximizing data reliability and network performance, even in the presence of link failures.
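A toy sketch of such a GA-driven decision loop (hypothetical fitness function and link model, not the authors' system): an environmental monitoring result, here a link-failure mask, enters the fitness function directly, so re-running the loop after a failure steers evolution toward configurations that stay viable.

```python
import numpy as np

rng = np.random.default_rng(2)
N_LINKS = 12                                 # candidate links between data mirrors

def fitness(genome, link_failed):
    """Trade off operational cost against reliability; the monitoring
    input (link_failed mask) re-shapes the landscape at run time."""
    active = genome & ~link_failed            # failed links contribute nothing
    reliability = active.sum() / N_LINKS
    cost = genome.sum() / N_LINKS             # every enabled link costs money
    return 2.0 * reliability - cost

def evolve(link_failed, pop_size=40, gens=60, p_mut=0.05):
    pop = rng.integers(0, 2, size=(pop_size, N_LINKS), dtype=bool)
    for _ in range(gens):
        scores = np.array([fitness(g, link_failed) for g in pop])
        parents = pop[np.argsort(-scores)[: pop_size // 2]]  # truncation selection
        cut = rng.integers(1, N_LINKS, size=pop_size // 2)   # one-point crossover
        children = np.array([np.concatenate([parents[i % len(parents)][:c],
                                             parents[(i + 1) % len(parents)][c:]])
                             for i, c in enumerate(cut)])
        children ^= rng.random(children.shape) < p_mut       # bit-flip mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(g, link_failed) for g in pop])]

failed = np.zeros(N_LINKS, dtype=bool); failed[3] = True     # a link goes down
print(evolve(failed).astype(int))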

7.

Background

Conventional pairwise sequence comparison software algorithms are being used to process much larger datasets than they were originally designed for. This can result in processing bottlenecks that limit software capabilities or prevent full use of the available hardware resources. Overcoming the barriers that limit the efficient computational analysis of large biological sequence datasets by retrofitting existing algorithms or by creating new applications represents a major challenge for the bioinformatics community.

Results

We have developed C libraries for pairwise sequence comparison on diverse architectures, ranging from commodity systems to high-performance and cloud computing environments. Exhaustive tests were performed using different datasets of closely and distantly related sequences that span from small viral genomes to large mammalian chromosomes. The tests demonstrated that our solution is capable of generating high-quality results with a linear-time response and controlled memory consumption, and is comparable to or faster than current state-of-the-art methods.

Conclusions

We have addressed the problem of pairwise and all-versus-all comparison of large sequences in general, greatly increasing the limits on input data size. The approach described here is based on a modular out-of-core strategy that uses secondary storage to avoid reaching memory limits during the identification of High-scoring Segment Pairs (HSPs) between the sequences under comparison. Software engineering concepts were applied to avoid intermediate result re-calculation, to minimise the performance impact of input/output (I/O) operations and to modularise the process, thus enhancing application flexibility and extensibility. Our computationally efficient approach allows tasks such as the massive comparison of complete genomes, evolutionary event detection, the identification of conserved synteny blocks and inter-genome distance calculations to be performed more effectively.
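A toy illustration of the out-of-core idea (not the released C libraries; k-mer seeding and the binary hit file are simplifying assumptions): the query is streamed from disk in fixed-size chunks and candidate seed hits are appended to secondary storage, so memory use stays bounded regardless of input size.

```python
import numpy as np
import os, tempfile

def kmer_index(seq, k=12):
    """Dictionary from k-mer to positions, built for the smaller sequence."""
    idx = {}
    for i in range(len(seq) - k + 1):
        idx.setdefault(seq[i:i + k], []).append(i)
    return idx

def stream_hits(query_path, index, k=12, chunk=1 << 20, out_path="hits.bin"):
    """Scan a (possibly chromosome-sized) query in chunks and append
    (query_pos, subject_pos) seed hits to disk, keeping RAM bounded."""
    with open(query_path) as f, open(out_path, "wb") as out:
        pos, tail = 0, ""
        while True:
            block = f.read(chunk)
            if not block:
                break
            text = tail + block
            for i in range(len(text) - k + 1):
                for j in index.get(text[i:i + k], ()):
                    np.array([pos + i - len(tail), j], dtype=np.int64).tofile(out)
            tail = text[-(k - 1):]            # k-mers spanning chunk borders
            pos += len(block)
    return out_path

# Tiny demo: index a short subject, stream a query from disk.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("ACGT" * 50)
    query_path = tmp.name
hits = stream_hits(query_path, kmer_index("ACGT" * 30), out_path=query_path + ".hits")
print(os.path.getsize(hits) // 16, "seed hits written to disk")
os.remove(query_path); os.remove(hits)
```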

Electronic supplementary material

The online version of this article (doi:10.1186/s12859-015-0679-9) contains supplementary material, which is available to authorized users.

8.
Offline processing has been shown to strengthen memory traces and enhance learning in the absence of conscious rehearsal or awareness. Here we evaluate whether a brief, two-minute offline processing period can boost associative learning, and test a memory reactivation account for these offline processing effects. After encoding paired associates, subjects either completed a distractor task for two minutes or were immediately tested for memory of the pairs in a counterbalanced, within-subjects functional magnetic resonance imaging study. Results showed that brief, awake, offline processing improves memory for associate pairs. Moreover, multi-voxel pattern analysis of the neuroimaging data suggested reactivation of encoded memory representations in dorsolateral prefrontal cortex during offline processing. These results represent the first demonstration of awake, active, offline enhancement of associative memory and suggest that such enhancement is accompanied by the offline reactivation of encoded memory representations.

9.
This paper proposes a novel multi-label classification method for spacecraft electrical characteristics problems, which involve large amounts of unlabeled test data, high-dimensional features, long computing times and slow identification rates. First, fuzzy c-means (FCM) offline clustering and principal component feature extraction algorithms are applied for feature selection. Second, an approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a threshold-based data capture contribution method is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification and effectively shorten the computing time.
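A minimal sketch of the fuzzy c-means stage (the standard alternating update of memberships and centroids; the WPSVM classifier and the spacecraft data themselves are not reproduced, and all names here are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    return U, centers

X = np.vstack([np.random.randn(50, 4) + off for off in (0, 5, 10)])
U, centers = fuzzy_c_means(X, c=3)
print(U.argmax(axis=1)[:10])                 # hard labels from fuzzy memberships
```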

10.
Agglomerative hierarchical clustering becomes infeasible when applied to large datasets due to its O(N²) storage requirements. We present a multi-stage agglomerative hierarchical clustering (MAHC) approach aimed at large datasets of speech segments. The algorithm is based on an iterative divide-and-conquer strategy. The data is first split into independent subsets, each of which is clustered separately. This reduces the storage required for sequential implementations and allows concurrent computation on parallel computing hardware. The resultant clusters are merged and subsequently re-divided into subsets, which are passed to the following iteration. We show that MAHC can match and even surpass the performance of the exact implementation when applied to datasets of speech segments.
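A compact sketch of one split/cluster/merge cycle (illustrative: subset counts, centroid representatives and scipy's average linkage are assumptions, not the paper's exact merge rule). Each subset is clustered independently, so the quadratic storage cost applies only to subsets and the per-subset calls could run in parallel.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_subset(X, n_clusters):
    """Exact agglomerative clustering on one subset; O(len(X)^2) memory,
    but applied only to a subset, never to the full dataset."""
    labels = fcluster(linkage(X, method="average"), n_clusters, criterion="maxclust")
    # Represent each cluster by its centroid for the merge stage.
    return np.array([X[labels == k].mean(axis=0) for k in np.unique(labels)])

def mahc(X, n_subsets=4, n_clusters=10, n_stages=3, seed=0):
    """Multi-stage agglomerative clustering: split -> cluster -> merge -> repeat."""
    rng = np.random.default_rng(seed)
    reps = X
    for _ in range(n_stages):
        perm = rng.permutation(len(reps))
        subsets = np.array_split(reps[perm], n_subsets)  # independently clusterable
        reps = np.vstack([cluster_subset(s, n_clusters) for s in subsets])
    return reps                              # final cluster representatives

X = np.random.randn(2000, 13)                # e.g. feature vectors of speech segments
print(mahc(X).shape)
```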

11.
Much remains to be discovered about the fate of recent memories in the human brain. Several studies have reported the reactivation of learning-related cerebral activity during post-training sleep, suggesting that sleep plays a role in the offline processing and consolidation of memory. However, little is known about how new information is maintained and processed during post-training wakefulness before sleep, while the brain is actively engaged in other cognitive activities. We show, using functional magnetic resonance imaging, that brain activity elicited during a new learning episode modulates brain responses to an unrelated cognitive task, during the waking period following the end of training. This post-training activity evolves in learning-related cerebral structures, in which functional connections with other brain regions are gradually established or reinforced. It also correlates with behavioral performance. These processes follow a different time course for hippocampus-dependent and hippocampus-independent memories. Our experimental approach allowed the characterization of the offline evolution of the cerebral correlates of recent memories, without the confounding effect of concurrent practice of the learned material. Results indicate that the human brain has already extensively processed recent memories during the first hours of post-training wakefulness, even when simultaneously coping with unrelated cognitive demands.

12.
The emergence of cloud computing has made it an attractive solution for large-scale data processing and storage applications. Cloud infrastructures provide users remote access to powerful computing capacity, large storage space and high network bandwidth to deploy various applications. With the support of cloud computing, many large-scale applications have been migrated to cloud infrastructures instead of running on in-house local servers. Among these applications, continuous write applications (CWAs), such as online surveillance systems, can benefit significantly from the flexibility and advantages of cloud computing. However, given specific characteristics such as continuous data writing and processing and a high demand for data availability, cloud service providers prefer sophisticated models for provisioning resources that meet CWAs’ demands while minimizing the operational cost of the infrastructure. In this paper, we present a novel architecture of multiple cloud service providers (CSPs), commonly referred to as a Cloud-of-Clouds. Based on this architecture, we propose two operational-cost-aware algorithms for provisioning cloud resources for CWAs, namely the neighboring optimal resource provisioning algorithm (NORPA) and the global optimal resource provisioning algorithm (GORPA), in order to minimize the operational cost and thereby maximize the revenue of CSPs. We validate the proposed algorithms through comprehensive simulations. The two proposed algorithms are compared against each other to assess their effectiveness, and against a commonly used and practically viable round-robin approach. The results demonstrate that NORPA and GORPA outperform the conventional round-robin algorithm by reducing the operational cost by up to 28% and 57%, respectively. The low complexity of the proposed cost-aware algorithms allows us to apply them to realistic Cloud-of-Clouds environments in industry as well as academia.

13.
In recent years, sympatry networks have been proposed as a means to perform biogeographic analysis, but their computation posed practical difficulties that limited their use. We propose a novel approach, bringing the application of well-established network analysis tools closer to the study of sympatry patterns, using both geographic and environmental data associated with the occurrence of species. Our proposed algorithm, SGraFuLo, combines fuzzy logic and numerical methods to compute the network of interest directly from point locality records, without the need for specialized tools such as geographic information systems, thereby simplifying the process for end users. By posing the problem in matrix terms, SGraFuLo is able to achieve remarkable efficiency even for large datasets, taking advantage of well-established scientific computing algorithms. We present sympatry networks constructed using real-world data collected in Mexico and Central America and highlight the potential of our approach in the analysis of overlapping niches of species, which could have important applications even in evolutionary studies. We also present details of the design and implementation of the algorithm, as well as experiments that show its efficiency. The source code is freely released, and datasets are also available to support the reproducibility of our results.
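To make the matrix formulation concrete, here is a toy version (a sketch under assumptions: Gaussian fuzzy presence around occurrence points and min-overlap aggregation on a grid; SGraFuLo's actual membership functions and weighting may differ):

```python
import numpy as np

def sympatry_network(records, grid, sigma=0.5):
    """records: dict species -> (n_i, 2) array of lon/lat occurrences.
    grid: (G, 2) evaluation points. Returns the species list and a weighted
    adjacency matrix of fuzzy range overlap (min-overlap summed over grid)."""
    species = sorted(records)
    M = np.zeros((len(species), len(grid)))
    for i, sp in enumerate(species):
        pts = records[sp]
        d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        M[i] = np.exp(-d2 / (2 * sigma ** 2)).max(axis=1)  # fuzzy presence in [0, 1]
    # Sympatry weight = sum over the grid of the pointwise minimum membership.
    A = np.minimum(M[:, None, :], M[None, :, :]).sum(axis=2)
    np.fill_diagonal(A, 0.0)
    return species, A

rng = np.random.default_rng(3)
recs = {f"sp{i}": rng.uniform(i, i + 2, size=(30, 2)) for i in range(4)}
grid = np.stack(np.meshgrid(np.linspace(0, 6, 40), np.linspace(0, 6, 40)), -1).reshape(-1, 2)
species, A = sympatry_network(recs, grid)
print(np.round(A, 1))
```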

14.
Understanding the structure–function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.

15.
The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

16.
Optical coherence tomography (OCT) is a high-speed, high-resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial given the increasing amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that segments OCT volume data in the macular region quickly and accurately. The proposed method is implemented using shortest-path-based graph search, which detects the retinal boundaries by searching for the shortest path between two end nodes using Dijkstra’s algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing, were introduced to exploit the spatial dependency between adjacent frames and reduce the processing time. Our segmentation algorithm was evaluated by comparison with manual labelings and three state-of-the-art graph-based segmentation methods. The processing time for a whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds, at least a 2- to 8-fold speedup compared with the similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼4 microns), which was also lower than that of the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data.
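A bare-bones sketch of the shortest-path idea on a single B-scan (illustrative cost image and connectivity; the inter-frame refinements of OCTRIMA 3D are omitted): Dijkstra's algorithm finds the minimum-cost left-to-right path, which hugs a retinal boundary when costs are low along strong vertical gradients.

```python
import heapq
import numpy as np

def boundary_dijkstra(weight):
    """Shortest left-to-right path through a 2D cost image (rows x cols).

    Nodes are pixels; from (r, c) you may step to rows r-1, r, r+1 in
    column c+1, mimicking graph-search layer segmentation of one B-scan.
    All first-column pixels are free entry points; reaching any pixel in
    the last column terminates the search (virtual start/end nodes).
    """
    R, C = weight.shape
    dist = np.full((R, C), np.inf)
    prev = {}
    pq = [(weight[r, 0], (r, 0)) for r in range(R)]
    for d, node in pq:
        dist[node] = d
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r, c]:
            continue                          # stale heap entry
        if c == C - 1:                        # reached the last column
            path = [(r, c)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return np.array(path[::-1])
        for dr in (-1, 0, 1):
            nr = r + dr
            if 0 <= nr < R and d + weight[nr, c + 1] < dist[nr, c + 1]:
                dist[nr, c + 1] = d + weight[nr, c + 1]
                prev[(nr, c + 1)] = (r, c)
                heapq.heappush(pq, (dist[nr, c + 1], (nr, c + 1)))
    raise RuntimeError("no path found")

# Dark-to-bright vertical gradients get low cost, so the path hugs a boundary.
img = np.random.rand(64, 128)
grad = np.abs(np.diff(img, axis=0, prepend=img[:1]))
path = boundary_dijkstra(1.0 - grad / (grad.max() + 1e-9))
print(path[:5])
```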

17.
For the analysis of neuronal cooperativity, simultaneously recorded extracellular signals from neighboring neurons need to be sorted reliably by a spike sorting method. Many algorithms have been developed to this end, however, to date, none of them manages to fulfill a set of demanding requirements. In particular, it is desirable to have an algorithm that operates online, detects and classifies overlapping spikes in real time, and that adapts to non-stationary data. Here, we present a combined spike detection and classification algorithm, which explicitly addresses these issues. Our approach makes use of linear filters to find a new representation of the data and to optimally enhance the signal-to-noise ratio. We introduce a method called “Deconfusion” which de-correlates the filter outputs and provides source separation. Finally, a set of well-defined thresholds is applied and leads to simultaneous spike detection and spike classification. By incorporating a direct feedback, the algorithm adapts to non-stationary data and is, therefore, well suited for acute recordings. We evaluate our method on simulated and experimental data, including simultaneous intra/extra-cellular recordings made in slices of a rat cortex and recordings from the prefrontal cortex of awake behaving macaques. We compare the results to existing spike detection as well as spike sorting methods. We conclude that our algorithm meets all of the mentioned requirements and outperforms other methods under realistic signal-to-noise ratios and in the presence of overlapping spikes.
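The linear-filtering core of such an approach can be sketched in a few lines (matched filtering with a known template; the paper's filter design and its "Deconfusion" de-correlation step are not reproduced, and all parameters are illustrative):

```python
import numpy as np

def matched_filter_detect(signal, template, thresh_sd=5.0):
    """Linear (matched) filtering for spike detection.

    Correlating with a unit-energy template maximizes SNR for that
    waveform in white noise; thresholding the filter output gives
    detections. (De-correlating several such filter outputs, as in
    "Deconfusion", is omitted here.)
    """
    h = template / np.linalg.norm(template)
    out = np.correlate(signal, h, mode="same")
    sigma = np.median(np.abs(out)) / 0.6745   # robust noise level
    above = out > thresh_sd * sigma
    return np.flatnonzero(np.diff(above.astype(int)) == 1)

rng = np.random.default_rng(4)
template = -np.exp(-0.5 * ((np.arange(32) - 10) / 3.0) ** 2)  # spike waveform
x = rng.standard_normal(20000) * 0.2
for t0 in (3000, 9000, 15000):                # embed three spikes
    x[t0:t0 + 32] += 2.0 * template
print(matched_filter_detect(x, template))
```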

18.
MOTIVATION: Recently, the concept of constrained sequence alignment was proposed to incorporate biologists' knowledge about the structures/functionalities/consensuses of their datasets into sequence alignment, such that user-specified residues/nucleotides are aligned together in the computed alignment. The currently developed programs use the so-called progressive approach to efficiently obtain a constrained alignment of several sequences. However, the kernels of these programs, the dynamic programming algorithms for computing an optimal constrained alignment between two sequences, run in O(γn²) memory, where γ is the number of constraints and n is the maximum of the lengths of the sequences. As a result, this high memory requirement limits the overall programs to aligning short sequences only. RESULTS: We adopt the divide-and-conquer approach to design a memory-efficient algorithm for computing an optimal constrained alignment between two sequences, which greatly reduces the memory requirement of the dynamic programming approaches at the expense of a small constant factor in CPU time. This new algorithm consumes only O(αn) space, where α is the sum of the lengths of the constraints and usually α < n in practical applications. Based on this algorithm, we have developed a memory-efficient tool for multiple sequence alignment with constraints. AVAILABILITY: http://genome.life.nctu.edu.tw/MUSICME.
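The memory saving rests on the classic observation that alignment scores need only two dynamic programming rows; a minimal unconstrained sketch follows (the paper's constrained, divide-and-conquer algorithm builds on this idea but is not reproduced here).

```python
def nw_score_linear_space(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch score in O(len(b)) memory.

    Only the previous and current DP rows are kept. Hirschberg-style
    divide-and-conquer runs this routine forwards and backwards to
    recover the full alignment while staying in linear space, which is
    the memory trick a constrained aligner can build on.
    """
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        curr = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(nw_score_linear_space("GATTACA", "GATCA"))
```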

19.
Advanced data analysis and visualization methodologies have played an important role in making surface electromyography both a valuable diagnostic methodology of neuromuscular disorders and a robust brain–machine interface, usable as a simple interface for prosthesis control, arm movement analysis, stiffness control, gait analysis, etc. But for diagnostic purposes, as well as for interfaces where the activation of single muscles is of interest, surface EMG suffers from severe crosstalk between deep and superficial muscle activation, making the reliable detection of the source of the signal, as well as reliable quantification of deeper muscle activation, prohibitively difficult. To address these issues we present a novel approach for processing surface electromyographic data. Our approach enables the reconstruction of 3D muscular activity location, making the depth of muscular activity directly visible. This is even possible when deep muscles are overlaid with superficial muscles, such as seen in the human forearm. The method, which we call imaging EMG (iEMG), is based on using the crosstalk between a sufficiently large number of surface electromyographic electrodes to reconstruct the 3D generating electrical potential distribution within a given area. Our results are validated by in vivo measurements of iEMG and ultrasound on the human forearm.
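The reconstruction amounts to an ill-posed linear inverse problem: surface potentials v relate to source amplitudes s through a lead-field matrix L, and s must be estimated from v. A minimal Tikhonov-regularized sketch follows (illustrative; iEMG's actual forward model and regularization scheme may differ).

```python
import numpy as np

def tikhonov_inverse(L, v, lam=1e-2):
    """Minimum-norm estimate of source amplitudes s from surface potentials v,
    given a forward (lead-field) matrix L: argmin ||L s - v||^2 + lam ||s||^2."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ v)

rng = np.random.default_rng(5)
n_electrodes, n_sources = 64, 300             # dense surface grid, 3D source grid
L = rng.standard_normal((n_electrodes, n_sources)) / n_sources
s_true = np.zeros(n_sources); s_true[42] = 1.0  # one deep active muscle region
v = L @ s_true + 1e-4 * rng.standard_normal(n_electrodes)
s_hat = tikhonov_inverse(L, v)
print(int(np.argmax(np.abs(s_hat))))          # ideally recovers index 42
```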

20.

Background

Activity in populations of neurons often takes the form of assemblies, where specific groups of neurons tend to activate at the same time. However, in calcium imaging data, reliably identifying these assemblies is a challenging problem, and the relative performance of different assembly-detection algorithms is unknown.

Results

To test the performance of several recently proposed assembly-detection algorithms, we first generated large surrogate datasets of calcium imaging data with predefined assembly structures and characterised the ability of the algorithms to recover known assemblies. The algorithms we tested are based on independent component analysis (ICA), principal component analysis (Promax), similarity analysis (CORE), singular value decomposition (SVD), graph theory (SGC), and frequent item set mining (FIM-X). When applied to the simulated data and tested against parameters such as array size, number of assemblies, assembly size and overlap, and signal strength, the SGC and ICA algorithms and a modified form of the Promax algorithm performed well, while PCA-Promax and FIM-X did less well, for instance showing a strong dependence on the size of the neural array. Notably, we identified additional analyses that can improve their performance. Next, we applied the same algorithms to a dataset of activity in the zebrafish optic tectum evoked by simple visual stimuli, and found that the SGC algorithm recovered assemblies closest to the averaged responses.
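As a concrete example of the ICA variant (a sketch, not one of the benchmarked implementations; the z-scoring and the |weight| > 2 SD membership rule are assumed conventions):

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_assemblies(activity, n_assemblies, z=2.0):
    """Detect co-active groups in a (neurons x frames) activity matrix.

    Each independent component is a weight vector over neurons; neurons
    whose |weight| exceeds z standard deviations of that vector are
    taken as assembly members (one common read-out convention).
    """
    A = (activity - activity.mean(1, keepdims=True)) / (activity.std(1, keepdims=True) + 1e-9)
    ica = FastICA(n_components=n_assemblies, random_state=0)
    ica.fit(A.T)                              # samples = imaging frames
    weights = ica.components_                 # (n_assemblies, n_neurons)
    return [np.flatnonzero(np.abs(w) > z * w.std()) for w in weights]

rng = np.random.default_rng(6)
n_neurons, n_frames = 60, 2000
activity = rng.poisson(0.5, size=(n_neurons, n_frames)).astype(float)
for members in ([0, 1, 2, 3], [10, 11, 12, 13]):   # two planted assemblies
    events = rng.random(n_frames) < 0.05
    activity[np.ix_(members, np.flatnonzero(events))] += 3.0
print(ica_assemblies(activity, n_assemblies=2))
```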

Conclusions

Our findings suggest that the neural assemblies recovered from calcium imaging data can vary considerably with the choice of algorithm, but that some algorithms reliably perform better than others. This suggests that previous results using these algorithms may need to be reevaluated in this light.
