Similar Articles
20 similar articles found (search time: 46 ms)
1.
We present gmblock, a block-level storage sharing system over Myrinet which uses an optimized I/O path to transfer data directly between the storage medium and the network, bypassing the host CPU and main memory bus of the storage server. It is device-driver independent and retains the protection and isolation features of the OS. We evaluate the performance of a prototype gmblock server and find that: (a) the proposed techniques eliminate memory and peripheral bus contention, increasing remote I/O bandwidth significantly, on the order of 20–200% compared to an RDMA-based approach; (b) the impact of remote I/O on local computation becomes negligible; (c) the performance characteristics of RAID storage combined with limited NIC resources reduce performance. We introduce synchronized send operations to improve the degree of disk-to-network I/O overlapping. We deploy the OCFS2 shared-disk filesystem over gmblock and show gains for various application benchmarks, provided I/O scheduling can eliminate the disk bottleneck due to concurrent access.

2.
Over the last several years, many sequence alignment tools have appeared and become popular, driven by the fast evolution of next-generation sequencing technologies. Researchers using such tools naturally want maximum performance when executing them on modern infrastructures. Today's NUMA (non-uniform memory access) architectures present major challenges in getting such applications to achieve good scalability as more processors/cores are used. The memory system in NUMA machines is highly complex and may be the main cause of an application's performance loss. The existence of several memory banks in NUMA systems implies increased latency when a given processor accesses a remote bank. This phenomenon is usually attenuated by strategies that increase the locality of memory accesses. However, NUMA systems may also suffer from contention problems, which occur when concurrent accesses are concentrated on a small number of banks. Sequence alignment tools use large data structures to hold the reference genomes to which all reads are aligned, and are therefore very sensitive to memory-system performance problems. The main goal of this study is to explore the trade-offs between data locality and data dispersion in NUMA systems. We have performed experiments with several popular sequence alignment tools on two widely available NUMA systems to assess the performance of different memory allocation policies and data partitioning strategies. We find that no single method is best in all cases. However, we conclude that memory interleaving is the allocation strategy that provides the best performance when a large number of processors and memory banks are used. For data partitioning, the best results are usually obtained with a larger number of partitions, sometimes combined with an interleave policy.
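The locality-versus-interleaving trade-off discussed above can be illustrated with a toy page-placement model. This is a sketch only: `interleave_pages` mimics the round-robin behavior of Linux's `MPOL_INTERLEAVE` policy and `first_touch_pages` mimics default first-touch allocation; no real NUMA allocation is performed and all names are illustrative.

```python
from collections import Counter

def interleave_pages(num_pages, num_banks):
    """Round-robin page placement, as under Linux's MPOL_INTERLEAVE policy."""
    return [page % num_banks for page in range(num_pages)]

def first_touch_pages(num_pages, touching_bank):
    """Default first-touch: every page lands on the first-touching thread's bank."""
    return [touching_bank] * num_pages

def bank_load(placement):
    """Pages per bank -- a proxy for contention under uniform access."""
    return Counter(placement)

interleaved = bank_load(interleave_pages(4096, 4))   # balanced across 4 banks
first_touch = bank_load(first_touch_pages(4096, 0))  # all pages on bank 0
```

Under uniform random access, the first-touch placement concentrates every access on one bank (maximal contention), while interleaving spreads the load evenly, which is why the study finds interleaving wins at high core counts.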

3.
The nematode Caenorhabditis elegans (CE) serves as a model system in which to explore the impact of particularly low levels of lead [250, 500, 1000 and 2000 parts per million (ppm); 1.4 × 10⁻⁶ M to 1.1 × 10⁻⁵ M/nematode] on specific metabolic pathways and processes. Chromatographic profiles of redox-active metabolites are captured through high-performance liquid chromatography coupled to electrochemical detection (Coularray/HPLC). Principal Component Analysis (PCA; unbiased cluster analysis) and the application of a slicing program located significant areas of difference within the 2.8–4.58 min section of the chromatograms. It is within this region of the data profiles that known components of the purine pathway reside. Two analytes of unknown structure were detected at 3.5 and 4 min, respectively. Alterations in levels of the purine, tryptophan and tyrosine pathway intermediates measured in response to differing concentrations of lead acetate indicate that the effect of lead on these pathways is not linear, yet the ratio of the pathway precursors, tryptophan and tyrosine, remains relatively constant. The combination of these analytical approaches enhances the value of the data generated. Exposure of CE to very low levels of lead produced significant alterations in profiles of electrochemically active compounds.

4.
A solubilized sheep red blood cell (SRBC) antigen (the supernatant fraction obtained by centrifuging 10⁷–2 × 10⁸ sonicated SRBC at 6 × 10⁴ g for 30 min [Sup-SRBC]), whose ability to inhibit anti-SRBC plaque formation was 70% of that of the original sonicated SRBC, was unable to elicit a detectable antibody response in either unprimed or SRBC-primed mice. However, Sup-SRBC as well as intact SRBC antigens generated memory for the secondary response, which was transferable to irradiated syngeneic recipients by injection of immune spleen cells. The memory generated by Sup-SRBC involved helper memory for the anti-trinitrophenyl group (TNP) response to challenge with TNP-conjugated SRBC. The increase in helper T cell memory in the spleens of Sup-SRBC-primed mice was also demonstrated by an in vitro culture experiment and by an adoptive cell transfer experiment. In contrast, no detectable B cell memory was generated by Sup-SRBC. Repeated stimulation with Sup-SRBC never induced a significant antibody response but reduced the level of memory. A single injection of a low dose (10⁶) of SRBC also failed to induce a definite primary antibody response generating memory for the secondary response. However, repeated stimulation with this dose of SRBC induced a high antibody response and generated good memory. From these results it is suggested that the intact structure of SRBC is required for the activation of B cells, but is not necessary for the stimulation of T cells.

5.
Two previous experiments on food storing and one-trial associative learning in marsh tits (Clayton 1992a; Clayton and Krebs 1992) demonstrate that information coming into the brain from the left eye disappears from the left eye system between 3 and 24 h after memory formation, whereas that coming into the brain from the right eye remains stable within the right eye system for at least 51 h after memory formation. Performance after a 7 h retention interval appears to represent an intermediate stage in which the information is no longer accessible to the left eye system but is not yet available to the right eye system, suggesting a unilateral transfer of memory. The experiments reported here further investigated lateralization and unilateral transfer of memory in food-storing marsh tits, Parus palustris, using the technique of monocular occlusion. Birds were tested for their ability to retrieve stored seeds after retention intervals of 3, 7 and 24 h under 4 different occlusion treatments. Two predictions were tested: (a) with right eye occlusion during storage, birds should show better memory performance after 3 and 24 h than after 7 h and (b) memory should be more accurate when both eyes are used during storage than with monocular occlusion. The first prediction, which arises from the fact that memory is transferred from the left to the right eye system at about 7 h and is inaccessible during the transfer, was supported by the data. The second prediction, however, was not supported. Previous work has shown that in marsh tits the two eye systems remember preferentially different aspects of the stimulus: the left eye system responds to spatial position and the right eye system to object-specific cues. It is possible that the failure to find superior performance in binocular tests was because the task could be solved by either spatial or object-specific memory.

6.
We report the application of multiple time regression analysis with the in situ brain perfusion technique to measure the rates of passage between blood and brain for [¹⁴C]L-proline, [¹⁴C]L-alanine, and [¹⁴C]α-aminoisobutyric acid (AIB), and their rapidly reversible volumes, following perfusion of these amino acids from 10 to 60 seconds. We also report on their mechanism of transport. Proline diffused through the blood-brain barrier with a transfer coefficient (Kin) of 0.55 ± 0.15 × 10⁻⁴ ml/s/g and had no reversible compartment. AIB had a low Kin of 0.68 ± 0.14 × 10⁻⁴ ml/s/g and a significant reversible volume of 4.34 ± 0.51 × 10⁻³ ml/g in parietal cortex. L-alanine had the highest transfer coefficient, 3.11 ± 0.26 × 10⁻⁴ ml/s/g, and a reversible volume of 10.03 ± 0.93 × 10⁻³ ml/g in the same cerebral region. Postwash procedures, which remove any radiotracer in the vasculature, and capillary depletion were performed for alanine and AIB, as they had significant reversible compartments, to test the possibility of rapid efflux from the endothelial cells. Results obtained from the wash and capillary depletion procedures suggest that a rapid efflux could occur from endothelial cells after entry of alanine and AIB. Mechanisms of transport for L-alanine and AIB were investigated using amino acids (5 mM) as substrates and inhibitors of different amino acid transport systems. AIB transport was reduced by plasma and L-leucine and unchanged by sodium-free buffer, confirming its passage by the L1 system. L-alanine uptake was sodium-independent and not reduced by plasma. L-serine, L-cysteine, L-leucine and L-phenylalanine produced similar inhibition (66%) while L-alanine produced a lower inhibition (41%). L-arginine increased alanine uptake in cortex and thalamus. Adding L-serine to L-phenylalanine reduced the uptake only in cortex and hippocampus. These data suggest that L-alanine is transported by an L transport system different from the L1 system at the luminal membrane.
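The multiple time regression method reduces to fitting a straight line to the uptake data: Q(T)/Cp = Kin·T + Vi, where the slope is the transfer coefficient and the intercept the rapidly reversible volume. A minimal sketch with synthetic, alanine-like values (the function name and the noise-free data are illustrative, not the authors' code):

```python
def fit_uptake(times, ratios):
    """Ordinary least squares for Q(T)/Cp = Kin*T + Vi.
    Returns (Kin, Vi): slope = transfer coefficient, intercept = reversible volume."""
    n = len(times)
    mt = sum(times) / n
    mr = sum(ratios) / n
    kin = (sum((t - mt) * (r - mr) for t, r in zip(times, ratios))
           / sum((t - mt) ** 2 for t in times))
    vi = mr - kin * mt
    return kin, vi

# Synthetic perfusion data built from the alanine-like values above:
# Kin = 3.11e-4 ml/s/g and Vi = 1.003e-2 ml/g, perfusion times 10-60 s.
times = [10, 20, 30, 40, 50, 60]
ratios = [3.11e-4 * t + 1.003e-2 for t in times]
kin, vi = fit_uptake(times, ratios)
```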

7.
Program development environments have enabled graphics processing units (GPUs) to become an attractive high performance computing platform for the scientific community. A commonly posed problem in computational biology is protein database searching for functional similarities. The most accurate algorithm for sequence alignment is Smith-Waterman (SW). However, due to its computational complexity and rapidly increasing database sizes, the process becomes more and more time consuming, making cluster-based systems more desirable. Therefore, scalable and highly parallel methods are necessary to make SW a viable solution for life science researchers. In this paper we evaluate how SW fits onto the target GPU architecture by exploring ways to map the program architecture onto the processor architecture. We develop new techniques to reduce the memory footprint of the application while exploiting the memory hierarchy of the GPU. With this implementation, GSW, we overcome the on-chip memory size constraint, achieving a 23× speedup compared to a serial implementation. Results show that as the query length increases our speedup remains almost stable, indicating the solid scalability of our approach. Additionally, this is a first-of-its-kind implementation which runs purely on the GPU instead of a CPU-GPU integrated environment, making our design suitable for porting onto a cluster of GPUs.
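For reference, the Smith-Waterman recurrence that such GPU implementations parallelize can be written in a few lines of serial, unoptimized Python (linear gap penalty; scoring parameters are illustrative, not GSW's):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score via the Smith-Waterman recurrence:
    H[i][j] = max(0, diagonal + sub(a_i, b_j), H[i-1][j] + gap, H[i][j-1] + gap)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The GPU challenge described in the abstract comes from the anti-diagonal data dependency of `H` and from keeping the scoring matrix within on-chip memory; the recurrence itself stays the same.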

8.
裴絅文 (Pei Jiongwen), 陈学良 (Chen Xueliang). Acta Ecologica Sinica 《生态学报》, 1986, 6(2): 133–141
This paper describes the 3×7, 5×7 and 7×7 cun² dense-planting ecosystems of mid-season rice at Shuangliu in 1982. Assuming that each system is linear and time-invariant and that fertilizer and water conditions are uniform within plots, the crop forms a single-input, single-output linear time-invariant ecosystem with feedback. From the system's block diagram, general formulas for the transfer functions of the main links and of the overall system were derived, from which the actual transfer function of each planting-density system follows. Based on stability tests and sensitivity analysis, the 5×7 cun² system was selected, although the per-mu yields of the three densities differed little (at most 30 jin). Light energy utilization was low in all three systems, below 1.5%. Raising light energy utilization could produce substantially more dry matter and higher yields, which requires finding the optimal seedling number and leaf area index at the point of maximum dry matter. Ensuring sufficient seedlings and an appropriate leaf area depends on fertilization, so we extended the selected 5×7 cun² system to a larger trial area, raising pure nitrogen from 17 to 26 jin per mu, and obtained better results than before.

9.
Biopharmaceuticals such as antibodies are produced in cultivated mammalian cells, which must be monitored to comply with good manufacturing practice. We therefore developed a fully automated system comprising a specific exhaust gas analyzer, inline analytics and a corresponding algorithm to precisely determine the oxygen uptake rate, carbon dioxide evolution rate, carbon dioxide transfer rate, transfer quotient and respiratory quotient without interrupting the ongoing cultivation, in order to assess its reproducibility. The system was verified using chemical simulation experiments and was able to measure the respiratory activity of hybridoma cells and DG44 cells (derived from Chinese hamster ovary cells) with satisfactory results at a minimum viable cell density of ≈2.0 × 10⁵ cells ml⁻¹. The system was suitable for both batch and fed-batch cultivations in bubble-aerated and membrane-aerated reactors, with and without control of pH and dissolved oxygen.
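The respiratory quantities named above follow from a steady-state gas balance over the reactor. A simplified sketch (inert-gas correction, pressure and humidity terms omitted; function and variable names are illustrative, not the authors' algorithm):

```python
VM = 22.414  # molar volume of an ideal gas at STP, l/mol

def gas_rates(q_gas, v_liquid, y_o2_in, y_o2_out, y_co2_in, y_co2_out):
    """Simplified steady-state gas balance on inlet/outlet mole fractions.
    Returns OUR and CER in mol l^-1 h^-1 plus the respiratory quotient RQ = CER/OUR."""
    our = q_gas * (y_o2_in - y_o2_out) / (v_liquid * VM)
    cer = q_gas * (y_co2_out - y_co2_in) / (v_liquid * VM)
    return our, cer, cer / our

# Illustrative numbers: 60 l/h gas flow, 2 l culture, air in, slightly depleted air out
our, cer, rq = gas_rates(60.0, 2.0, 0.2095, 0.2075, 0.0004, 0.0024)
```

With equal O₂ consumed and CO₂ produced, RQ comes out at 1, the value expected for pure glucose oxidation.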

10.
To avoid the memory registration cost for small messages in MPI implementations over RDMA-enabled networks, message transfer protocols involve a copy to intermediate buffers at both sender and receiver. In this paper, we propose to eliminate the send-side copy when an application buffer is reused frequently. We show that it is more efficient to register the application buffer and use it for data transfer. The idea is examined for small message transfer protocols in MVAPICH2, including RDMA Write and Send/Receive based communications, one-sided communications and collectives. The proposed protocol adaptively falls back to the current protocol when the application does not frequently reuse its buffers. The performance results over InfiniBand indicate up to 14% improvement in single-message latency, close to 20% improvement for one-sided operations and up to 25% improvement for collectives. In addition, the communication time in MPI applications with high buffer reuse is improved by this technique.
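The adaptive protocol described above can be sketched as a registration cache keyed by buffer address. This is a toy model: the threshold value and the path names are assumptions for illustration, not MVAPICH2's actual internals.

```python
REUSE_THRESHOLD = 3  # assumed value; the real tunable threshold may differ

class SendPath:
    """Chooses the copy-based eager path vs. zero-copy from observed buffer reuse."""

    def __init__(self):
        self.reuse = {}         # buffer address -> times this buffer was sent
        self.registered = set() # buffers whose registration cost was already paid

    def send(self, buf_addr):
        count = self.reuse.get(buf_addr, 0) + 1
        self.reuse[buf_addr] = count
        if buf_addr in self.registered:
            return "zero-copy"
        if count >= REUSE_THRESHOLD:
            self.registered.add(buf_addr)   # pay registration cost once
            return "zero-copy"
        return "copy-to-bounce-buffer"      # fall back to the current protocol

path = SendPath()
decisions = [path.send(0x7F00) for _ in range(4)]
```

Because registration is amortized only over repeated sends of the same buffer, the cache naturally degrades to the copy protocol for applications that never reuse buffers, which is the fallback behavior the abstract describes.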

11.
We tested the effectiveness of an intensive, on average 17-session, adaptive and computerized working-memory training program for improving performance on untrained, paper and pencil working memory tasks, standardized school achievement tasks, and teacher ratings of classroom behavior. Third-grade children received either a computerized working memory training for about 30 minutes per session (n = 156) or participated in regular classroom activities (n = 126). Results indicated strong gains in the training task. Further, pretest and posttest transfer measures of working memory and school achievement, as well as teacher ratings, showed substantial correlations with training task performance, suggesting that the training task captured abilities that were relevant for the transfer tasks. However, effect sizes of training-specific transfer gains were very small and not consistent across tasks. These results raise questions about the benefits of intensive working-memory training programs within a regular school context.

12.
The antibiotic anisomycin, an inhibitor of protein synthesis in eukaryotic cells which blocks long-term memory in mice, is shown to interact with the cholinergic system by reversibly inhibiting acetylcholinesterase. The inhibition is competitive, with an inhibition constant Ki of 5.0 × 10⁻³ for human brain acetylcholinesterase and 1.7 × 10⁻³ for acetylcholinesterase of bovine erythrocytes. The anisomycin effect on acetylcholinesterase is compared with the inhibition of the enzyme by puromycin and cycloheximide. The significance of the cholinergic effect of anisomycin, in addition to its inhibitory effect on protein synthesis, for the interpretation of memory experiments is discussed.
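Competitive inhibition of the kind reported here raises the apparent Km without changing Vmax, following v = Vmax·S / (Km·(1 + I/Ki) + S). A minimal Michaelis-Menten sketch (all parameter values are illustrative except the use of a Ki like the one reported):

```python
def competitive_rate(s, vmax, km, i=0.0, ki=float("inf")):
    """Michaelis-Menten rate with a competitive inhibitor:
    v = Vmax*S / (Km*(1 + I/Ki) + S); I = 0 recovers the uninhibited rate."""
    return vmax * s / (km * (1.0 + i / ki) + s)

# At S = Km the uninhibited rate is Vmax/2; with I = Ki the apparent Km
# doubles and the rate drops to Vmax/3 (Vmax itself is untouched).
v_uninhibited = competitive_rate(s=1.0, vmax=1.0, km=1.0)
v_inhibited = competitive_rate(s=1.0, vmax=1.0, km=1.0, i=5.0e-3, ki=5.0e-3)
```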

13.
The interaction of monovalent Fab fragments of NC10, an antiviral neuraminidase antibody, and the anti-idiotype antibody 3-2G12 has been used as a model system to demonstrate experimentally the influence of non-ideal binding effects on BIAcore™ binding data. Because the association rate constant for these two molecules was found to be relatively high (about 5 × 10⁵ M⁻¹ s⁻¹), mass transfer was recognised as a potential source of error in the analysis of the interaction kinetics. By manipulating the flow rate and the surface density of the immobilised ligand, however, the magnitude of this error was minimised. In addition, the application of site-specific immobilisation procedures was found to considerably improve the correlation of experimental binding data with the ideal 1:1 kinetic model, such that the discrepancy between experimental and fitted curves was within the noise range of the instrument. Experiments performed to measure the equilibrium constant (KD) in solution resulted in a value of similar magnitude to those obtained from the ratio of the kinetic rate constants, even those measured with a heterogeneous ligand or with a significant mass transfer component. For this system, the experimental complexities introduced by covalent immobilisation did not lead to large errors in the KD values obtained using the BIAcore.
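The solution-phase comparison rests on the identity KD = koff/kon. A one-line sketch using a kon of the magnitude quoted above together with an assumed koff (the koff value is hypothetical, chosen only to land in a typical antibody range):

```python
def dissociation_constant(k_on, k_off):
    """Equilibrium dissociation constant from the kinetic rates: KD = koff/kon."""
    return k_off / k_on

# kon ~ 5e5 M^-1 s^-1 as in the abstract; koff = 5e-4 s^-1 is an assumption.
kd = dissociation_constant(5e5, 5e-4)   # -> 1e-9 M, i.e. 1 nM
```

This is why errors in either rate constant propagate directly into KD, and why agreement between the kinetic ratio and the solution-phase measurement is a meaningful consistency check.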

14.

Background

Concerns about worsening memory (“memory concerns”; MC) and impairment in memory performance are both predictors of Alzheimer's dementia (AD). The relationship of the two in dementia prediction at the pre-dementia disease stage, however, is not well explored. A refined understanding of the contribution of both MC and memory performance to dementia prediction is crucial for defining at-risk populations. We examined the risk of incident AD by MC and memory performance in patients with mild cognitive impairment (MCI).

Methods

We analyzed data of 417 MCI patients from a longitudinal multicenter observational study. Patients were classified based on presence (n = 305) vs. absence (n = 112) of MC. Risk of incident AD was estimated with Cox Proportional-Hazards regression models.

Results

Risk of incident AD was increased by MC (HR = 2.55, 95%CI: 1.33–4.89), lower memory performance (HR = 0.63, 95%CI: 0.56–0.71) and ApoE4-genotype (HR = 1.89, 95%CI: 1.18–3.02). An interaction effect between MC and memory performance was observed. The predictive power of MC was greatest for patients with very mild memory impairment and decreased with increasing memory impairment.

Conclusions

Our data suggest that the power of MC as a predictor of future dementia at the MCI stage varies with the patients' level of cognitive impairment. While MC are predictive at early-stage MCI, their predictive value at more advanced stages of MCI is reduced. This suggests that loss of insight related to AD may occur at the late stage of MCI.
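The reported effects can be read off a Cox linear predictor of the form HR = exp(b₁·MC + b₂·mem + b₃·MC·mem). A sketch using the published main-effect hazard ratios and an assumed interaction coefficient (`b_int` is hypothetical; the study reports only that an interaction exists, not its size):

```python
import math

def hazard_ratio(beta_mc, beta_mem, beta_int, mc, mem_z):
    """Relative hazard from a Cox linear predictor with an MC x memory interaction."""
    return math.exp(beta_mc * mc + beta_mem * mem_z + beta_int * mc * mem_z)

# Main-effect coefficients recovered from the reported hazard ratios:
b_mc = math.log(2.55)    # memory concerns present vs. absent
b_mem = math.log(0.63)   # per unit of better memory performance
b_int = -0.20            # hypothetical interaction coefficient

# At average memory performance (mem_z = 0) the MC effect recovers HR = 2.55.
hr_mc_only = hazard_ratio(b_mc, b_mem, b_int, 1, 0.0)
```

With a nonzero `b_int`, the hazard ratio attributable to MC changes with the memory score, which is exactly the pattern of the reported interaction: MC carries most weight at mild impairment.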

15.
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. The measure of information transfer in particular, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires observing multiple realizations of these processes in order to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes so that observations can be pooled over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation on a graphics processing unit to handle the computationally heaviest aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields concerned with the analysis of information transfer in complex biological, social, and artificial systems.
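The core of the ensemble approach is to pool observations across trials rather than across time when estimating the probabilities entering the transfer entropy TE(X→Y) = Σ p(yₜ₊₁, yₜ, xₜ) log[p(yₜ₊₁|yₜ, xₜ) / p(yₜ₊₁|yₜ)]. A minimal plug-in estimator for discrete data with history length 1 (a sketch only; the published estimator is far more sophisticated, using nearest-neighbor techniques for continuous data):

```python
from collections import Counter
from math import log2

def transfer_entropy(trials_x, trials_y):
    """Plug-in TE(X -> Y) estimate with history length 1, pooling
    (y_next, y_now, x_now) observations across an ensemble of trials."""
    joint = Counter()
    for xs, ys in zip(trials_x, trials_y):
        for t in range(len(ys) - 1):
            joint[(ys[t + 1], ys[t], xs[t])] += 1
    total = sum(joint.values())
    # Marginal counts needed for the two conditional probabilities
    yy, yx, y_ = Counter(), Counter(), Counter()
    for (yn, y, x), c in joint.items():
        yy[(yn, y)] += c
        yx[(y, x)] += c
        y_[y] += c
    te = 0.0
    for (yn, y, x), c in joint.items():
        p_cond_full = c / yx[(y, x)]        # p(y_next | y_now, x_now)
        p_cond_past = yy[(yn, y)] / y_[y]   # p(y_next | y_now)
        te += (c / total) * log2(p_cond_full / p_cond_past)
    return te

# Two "trials" in which Y copies X with a one-step lag -> positive TE(X -> Y)
xs = [[0, 1, 0, 0, 1, 1, 0, 1], [1, 0, 1, 1, 0, 0, 1, 0]]
ys = [[0] + x[:-1] for x in xs]
te_driven = transfer_entropy(xs, ys)
te_null = transfer_entropy(xs, [[0] * 8, [0] * 8])  # constant Y: zero TE
```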

16.
Using resources shared within a social group—either in a cooperative or a competitive way—requires keeping track of one's own and others' actions, which, in turn, requires well-developed short-term memory. Although short-term memory has been tested in social mammal species, little is known about this capacity in highly social birds, such as ravens. We compared ravens (Corvus corax) with humans in spatial tasks based on caching, which required short-term memory of one's own and of others' actions. Human short-term memory has been the most extensively tested of all social mammal species, hence providing an informative benchmark for the ravens. A recent study on another corvid species (Corvus corone) suggests that their capacity is similar to that of humans, but short-term memory skills have, to date, not been compared in a social setting. We used spatial setups based on caches of foods or objects, divided into individual and social conditions with two different spatial arrangements of caches (in a row or in a 3 × 3 matrix). In each trial, a set of three to nine caches was presented to an individual that was thereafter allowed to retrieve all items. Humans performed better on average across trials, but their performance dropped when they had to keep track of a partner's actions. This differed in ravens, as keeping track of such actions did not impair their performance. However, both humans and ravens made more memory-related mistakes in the social than in the individual conditions. Therefore, whereas both the ravens' and the humans' memory suffered in the social conditions, the ravens seemed to deal better with the demands of these conditions. The social conditions had a competitive element, and one might speculate that ravens' memory strategies are more attuned to such situations, in particular in caching contexts, than is the case for humans.

17.
OLAP (On-Line Analytical Processing) is an approach to efficiently evaluating multidimensional data for business intelligence applications. OLAP contributes to business decision-making by identifying, extracting, and analyzing multidimensional data. The fundamental structure of OLAP is the data cube, which enables users to interactively explore the distinct data dimensions. Processing depends on the complexity of queries, the dimensionality, and the growing size of the data cube. As data volumes and the demands of business users keep increasing, ever higher processing speed is needed, since faster processing means faster decisions and more profit to industry. In this paper, we propose an Adaptive Hybrid OLAP Architecture that takes advantage of heterogeneous systems with GPUs and CPUs and leverages their different memory subsystem characteristics to minimize response time. Our approach (a) exploits both types of hardware rather than using the CPU only as a frontend for the GPU; (b) uses two different data formats (multidimensional cube and relational cube) to match the GPU and CPU memory access patterns, and diverts queries adaptively to the best resource for the problem at hand; (c) exploits the data locality of multidimensional OLAP on NUMA multicore systems through intelligent thread placement; and (d) guides its adaptation and choices by an architectural model that captures the memory access patterns and the underlying data characteristics. Results show a roughly fourfold increase in performance over the best known related approach. There is also an important economic factor: the proposed hybrid system costs only 10% more than the same system without a GPU, and with this small extra cost the added GPU nearly doubles query processing speed.
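A data cube in the sense used above materializes every group-by of the chosen dimensions, with rolled-up dimensions conventionally marked `*`. A minimal CPU-only sketch (illustrative names and data; real OLAP engines avoid this exhaustive 2^d materialization):

```python
from collections import defaultdict
from itertools import product

def build_cube(rows, dims, measure):
    """Aggregate a fact table into all 2^len(dims) group-bys of a data cube.
    A dimension rolled up in a cell key is replaced by '*'."""
    cube = defaultdict(float)
    for row in rows:
        for mask in product([True, False], repeat=len(dims)):
            key = tuple(row[d] if keep else "*" for d, keep in zip(dims, mask))
            cube[key] += row[measure]
    return dict(cube)

sales = [
    {"region": "EU", "year": 2023, "amount": 10.0},
    {"region": "EU", "year": 2024, "amount": 5.0},
    {"region": "US", "year": 2024, "amount": 7.0},
]
cube = build_cube(sales, ["region", "year"], "amount")
```

A lookup such as `cube[("EU", "*")]` then answers "total EU sales across all years" without rescanning the fact table, which is the interactivity the cube structure buys.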

18.
The current study examined cardiovascular reactivity and recovery during memory testing in a sample of 28 younger and 28 older adults. Heart rate (HR) levels were measured before, during, and after a memory test (word list recall). Contrary to prediction, older adults did not have a blunted cardiovascular response to memory tasks compared to younger adults. Word list recall performance was predicted by both Age and an Age × HR recovery interaction. As expected, younger adults performed better on the word list task than older adults. In addition, older adults with better posttest HR recovery performed significantly better than older adults with poor posttest HR recovery, whereas HR recovery differences in younger adults were inconsequential. These relationships were not affected by subjective appraisals of anxiety and task difficulty. Overall, cardiac dysregulation, seen here as low HR recovery, represents an important, potentially modifiable, factor in memory performance in older adults. In addition to being beneficial to overall health, interventions designed to help older adults regulate their HR responses may help offset certain memory declines.

19.
A major barrier to broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. This necessity results from the fact that the information relevant for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individually optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on evaluating the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system designed to detect a P300 event-related potential, and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimate of the reduced system's performance, we transfer sensor constellations found in one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays.
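The comparison against random constellations can be caricatured with an additive surrogate score, for which performance-driven selection reduces to keeping the top-k sensors (all names and weights are illustrative; real selection evaluates a classifier on each subset, and scores are not additive):

```python
import random

def performance(subset, weights):
    """Assumed additive surrogate for classifier accuracy on a sensor subset."""
    return sum(weights[s] for s in subset)

def select_sensors(weights, budget):
    """Performance-driven selection: keep the `budget` highest-scoring sensors."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    return set(ranked[:budget])

def random_constellation(sensors, budget, rng):
    """Baseline: a randomly drawn sensor subset of the same size."""
    return set(rng.sample(sorted(sensors), budget))

weights = {f"EEG{i}": w for i, w in enumerate([0.9, 0.1, 0.7, 0.2, 0.8, 0.3])}
chosen = select_sensors(weights, 3)
baseline = random_constellation(weights, 3, random.Random(0))
```

Under an additive score the selected subset can never lose to a random one; the interesting empirical question in the paper is precisely how large that margin remains once real, non-additive classifier performance and cross-session transfer enter.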

20.
IRBM, 2022, 43(6): 621–627
Objective: Steady-state visual evoked potential based brain-computer interfaces (SSVEP-based BCIs) have been shown to be a promising technology due to their short response time and ease of use. SSVEP-based BCIs use brain responses to a flickering visual stimulus as an input command to an external application or device, and can be influenced by stimulus properties, signal recording, and signal processing. We aim to investigate system performance while varying the spatial proximity of the stimuli (a stimulus property).
Material and methods: We performed a comparative analysis of two visual interface designs (named cross and square) for an SSVEP-based BCI. Power spectral density (PSD) was used for feature extraction and a Support Vector Machine (SVM) for classification. We also analyzed the effects of five flickering frequencies (6.67, 8.57, 10, 12, and 15 Hz) between and within interfaces.
Results: We found higher accuracy rates for the flickering frequencies of 10, 12, and 15 Hz. The 10 Hz stimulus presented the highest SSVEP amplitude response for both interfaces. The system presented the best performance (highest classification accuracy and information transfer rate) using the cross interface (lower visual angle).
Conclusion: Our findings suggest that the system performs best in the spatial proximity range from 4° to 13° of visual angle. In addition, we conclude that as the stimulus spatial proximity increases, the interference from other stimuli reduces, and the SSVEP amplitude response decreases, which reduces system accuracy. The inter-stimulus distance is a visual interface parameter that must be chosen carefully to increase the efficiency of an SSVEP-based BCI.
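The PSD-based feature extraction amounts to measuring signal power at each candidate flicker frequency. A stdlib-only sketch on a synthetic 10 Hz "response" (the argmax decision stands in for the SVM stage, and the sampling rate and duration are assumptions):

```python
import math

CANDIDATES = [6.67, 8.57, 10.0, 12.0, 15.0]  # flicker frequencies (Hz)

def band_power(signal, fs, freq):
    """Power of `signal` at `freq` via projection onto sin/cos: a single
    DFT-like bin, standing in for a full PSD estimate."""
    n = len(signal)
    c = sum(v * math.cos(2 * math.pi * freq * t / fs) for t, v in enumerate(signal))
    s = sum(v * math.sin(2 * math.pi * freq * t / fs) for t, v in enumerate(signal))
    return (c * c + s * s) / n

def detect_target(signal, fs):
    """Pick the candidate flicker frequency with maximal power
    (an argmax stand-in for the SVM classification stage)."""
    return max(CANDIDATES, key=lambda f: band_power(signal, fs, f))

fs = 256                                        # assumed sampling rate, Hz
t_axis = [t / fs for t in range(2 * fs)]        # 2 s of data
eeg = [math.sin(2 * math.pi * 10.0 * t) for t in t_axis]  # synthetic 10 Hz SSVEP
detected = detect_target(eeg, fs)
```

The abstract's observation that closely spaced stimuli lower accuracy corresponds here to energy leaking into neighboring candidate bins, shrinking the margin the classifier relies on.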
