Similar Articles
 (20 results found)
1.
Image compression is an application of data compression on digital images. Several lossy/lossless transform coding techniques are used for image compression. Discrete cosine transform (DCT) is one such widely used technique. A variation of DCT, known as warped discrete cosine transform (WDCT), is used for 2-D image compression and it is shown to perform better than the DCT at high bit-rates. We extend this concept and develop the 3-D WDCT, a transform that has not been previously investigated. We outline some of its important properties, which make it especially suitable for image compression. We then propose a complete image coding scheme for volumetric data sets based on the 3-D WDCT scheme. It is shown that the 3-D WDCT-based compression scheme performs better than a similar 3-D DCT scheme for volumetric data sets at high bit-rates.
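The warped DCT itself uses frequency-warped filter structures; as a rough illustration of the plain 3-D block-transform coding it generalizes, the sketch below applies an ordinary 3-D DCT to an 8×8×8 block and keeps only the largest coefficients. The block size, keep ratio and data are illustrative assumptions, not the paper's scheme.

```python
# Minimal sketch: a plain 3-D block-DCT coding baseline (not the warped variant).
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, keep_ratio=0.05):
    """3-D DCT of one block, zeroing all but the largest coefficients."""
    coeffs = dctn(block, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
    coeffs[np.abs(coeffs) < thresh] = 0.0          # crude stand-in for quantization/coding
    return coeffs

def decode_block(coeffs):
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
volume = rng.normal(size=(8, 8, 8))                # stand-in for one volumetric block
coeffs = encode_block(volume)
rec = decode_block(coeffs)
print("kept coefficients:", np.count_nonzero(coeffs), "of", coeffs.size)
print("RMS reconstruction error:", float(np.sqrt(np.mean((volume - rec) ** 2))))
```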

2.
A new lossless compression method using context modeling for ultrasound radio-frequency (RF) data is presented. In the proposed compression method, the combination of context modeling and entropy coding is used to effectively lower the data transfer rates of modern software-based medical ultrasound imaging systems. In phantom and in vivo data experiments, the proposed lossless compression method achieves an average compression ratio of 0.45, compared with the Burg and JPEG-LS methods (0.52 and 0.55, respectively). This result indicates that the proposed compression method is capable of transferring 64-channel 40-MHz ultrasound RF data over a 16-lane PCI-Express 2.0 bus for software beamforming in real time.
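The paper's context model is specific to RF data, but the underlying idea — predict each sample so that only small residuals need entropy coding — can be sketched generically. The first-order predictor and entropy estimate below are illustrative assumptions, not the authors' method.

```python
# Sketch: prediction lowers the entropy that a lossless entropy coder must spend bits on.
import numpy as np

def empirical_entropy(x):
    """Shannon entropy (bits/sample) of an integer sequence."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
t = np.arange(4096)
rf = np.round(1000 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 5, t.size)).astype(int)

residual = np.diff(rf, prepend=rf[0])      # first-order (previous-sample) predictor
print("raw entropy      :", round(empirical_entropy(rf), 2), "bits/sample")
print("residual entropy :", round(empirical_entropy(residual), 2), "bits/sample")
```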

3.
A hybrid two-dimensional ECG data compression method based on wavelet transform
A new hybrid two-dimensional electrocardiogram (ECG) data compression method based on wavelet transform is proposed. Exploiting two kinds of correlation in ECG data, the method first converts the one-dimensional ECG signal into a two-dimensional signal sequence. A wavelet transform is then applied to the two-dimensional sequence, and the transformed coefficients are compressed with an improved coding scheme: the set partitioning in hierarchical trees (SPIHT) algorithm and vector quantization (VQ) are first modified according to the characteristics of the individual coefficient subbands and the similarity between subbands, and the wavelet coefficients are then coded with a hybrid of the modified SPIHT and VQ algorithms. Comparative compression experiments on arrhythmia data from the MIT/BIH database were carried out against representative wavelet-based compression algorithms and other two-dimensional ECG compression algorithms. The results show that the proposed algorithm is applicable to ECG signals with various waveform characteristics and achieves a high compression ratio while preserving compression quality.
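The first two steps described above (1-D to 2-D conversion, then a 2-D wavelet transform) can be illustrated independently of the SPIHT/VQ coding stages: the sketch below stacks fixed-length beat segments into a matrix and decomposes it with PyWavelets. Segment length, wavelet and threshold are assumptions.

```python
# Sketch: turn a 1-D ECG into a 2-D beat matrix and apply a 2-D wavelet transform.
import numpy as np
import pywt

def ecg_to_2d(signal, beat_len=256):
    """Cut the 1-D signal into consecutive rows of beat_len samples (simplified alignment)."""
    n_rows = len(signal) // beat_len
    return np.asarray(signal[: n_rows * beat_len]).reshape(n_rows, beat_len)

rng = np.random.default_rng(2)
ecg = np.tile(np.sin(np.linspace(0, 2 * np.pi, 256)), 32) + 0.05 * rng.normal(size=256 * 32)

img = ecg_to_2d(ecg)                              # inter-beat correlation appears column-wise
coeffs = pywt.wavedec2(img, wavelet="db4", level=3)
arr, _slices = pywt.coeffs_to_array(coeffs)
arr[np.abs(arr) < 0.1] = 0                        # threshold small coefficients (stand-in for SPIHT/VQ)
print("nonzero coefficients kept:", np.count_nonzero(arr), "of", arr.size)
```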

4.
Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al. Frontiers in Computational Neuroscience, 9, 61 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous spike-rate approximates DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals.  相似文献   
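The optimal coder and the derived instantaneous rate coder are defined in the cited papers; as a loose, simplified illustration of spike timing governed by a threshold on reconstruction error, the sketch below emits a spike whenever the accumulated error crosses a fixed threshold (a send-on-delta style coder). This is an assumption for illustration only, not the authors' model.

```python
# Sketch: a crude threshold-crossing spike coder - spend a spike only when the error is large.
import numpy as np

def threshold_spike_coder(signal, theta=0.5, decay=0.99):
    estimate, spikes = 0.0, []
    for t, s in enumerate(signal):
        estimate *= decay                      # reconstruction decays between spikes
        if s - estimate >= theta:              # error exceeds threshold -> emit a spike
            spikes.append(t)
            estimate = s
    return spikes

t = np.linspace(0, 1, 1000)
stim = 2.0 * np.sin(2 * np.pi * 3 * t) + 2.0
spike_times = threshold_spike_coder(stim, theta=0.4)
print("spikes:", len(spike_times), "mean rate (spikes/sample):", len(spike_times) / len(t))
```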

5.
Transmission of long-duration EEG signals without loss of information is essential for telemedicine-based applications. In this work, a lossless compression scheme for EEG signals based on neural network predictors using the concept of correlation dimension (CD) is proposed. EEG signals, which can be considered irregular time series of chaotic processes, can be characterized by the non-linear dynamic parameter CD, a measure of the correlation among the EEG samples. The EEG samples are first divided into segments of 1 s duration and, for each segment, the value of CD is calculated. Blocks of EEG samples are then constructed such that each block contains segments with similar CD values. By arranging the EEG samples in this fashion, the accuracy of the predictor is improved, as it makes use of highly correlated samples. As a result, the magnitude of the prediction error decreases, leading to fewer bits for transmission. Experiments are conducted using EEG signals recorded under different physiological conditions. Different neural network predictors as well as classical predictors are considered. Experimental results show that the proposed CD-based preprocessing scheme improves the compression performance of the predictors significantly.
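Correlation dimension is commonly estimated with a Grassberger–Procaccia correlation sum on a delay-embedded segment; the sketch below shows one such estimate, which could then be used to group segments with similar CD before prediction. Embedding dimension, delay and radii are illustrative assumptions.

```python
# Sketch: Grassberger-Procaccia style correlation-dimension estimate for one EEG segment.
import numpy as np

def correlation_dimension(segment, m=5, tau=2, radii=(0.1, 0.2, 0.4, 0.8)):
    # Delay embedding: rows are m-dimensional state vectors.
    n = len(segment) - (m - 1) * tau
    emb = np.stack([segment[i * tau : i * tau + n] for i in range(m)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                    # pairwise distances
    c = np.array([(d < r).mean() for r in radii])     # correlation sums C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c + 1e-12), 1)  # CD ~ slope of log C(r) vs log r
    return float(slope)

rng = np.random.default_rng(3)
eeg_segment = np.sin(np.linspace(0, 30, 256)) + 0.3 * rng.normal(size=256)
print("estimated CD:", round(correlation_dimension(eeg_segment), 2))
```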

6.
In High Efficiency Video Coding (HEVC), the coding tree contributes to excellent compression performance but also brings extremely high computational complexity. This paper presents work on improving the coding tree to further reduce encoding time, proposing a novel low-complexity coding tree mechanism for fast HEVC coding unit (CU) encoding. Firstly, the paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed to address the probabilistic model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding, and improves coding performance under a variety of application conditions.
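The paper's probability model and update rule are specific to HEVC; a generic sketch of the idea — look up the empirical probability that a CU at a given depth and QP is split, skip the full rate-distortion check when that probability is extreme, and keep updating the table so it tracks content change — is shown below. The keys, thresholds and learning rate are assumptions.

```python
# Sketch: a tiny online split-probability table for fast CU decisions (illustrative only).
from collections import defaultdict

class SplitModel:
    def __init__(self, lr=0.05):
        self.p = defaultdict(lambda: 0.5)   # P(split | depth, QP bucket), starts uninformative
        self.lr = lr

    def key(self, depth, qp):
        return (depth, qp // 6)             # coarse QP bucket

    def decide(self, depth, qp, lo=0.1, hi=0.9):
        """Return 'skip-split', 'force-split' or 'evaluate' (full RD check)."""
        p = self.p[self.key(depth, qp)]
        if p < lo:
            return "skip-split"
        if p > hi:
            return "force-split"
        return "evaluate"

    def update(self, depth, qp, was_split):
        """Exponential update keeps the table tracking content change."""
        k = self.key(depth, qp)
        self.p[k] = (1 - self.lr) * self.p[k] + self.lr * float(was_split)

model = SplitModel()
for _ in range(200):
    model.update(depth=1, qp=32, was_split=True)    # pretend this content splits often
print(model.decide(depth=1, qp=32))                 # -> 'force-split'
```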

7.
Jinbo Xu, Sheng Wang. Proteins, 2019, 87(12): 1069-1081
This paper reports the CASP13 results of distance-based contact prediction, threading, and folding methods implemented in three RaptorX servers, which are built upon the powerful deep convolutional residual neural network (ResNet) method initiated by us for contact prediction in CASP12. On the 32 CASP13 FM (free-modeling) targets with a median multiple sequence alignment (MSA) depth of 36, RaptorX yielded the best contact prediction among 46 groups and almost the best 3D structure modeling among all server groups without time-consuming conformation sampling. In particular, RaptorX achieved top L/5, L/2, and L long-range contact precision of 70%, 58%, and 45%, respectively, and predicted correct folds (TMscore > 0.5) for 18 of 32 targets. Further, RaptorX predicted correct folds for all FM targets with >300 residues (T0950-D1, T0969-D1, and T1000-D2) and generated the best 3D models for T0950-D1 and T0969-D1 among all groups. This CASP13 test confirms our previous findings: (a) predicted distance is more useful than contacts for both template-based and free modeling; and (b) structure modeling may be improved by integrating template and coevolutionary information via deep learning. This paper will discuss progress we have made since CASP12, the strength and weakness of our methods, and why deep learning performed much better in CASP13.
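Top-L/k long-range precision is the standard CASP contact metric quoted above; the sketch below computes it from a predicted contact-probability matrix and a true contact map, using the usual |i − j| ≥ 24 definition of long range. The toy data and variable names are illustrative.

```python
# Sketch: top-L/k long-range contact precision (CASP-style evaluation).
import numpy as np

def topk_longrange_precision(pred_prob, true_contacts, k=5, min_sep=24):
    L = pred_prob.shape[0]
    i, j = np.triu_indices(L, k=min_sep)             # long-range residue pairs only
    order = np.argsort(pred_prob[i, j])[::-1]        # most confident pairs first
    top = order[: max(1, L // k)]
    return float(true_contacts[i[top], j[top]].mean())

rng = np.random.default_rng(4)
L = 120
true_map = (rng.random((L, L)) < 0.02).astype(int)
true_map = np.triu(true_map) + np.triu(true_map, 1).T        # symmetric toy contact map
pred = 0.7 * true_map + 0.3 * rng.random((L, L))             # noisy "prediction"
print("top-L/5 long-range precision:", topk_longrange_precision(pred, true_map))
```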

8.
Objective: EEG signals contain many kinds of noise and artifacts and have a low signal-to-noise ratio, so complex preprocessing is normally required before feature extraction, which seriously slows down sleep staging. This paper therefore proposes a sleep EEG staging method based on the first principal component of the singular values; the method is robust to noise, dispenses with preprocessing, reduces computation and improves the efficiency of sleep staging. Methods: Singular-system analysis was applied to sleep EEG without any preprocessing; the singular spectrum curve was studied, the first singular-value principal component was extracted, and its variation with sleep state was examined. A support vector machine was then used to stage sleep from this feature. Results: The first singular-value principal component not only characterizes the main body of the EEG signal but also suppresses noise and reduces dimensionality. Its value increases gradually as sleep deepens, while in the REM stage it lies between the S1 and S2 levels. Tested on EEG data from five subjects at the same electrode position in the MIT-BIH sleep database (a single EEG channel only), the sleep staging accuracy reached 86.4%. Conclusion: Without preprocessing of the EEG, the first singular-value principal component effectively characterizes sleep state and is a valid basis for sleep staging. Using only one EEG channel, the proposed method yields satisfactory staging results; it has strong classification performance and noise robustness, requires no complex preprocessing, is computationally light and simple, and greatly improves the efficiency of sleep staging.
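A common way to obtain such a first singular-value feature is to delay-embed the raw single-channel segment into a trajectory (Hankel) matrix, as in singular spectrum analysis, and take the normalized leading singular value; the sketch below does that and would then feed the feature to a classifier such as an SVM. Window length, normalization and the synthetic signals are assumptions.

```python
# Sketch: first singular value of a delay-embedded (Hankel) EEG segment as a sleep-stage feature.
import numpy as np

def first_singular_component(segment, window=30):
    n = len(segment) - window + 1
    hankel = np.stack([segment[i : i + window] for i in range(n)])   # trajectory matrix
    s = np.linalg.svd(hankel, compute_uv=False)
    return float(s[0] / s.sum())             # normalized leading singular value

rng = np.random.default_rng(5)
light_sleep = np.sin(np.linspace(0, 40, 3000)) + 0.8 * rng.normal(size=3000)
deep_sleep = 2.0 * np.sin(np.linspace(0, 15, 3000)) + 0.3 * rng.normal(size=3000)
print("feature (light):", round(first_singular_component(light_sleep), 3))
print("feature (deep) :", round(first_singular_component(deep_sleep), 3))
# The per-segment feature could then be passed to e.g. sklearn.svm.SVC for staging.
```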

9.
We investigated the effects of forest fragmentation on golden-headed lion tamarins (Leontopithecus chrysomelas) by qualitatively and quantitatively characterizing the landscape throughout the species range, conducting surveys, and exploring predictive models of presence and absence. We identified 784 forest patches that varied in size, shape, core area, habitat composition, elevation, and distance to neighboring patches and towns. We conducted 284 interviews with local residents and 133 playback experiments in 98 patches. Results indicated a reduction in the western portions of the former species range. We tested whether L. chrysomelas presence or absence was related to the aforementioned fragmentation indices using Monte Carlo logistic regression techniques. The analysis yielded a majority of iterations with a one-term final model of which Core Area Index (percent of total area that is core) was the only significant type. Model concordance ranged between 65 and 90 percent. Area was highlighted for its potential predictive ability. Although final models for area lacked significance, their failure to reach significance was marginal and we discuss potential confounding factors weakening the term's predictive ability. We conclude that lower Core Area Index scores are useful indicators of forest patches at risk for not supporting L. chrysomelas. Taken together, our analyses of the landscape, survey results, and logistic regression modeling indicated that the L. chrysomelas metapopulation is facing substantial threat. The limited vagility of lion tamarins in nonforest matrix may lead to increasingly smaller and inbred populations subject to significant impact from edge effects and small population size. Local extinction is imminent in many forest patches in the L. chrysomelas range.

10.
This paper describes the application of text compression methods to machine-readable files of nucleic acid and protein sequence data. Two main methods are used to reduce the storage requirements of such files, these being n-gram coding and run-length coding. A Pascal program combining both of these techniques resulted in a compression figure of 74.6% for the GenBank database, and a program that used only n-gram coding gave a compression figure of 42.8% for the Protein Identification Resource database.
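As an illustration of the two techniques named above, the sketch below encodes each 2-gram of nucleotides as a single byte (a simple fixed n-gram code) and run-length-encodes repeated characters; it is a toy reconstruction, not the original Pascal program.

```python
# Sketch: n-gram coding (pack each base pair into one byte) and run-length coding for sequence text.
from itertools import groupby

BASE_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}

def ngram_pack(seq):
    """Encode each 2-gram (pair of bases) as one byte - a simple fixed n-gram code (2x reduction)."""
    if len(seq) % 2:
        seq += "A"                                  # pad (toy convention)
    return bytes((BASE_BITS[a] << 4) | BASE_BITS[b] for a, b in zip(seq[::2], seq[1::2]))

def run_length(text):
    """Run-length code: list of (character, run length)."""
    return [(ch, len(list(run))) for ch, run in groupby(text)]

dna = "ACGTACGTAAAAAAGGGGTT"
print("packed bytes :", len(ngram_pack(dna)), "vs original", len(dna))
print("run-length   :", run_length(dna))
```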

11.

Background

The study investigated the residual impact of eyeblinks on the electroencephalogram (EEG) after application of different correction procedures, namely a regression method (eye movement correction procedure, EMCP) and a component-based method (Independent Component Analysis, ICA).

Methodology/Principal Findings

Real and simulated data were investigated with respect to blink-related potentials and the residual mutual information of uncorrected vertical electrooculogram (EOG) and corrected EEG, which is a measure of residual EOG contribution to the EEG. The results reveal an occipital positivity that peaks at about 250 ms after the maximum blink excursion following application of either correction procedure. This positivity was not observable in the simulated data. Mutual information of vertical EOG and EEG depended on the applied regression procedure. In addition, different correction results were obtained for real and simulated data. ICA yielded almost perfect correction in all conditions. However, under certain conditions EMCP yielded comparable results to the ICA approach.
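Residual mutual information between the uncorrected EOG and the corrected EEG is the leftover-artifact measure used above; a simple histogram-based estimate is sketched below. The bin count and the synthetic signals are illustrative assumptions.

```python
# Sketch: histogram-based mutual information between an EOG channel and a corrected EEG channel.
import numpy as np

def mutual_information(x, y, bins=32):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(6)
eog = rng.normal(size=5000)
eeg_poorly_corrected = 0.4 * eog + rng.normal(size=5000)   # residual blink contribution
eeg_well_corrected = rng.normal(size=5000)                 # essentially independent of the EOG
print("MI (poor correction):", round(mutual_information(eog, eeg_poorly_corrected), 3))
print("MI (good correction):", round(mutual_information(eog, eeg_well_corrected), 3))
```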

Conclusion

In conclusion, for EMCP the quality of correction depended on the EMCP variant used and the structure of the data, whereas ICA always yielded almost perfect correction. However, the disadvantages of ICA are its considerably more complex data processing and its requirement for a sufficient amount of data.

12.
Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0–6.1 %) than that of componentwise boosting.
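RR-BLUP amounts to ridge regression of the stage-one adjusted means on the marker matrix; the sketch below shows that core computation with numpy. The shrinkage parameter and toy data are assumptions, and the rotation (orthogonalization) step is not reproduced.

```python
# Sketch: ridge-regression BLUP (RR-BLUP) of adjusted means on a marker matrix.
import numpy as np

def rr_blup(Z, y, lam=1.0):
    """Marker effects beta = (Z'Z + lam*I)^-1 Z'y; genomic breeding values are Z @ beta."""
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

rng = np.random.default_rng(7)
n_lines, n_markers = 200, 500
Z = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # 0/1/2 marker codes
true_effects = rng.normal(0, 0.1, n_markers)
y = Z @ true_effects + rng.normal(0, 1.0, n_lines)                # toy adjusted means + noise

Zc = Z - Z.mean(axis=0)
beta_hat = rr_blup(Zc, y - y.mean(), lam=n_markers)
gebv = Zc @ beta_hat
print("correlation(true breeding value, predicted):",
      round(float(np.corrcoef(Z @ true_effects, gebv)[0, 1]), 2))
```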

13.
We answer several important questions concerning EEG. We also briefly discuss the importance of nonlinear methods of contemporary physics in EEG analysis. Basic definitions and explanations of fundamental concepts may be found in my previous publications in NBP.

14.
An important experimental design problem in early-stage drug discovery is how to prioritize available compounds for testing when very little is known about the target protein. Informer-based ranking (IBR) methods address the prioritization problem when the compounds have provided bioactivity data on other potentially relevant targets. An IBR method selects an informer set of compounds, and then prioritizes the remaining compounds on the basis of new bioactivity experiments performed with the informer set on the target. We formalize the problem as a two-stage decision problem and introduce the Bayes Optimal Informer SEt (BOISE) method for its solution. BOISE leverages a flexible model of the initial bioactivity data, a relevant loss function, and effective computational schemes to resolve the two-step design problem. We evaluate BOISE and compare it to other IBR strategies in two retrospective studies, one on protein-kinase inhibition and the other on anticancer drug sensitivity. In both empirical settings BOISE exhibits better predictive performance than available methods. It also behaves well with missing data, where methods that use matrix completion show worse predictive performance.

15.

Background

The exponential growth of next generation sequencing (NGS) data has posed major challenges to data storage, management and archiving. Data compression is one of the effective solutions, and reference-based compression strategies can typically achieve superior compression ratios compared with those that do not rely on any reference.

Results

This paper presents a lossless, light-weight reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are used to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together and compressed with a general-purpose compression algorithm such as LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms.
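The record-splitting step described above can be illustrated independently of LW-FQZip itself: the sketch below separates FASTQ records into metadata, read and quality streams and run-length-limits the quality string; the mapping and LZMA stages are omitted and the format handling is deliberately simplified.

```python
# Sketch: split FASTQ records into metadata / read / quality streams; RLE the quality stream.
from itertools import groupby

fastq_text = "@read1 lane=1\nACGTACGTAC\n+\nIIIIIHHHFF\n@read2 lane=1\nTTGCAACGTA\n+\nIIIIIIIIGG\n"

def split_streams(text):
    meta, reads, quals = [], [], []
    lines = text.strip().split("\n")
    for i in range(0, len(lines), 4):                 # FASTQ records are 4 lines each
        meta.append(lines[i])
        reads.append(lines[i + 1])
        quals.append(lines[i + 3])
    return meta, reads, quals

def rle(s, max_run=255):
    out = []
    for ch, run in groupby(s):
        n = len(list(run))
        while n > 0:                                   # run-length-limited encoding
            out.append((ch, min(n, max_run)))
            n -= max_run
    return out

meta, reads, quals = split_streams(fastq_text)
print("metadata stream :", meta)
print("quality RLE     :", [rle(q) for q in quals])
# Each stream would then be compressed separately (e.g., with lzma.compress).
```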

Conclusions

LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.

16.
With larger, higher speed detectors and improved automation, individual CryoEM instruments are capable of producing a prodigious amount of data each day, which must then be stored, processed and archived. While it has become routine to use lossless compression on raw counting-mode movies, the averages which result after correcting these movies no longer compress well. These averages could be considered sufficient for long term archival, yet they are conventionally stored with 32 bits of precision, despite high noise levels. Derived images are similarly stored with excess precision, providing an opportunity to decrease project sizes and improve processing speed. We present a simple argument based on propagation of uncertainty for safe bit truncation of flat-fielded images combined with lossless compression. The same method can be used for most derived images throughout the processing pipeline. We test the proposed strategy on two standard, data-limited CryoEM data sets, demonstrating that these limits are safe for real-world use. We find that 5 bits of precision is sufficient for virtually any raw CryoEM data and that 8–12 bits is sufficient for intermediate averages or final 3-D structures. Additionally, we detail and recommend specific rules for discretization of data as well as a practical compressed data representation that is tuned to the specific needs of CryoEM.
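The core argument — values can be quantized to a step comparable to their noise level and then compressed losslessly with no practical loss — can be sketched with numpy and zlib as below. The quantization step relative to the noise and the toy image statistics are illustrative, not the paper's exact rule.

```python
# Sketch: quantize a noisy float image to a coarse step tied to its noise, then compress losslessly.
import numpy as np
import zlib

rng = np.random.default_rng(8)
image = rng.normal(0.0, 1.0, (1024, 1024)).astype(np.float32)      # noise-dominated "image"
sigma = float(image.std())

step = sigma / 4.0                                  # quantization step well below the noise level
quantized = np.round(image / step).astype(np.int16)

raw = zlib.compress(image.tobytes(), level=6)
trunc = zlib.compress(quantized.tobytes(), level=6)
print("float32 + deflate  :", len(raw) // 1024, "KiB")
print("quantized + deflate:", len(trunc) // 1024, "KiB")
print("max error / sigma  :", float(np.abs(quantized * step - image).max() / sigma))
```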

17.
The exclusive use of characters coding for specific life stages may bias tree reconstruction. If characters from several life stages are coded, the type of coding becomes important. Here, we simulate the influence on tree reconstruction of morphological characters of Odonata larvae incorporated into a data matrix based on the adult body under different coding schemes. For testing purposes, our analysis is focused on a well‐supported hypothesis: the relationships of the suborders Zygoptera, ‘Anisozygoptera’, and Anisoptera. We studied the cephalic morphology of Epiophlebia, a key taxon among Odonata, and compared it with representatives of Zygoptera and Anisoptera in order to complement the data matrix. Odonate larvae are characterized by a peculiar morphology, such as the specific head form, mouthpart configuration, ridge configuration, cephalic musculature, and leg and gill morphology. Four coding strategies were used to incorporate the larval data: artificial coding (AC), treating larvae as independent terminal taxa; non‐multistate coding (NMC), preferring the adult life stage; multistate coding (MC); and coding larval and adult characters separately (SC) within the same taxon. As expected, larvae are ‘monophyletic’ in the AC strategy, but with anisopteran and zygopteran larvae as sister groups. Excluding larvae in the NMC approach leads to strong support for both monophyletic Odonata and Epiprocta, whereas MC erodes phylogenetic signal completely. This is an obvious result of the larval morphology leading to many multistate characters. SC results in the strongest support for Odonata, and Epiprocta receives the same support as with NMC. Our results show the deleterious effects of larval morphology on tree reconstruction when multistate coding is applied. Coding larval characters separately is still the best approach in a phylogenetic framework.

18.
Poddubnaya, E. P. Neurophysiology, 2002, 34(5): 373-385
We carried out a computer analysis of the EEG of 169 healthy schoolchildren (6 to 17 years old) with the use of a periodometric approach allowing us to obtain a number of quantitative indices that characterize the temporal structure of the analyzed EEG segment (histogram of distribution of the frequencies of EEG oscillations within the analyzed time period, indices of the different rhythms, and matrix of the probabilities of conversion from waves of one frequency range to waves of other ranges). We demonstrated that data of the periodometric analysis can be used for objective classification of EEG patterns. In children of different age groups, five types of background EEG activity were classified and described; we also demonstrated that the intragroup frequencies of these EEG types vary in healthy children with age. We discuss the advantages and disadvantages of the periodometric analysis of EEG, as well as the prospects and expediency of use of this analysis in physiological studies and in clinics.
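The periodometric idea — measure the duration of each EEG wave and build a histogram of per-wave frequencies — can be sketched with a simple zero-crossing period detector; the band boundaries, sampling rate and synthetic signal below are assumptions.

```python
# Sketch: zero-crossing periodometry - per-wave frequency histogram for an EEG segment.
import numpy as np

def wave_frequencies(x, fs=250.0):
    zc = np.where(np.diff(np.signbit(x).astype(np.int8)) != 0)[0]   # zero-crossing indices
    half_periods = np.diff(zc) / fs                                 # duration of each half-wave (s)
    return 1.0 / (2.0 * half_periods[half_periods > 0])

def band_histogram(freqs):
    bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return {name: int(((freqs >= lo) & (freqs < hi)).sum()) for name, (lo, hi) in bands.items()}

rng = np.random.default_rng(9)
t = np.arange(0, 10, 1 / 250.0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.2 * rng.normal(size=t.size)
print(band_histogram(wave_frequencies(eeg)))
```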

19.
An efficient and reliable software-based ECG data compression and transmission scheme is proposed here. The algorithm has been applied to ECG data from all 12 leads taken from the PTB diagnostic ECG database (PTB-DB). First, R-peaks are detected by a differentiation and squaring technique and QRS regions are located. To achieve strictly lossless compression in the QRS regions and a tolerable lossy compression in the rest of the signal, two different compression algorithms have been used. The whole compression scheme is designed such that the compressed file contains only ASCII characters. These characters are transmitted using an internet-based Short Message Service (SMS) and, at the receiving end, the original ECG signal is recovered using the reverse of the compression logic. It is observed that the proposed algorithm can reduce the file size significantly (compression ratio: 22.47) while preserving the ECG signal morphology.
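The R-peak detection step named above (differentiation followed by squaring, then windowed integration and thresholding, in the style of Pan–Tompkins detectors) can be sketched as below; the window length, threshold and toy signal are illustrative, and the two compression stages are not reproduced.

```python
# Sketch: R-peak detection by differentiation, squaring, moving-window integration and thresholding.
import numpy as np

def detect_r_peaks(ecg, fs=360, win=0.15, refractory=0.2):
    d = np.diff(ecg, prepend=ecg[0])                  # differentiate
    sq = d * d                                        # square to emphasize steep QRS slopes
    w = int(win * fs)
    energy = np.convolve(sq, np.ones(w) / w, mode="same")
    thresh = 0.5 * energy.max()
    peaks, last = [], -int(refractory * fs)
    for i in range(1, len(energy) - 1):
        if energy[i] > thresh and energy[i] >= energy[i - 1] and energy[i] >= energy[i + 1]:
            if i - last >= int(refractory * fs):      # enforce a refractory period
                peaks.append(i)
                last = i
    return peaks

fs = 360
t = np.arange(0, 5, 1 / fs)
ecg = np.zeros_like(t)
ecg[(t * 1.2 % 1.0) < 0.02] = 1.0                     # crude spike train standing in for QRS complexes
print("detected R-peak count:", len(detect_r_peaks(ecg, fs)))
```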

20.
Brain waves are proposed as a biometric for verification of the identities of individuals in a small group. The approach is based on a novel two-stage biometric authentication method that minimizes both false accept error (FAE) and false reject error (FRE). These brain waves (or electroencephalogram (EEG) signals) are recorded while the user performs either one or several thought activities. As different individuals have different thought processes, this idea would be appropriate for individual authentication. In this study, autoregressive coefficients, channel spectral powers, inter-hemispheric channel spectral power differences, inter-hemispheric channel linear complexity and non-linear complexity (approximate entropy) values were used as EEG features by the two-stage authentication method with a modified four fold cross validation procedure. The results indicated that perfect accuracy was obtained, i.e. the FRE and FAE were both zero when the proposed method was tested on five subjects using certain thought activities. This initial study has shown that the combination of the two-stage authentication method with EEG features from thought activities has good potential as a biometric as it is highly resistant to fraud. However, this is only a pilot type of study and further extensive research with more subjects would be necessary to establish the suitability of the proposed method for biometric applications.
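Autoregressive coefficients are one of the feature types listed above; the sketch below fits an order-p AR model by least squares and would use the coefficient vector as one input to the verification classifier. The model order and toy signals are assumptions.

```python
# Sketch: least-squares AR(p) coefficients of an EEG channel as a biometric feature vector.
import numpy as np

def ar_coefficients(x, p=6):
    """Regress x[t] on its previous p samples: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
    X = np.stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)], axis=1)
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(10)
n = 2000
subject_a = np.sin(np.linspace(0, 100, n)) + 0.3 * rng.normal(size=n)
subject_b = np.sin(np.linspace(0, 160, n)) + 0.3 * rng.normal(size=n)
print("AR features, subject A:", np.round(ar_coefficients(subject_a), 2))
print("AR features, subject B:", np.round(ar_coefficients(subject_b), 2))
# Per-channel AR vectors would be combined with spectral-power and complexity features
# and passed to the two-stage classifier for identity verification.
```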
