Similar Literature
20 similar records found.
1.
Conduction of tele-3D-computer-assisted operations, as well as other telemedicine procedures, often requires the highest possible quality of transmitted medical images and video. Unfortunately, these data types always carry high telecommunication and storage costs, which sometimes prevents more frequent use of such procedures. We present a novel algorithm for lossless compression of medical images that is extremely helpful in reducing telecommunication and storage costs. The algorithm models the image properties around the current, unknown pixel and adjusts itself to the local image region. The main contribution of this work is the enhancement of the well-known approach of predictor blends through highly adaptive determination of the blending context on a pixel-by-pixel basis using a classification technique. We show that this approach is well suited to medical image data compression. Results obtained with the proposed compression method on medical images are very encouraging, beating several well-known lossless compression methods. The proposed predictor can also be used in other image processing applications such as segmentation and extraction of image regions.
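The abstract does not spell out the predictor itself, so the sketch below illustrates the general idea of causal, context-adaptive pixel prediction with the classic MED predictor from JPEG-LS (LOCO-I) — a stand-in for the paper's predictor blends, not its actual method. Function name and implementation are ours; a lossless coder would entropy-code the residuals `img - med_predict(img)`.

```python
import numpy as np

def med_predict(img):
    """Median Edge Detector (MED) predictor from JPEG-LS, shown as a simple
    stand-in for per-pixel adaptive prediction (illustrative, not the paper's)."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0                 # west neighbour
            b = img[y - 1, x] if y > 0 else 0                 # north neighbour
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0   # north-west
            if c >= max(a, b):
                pred[y, x] = min(a, b)    # likely horizontal/vertical edge
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c    # smooth region: local plane fit
    return pred
```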

2.
This paper presents the AutoQual elastography method: a novel algorithm that improves the quality of 2D displacement field calculation from ultrasound radio frequency (RF) sequences of acutely ruptured Achilles tendons, determines image-lateral strain fields, and has potential use for ligaments and muscles. The method combines 2D bicubic spline interpolation of the RF signal with Quality Determined Search, Automatic Search Range and Adaptive Block Size components, a novel combination designed to improve continuity and decrease displacement field noise, especially in areas of low signal strength. We present a simple experiment that quantitatively compares the AutoQual method with a multiscale (MS) elastography method on ultrasound RF sequences of a 5% agar phantom under rigid body motion and known lateral strain loads at speeds up to 5 mm/s. Finally, we present examples of four in vivo Achilles tendons in various damage states with manually or artificially controlled passive flexion of the foot. Results show that AutoQual offers a substantial improvement over the MS method, achieving similar performance for rigid body tracking at all speeds, a lower normalized square error at all induced strains, and a more continuous strain field at higher compression rates. AutoQual also showed a greater average normalized cross-correlation for image blocks in the area of interest, a lower standard deviation of the strain field, and visually more acceptable point tracking for the in vivo examples. This work demonstrates lateral ultrasound elastography that is robust to the complex passive motion of the Achilles tendon and to the imaging artifacts associated with tendon rupture. The method potentially has wide clinical application for assessing in vivo strains in, and hence the mechanical function of, any longitudinally loaded tissue near the skin surface.
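At its core, this class of method tracks blocks of RF data between frames by normalized cross-correlation (NCC); the per-block NCC is also the "quality" that can gate an adaptive search order. A minimal exhaustive-search sketch under assumed parameters (block and search sizes are illustrative, and the caller must keep the search window inside the image):

```python
import numpy as np

def ncc_displacement(pre, post, y, x, block=32, search=8):
    """Displacement of one RF block via exhaustive NCC search (illustrative)."""
    ref = pre[y:y + block, x:x + block].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_ncc = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = post[y + dy:y + dy + block, x + dx:x + dx + block].astype(float)
            if cand.shape != ref.shape:
                continue                      # window fell off the image edge
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            ncc = float((ref * cand).mean())
            if ncc > best_ncc:
                best_ncc, best = ncc, (dy, dx)
    return best, best_ncc   # a low best_ncc marks a low-quality, low-signal block
```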

3.
The goal of the current study was to investigate the fidelity of a 2D ultrasound elastography method for the measurement of tendon motion and strain. Ultrasound phantoms and ex vivo porcine flexor tendons were cyclically stretched to 4% strain while cine ultrasound radiofrequency (RF) data and video data were collected simultaneously. 2D ultrasound elastography was used to estimate tissue motion and strain from the RF data, and surface tissue motion and strain were estimated separately using digital image correlation (DIC). There were strong correlations (R² > 0.97) between DIC and RF measurements of phantom displacement and strain, and good agreement in estimates of peak phantom strain (DIC: 3.5 ± 0.2%; RF: 3.7 ± 0.1%). For tendon, elastographic estimates of displacement profiles also correlated well with DIC measurements (R² > 0.92) and exhibited similar estimated peak tendon strain (DIC: 2.6 ± 1.4%; RF: 2.2 ± 1.3%). Elastographic tracking with B-mode images tended to under-predict peak strain for both the phantom and the tendon. This study demonstrates the capacity of quantitative elastographic techniques to measure tendon displacement and strain within an ultrasound image window. The approach may be extendable to in vivo use in humans, which would allow non-invasive analysis of tendon deformation in both normal and pathological states.
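Once a displacement field has been tracked, strain is simply its spatial derivative. A one-liner sketch of that step (the file name is hypothetical; real pipelines usually use least-squares gradient estimators for noise robustness, which `np.gradient` approximates here):

```python
import numpy as np

u = np.load("displacement.npy")           # hypothetical 2-D axial displacement field
axial_strain = np.gradient(u, axis=0)     # strain = d(displacement)/d(axial position)
peak_strain = np.percentile(np.abs(axial_strain), 99)   # robust "peak" estimate
```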

4.
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance, but it also brings extremely high computational complexity. This paper presents innovative improvements to the coding tree that further reduce encoding time: a novel low-complexity coding tree mechanism for fast HEVC coding unit (CU) encoding. Firstly, the paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed to address the probability-model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding, helping to improve coding performance under various application conditions.
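The abstract does not define the probability model, but the idea of conditioning CU split statistics on QP and refreshing them as content changes can be sketched with a toy counting model (class and keying scheme are our invention, not HEVC reference code):

```python
from collections import defaultdict

class CUSplitModel:
    """Toy probability model for CU split decisions, keyed on QP bucket and
    tree depth; update() keeps it tracking content change. Illustrative only."""
    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])   # Laplace prior [no-split, split]

    def p_split(self, qp, depth):
        no, yes = self.counts[(qp // 6, depth)]
        return yes / (no + yes)

    def update(self, qp, depth, did_split):
        self.counts[(qp // 6, depth)][int(did_split)] += 1

model = CUSplitModel()
if model.p_split(qp=32, depth=2) < 0.05:
    pass   # confident "no split": skip the recursive rate-distortion search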

5.
A novel full-inversion-based technique for quantitative ultrasound elastography was investigated in a pilot clinical study of five patients for non-invasive detection and localization of prostate cancer and quantification of its extent. Conventional-frequency ultrasound images and radiofrequency (RF) data (~5 MHz) were collected during mechanical stimulation of the prostate with a transrectal ultrasound probe. Pre- and post-compression RF data were used to construct strain images. Young's modulus (YM) images were subsequently reconstructed from the derived strain images and a stress distribution estimated iteratively using finite element (FE) analysis. Tumor regions determined from the reconstructed YM images were compared with whole-mount histopathology images of radical prostatectomy specimens. Results indicated that tumors were significantly stiffer than the surrounding tissue, with a relative YM of 2.5 ± 0.8 compared to normal prostate tissue. The YM images agreed well with the histopathology images in terms of tumor location within the prostate: on average, 76% ± 28% of the tumor regions detected by the proposed method lay inside the respective tumor areas identified in the histopathology images. A linear regression analysis demonstrated a good correlation between the disease extents estimated from the reconstructed YM images and those determined from whole-mount histopathology images (r² = 0.71). This pilot study demonstrates that the proposed method has good potential for detection, localization and quantification of prostate cancer, and could be used to guide prostate needle biopsy with the aim of decreasing the number of biopsies. The technique uses only a conventional ultrasound imaging system, with no additional hardware for mechanical stimulation or data acquisition, and may therefore be regarded as a non-invasive, low-cost and potentially widely available clinical tool for prostate cancer diagnosis.
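As a rough intuition for the inversion, the zeroth iteration of such a reconstruction assumes a uniform axial stress, so modulus is stress over strain pixel-wise; the paper then refines the stress field iteratively with FE analysis, which this sketch deliberately omits:

```python
import numpy as np

def ym_first_pass(strain, sigma0=1.0, eps=1e-6):
    """Iteration-zero modulus map under a uniform-stress assumption (ours,
    not the paper's full FE-based inversion). Returns stiffness relative
    to the background, so the arbitrary sigma0 cancels out."""
    E = sigma0 / np.maximum(np.abs(strain), eps)   # stiff tissue strains less
    return E / np.median(E)                        # relative Young's modulus
```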

6.
Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and transfer of large genome data sets are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm that highlights the synthesis of complementary contextual models to improve compression performance. The proposed framework handles genome compression both with and without reference sequences, and demonstrates performance advantages over the best existing algorithms. The reference-free mode achieved bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, approximately 3.7% and 2.6% better than the state-of-the-art algorithms. For reference-based performance, we tested on the first Korean personal genome sequence data set, where the proposed method achieved a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a decompression cost comparable to existing algorithms. DNAcompact is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes.
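The "bits per base" figures come from contextual modelling: each base is predicted from the bases before it and coded with roughly -log2(p) bits. A toy order-k context model with Laplace smoothing (not the paper's model, which blends several complementary contexts) shows how such a rate is measured:

```python
from collections import defaultdict
from math import log2

def context_model_bits_per_base(seq, order=2):
    """Empirical bits/base of an adaptive order-k context model over {A,C,G,T};
    an ideal arithmetic coder would achieve approximately this rate."""
    counts = defaultdict(lambda: defaultdict(int))
    bits = 0.0
    for i in range(order, len(seq)):
        ctx, sym = seq[i - order:i], seq[i]
        c = counts[ctx]
        total = sum(c.values()) + 4            # Laplace: +1 per alphabet symbol
        bits += -log2((c[sym] + 1) / total)    # code length of this base
        c[sym] += 1                            # adapt the model as we go
    return bits / (len(seq) - order)
```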

7.
An efficient and reliable software-based ECG data compression and transmission scheme is proposed here. The algorithm has been applied to ECG data from all 12 leads taken from the PTB diagnostic ECG database (PTB-DB). First, R-peaks are detected by a differentiation-and-squaring technique and the QRS regions are located. To achieve strictly lossless compression in the QRS regions and tolerable lossy compression in the rest of the signal, two different compression algorithms are used. The whole compression scheme is designed so that the compressed file contains only ASCII characters. These characters are transmitted using the internet-based Short Message Service (SMS), and at the receiving end the original ECG signal is recovered by reversing the compression logic. The proposed algorithm reduces file size significantly (compression ratio: 22.47) while preserving ECG signal morphology.
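The differentiation-and-squaring step can be sketched directly; the window length, threshold and refractory logic below are illustrative guesses rather than the paper's tuned values:

```python
import numpy as np

def detect_r_peaks(ecg, fs=1000, win_ms=150):
    """Differentiate-and-square R-peak detector (parameters are assumptions)."""
    d = np.diff(ecg)                              # differentiation emphasises slope
    sq = d * d                                    # squaring rectifies, sharpens QRS
    win = int(fs * win_ms / 1000)
    energy = np.convolve(sq, np.ones(win) / win, mode="same")   # moving average
    thr = 0.3 * energy.max()
    peaks, last = [], -win
    for i in range(1, len(energy) - 1):
        if energy[i] > thr and energy[i] >= energy[i - 1] and energy[i] > energy[i + 1]:
            if i - last > win:                    # refractory: one peak per QRS
                peaks.append(i)
                last = i
    return peaks                                  # QRS regions span each peak
```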

8.
In this paper we consider a theoretical evaluation of a data and text compression algorithm based on the Burrows-Wheeler Transform (BWT) and the General Bidirectional Associative Memory (GBAM). A new lossless data and text compression method, based on the combination of the BWT and GBAM approaches, is presented. The algorithm was tested on many texts in different formats (ASCII and RTF). The compression ratio achieved is fairly good, on average 28-36%, and decompression is fast.
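The GBAM stage is specific to the paper, but the BWT front end is standard: sort all rotations of the input and keep the last column, which clusters like characters so a later stage compresses them easily. A naive O(n² log n) sketch (real implementations use suffix arrays):

```python
def bwt(s, sentinel="\x00"):
    """Naive Burrows-Wheeler Transform: sort rotations, keep the last column."""
    s += sentinel                               # unique terminator makes BWT invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

print(bwt("banana"))   # like characters cluster, ready for the second-stage coder
```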

9.
Electrocardiogram (ECG) compression can significantly reduce the storage and transmission burden for long-term recording systems and telemedicine applications. In this paper, an improved wavelet-based compression method is proposed. A discrete wavelet transform (DWT) is first applied to the mean-removed ECG signal. The DWT coefficients, taken in hierarchical tree order, form a vector called the tree vector (TV). The TV is then quantized with a vector-scalar quantizer (VSQ), composed of a dynamic learning vector quantizer and a uniform scalar dead-zone quantizer. Context-modeling arithmetic coding is finally employed to encode the quantized coefficients from the VSQ. All tested records were selected from the Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia database. Statistical results show that the proposed method outperforms several published compression algorithms.
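A sketch of the DWT plus the scalar half of the VSQ, using PyWavelets; the learned vector-quantizer half and the arithmetic coder are omitted, and the wavelet, level and step size are illustrative choices (file name hypothetical):

```python
import numpy as np
import pywt   # PyWavelets

def deadzone_quantize(x, step):
    """Uniform scalar dead-zone quantizer: coefficients with |c| < step map to 0."""
    return np.sign(x) * np.floor(np.abs(x) / step)

ecg = np.loadtxt("ecg.txt")                    # hypothetical single-lead record
ecg -= ecg.mean()                              # mean removal, as in the paper
coeffs = pywt.wavedec(ecg, "db4", level=5)     # DWT subbands, coarse to fine
tree_vector = np.concatenate(coeffs)           # flatten in hierarchical order
q = deadzone_quantize(tree_vector, step=0.05)  # integers for the entropy coder
```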

10.

Background

The exponential growth of next-generation sequencing (NGS) data has posed big challenges for data storage, management and archiving. Data compression is one of the effective solutions, and reference-based compression strategies can typically achieve superior compression ratios compared to those that do not rely on any reference.

Results

This paper presents a lossless, light-weight, reference-based compression algorithm named LW-FQZip to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently (a simplified sketch of this stream splitting appears after this item). In particular, well-designed incremental and run-length-limited encoding schemes are used to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to map them quickly against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with a general-purpose compression algorithm such as LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201, comparable or superior to other state-of-the-art lossless NGS data compression algorithms.

Conclusions

LW-FQZip is a program that enables efficient lossless FASTQ data compression and contributes to the state of the art in NGS data storage and transmission. LW-FQZip is freely available online at http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
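As flagged in the Results above, here is a simplified sketch of the three-stream splitting, with shared-prefix delta-encoding of headers standing in for LW-FQZip's incremental scheme (the delta format and the file name are our assumptions):

```python
import gzip

def split_fastq_streams(path):
    """Parse FASTQ into metadata, read and quality streams for separate coding."""
    ids, reads, quals = [], [], []
    prev = ""
    with open(path) as fh:
        for i, line in enumerate(fh):
            line = line.rstrip("\n")
            k = i % 4
            if k == 0:                        # @metadata: store delta vs. previous
                j = 0
                while j < min(len(prev), len(line)) and prev[j] == line[j]:
                    j += 1
                ids.append(f"{j}|{line[j:]}")   # shared-prefix length + new suffix
                prev = line
            elif k == 1:
                reads.append(line)
            elif k == 3:
                quals.append(line)
    return ids, reads, quals

ids, reads, quals = split_fastq_streams("sample.fastq")   # hypothetical input
blob = gzip.compress("\n".join(quals).encode())           # general-purpose back end
```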

11.
The introduction of fast CMOS detectors is moving the field of transmission electron microscopy into the computer-science domain of big data. Automated data pipelines control the instrument and the initial processing steps, which imposes more onerous data transfer and archiving requirements. Here we conduct a technical demonstration in which storage and read/write times are improved 10× at a dose rate of 1 e⁻/pix/frame for data from a Gatan K2 direct-detection device by a combination of integer decimation and lossless compression. The example project is hosted at github.com/em-MRCZ and released under the BSD license.
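The key observation is that counting-mode frames at ~1 e⁻/pix/frame hold tiny integers even when stored as float32, so decimating to a narrow integer type before lossless compression captures most of the saving. A sketch of that idea (not the em-MRCZ implementation):

```python
import numpy as np
import zlib

def pack_counting_frame(frame):
    """Integer decimation plus lossless compression for a counting-mode frame."""
    counts = np.rint(frame).astype(np.uint8)    # decimate: values are small ints
    return zlib.compress(counts.tobytes(), 1)   # fast lossless back end
```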

12.
With larger, higher-speed detectors and improved automation, individual CryoEM instruments are capable of producing a prodigious amount of data each day, which must then be stored, processed and archived. While it has become routine to use lossless compression on raw counting-mode movies, the averages that result after correcting these movies no longer compress well. These averages could be considered sufficient for long-term archival, yet they are conventionally stored with 32 bits of precision despite high noise levels. Derived images are similarly stored with excess precision, providing an opportunity to decrease project sizes and improve processing speed. We present a simple argument based on propagation of uncertainty for safe bit truncation of flat-fielded images combined with lossless compression; the same method can be used for most derived images throughout the processing pipeline. We test the proposed strategy on two standard, data-limited CryoEM data sets, demonstrating that these limits are safe for real-world use. We find that 5 bits of precision is sufficient for virtually any raw CryoEM data and that 8-12 bits is sufficient for intermediate averages or final 3-D structures. Additionally, we detail and recommend specific rules for discretization of data, as well as a practical compressed data representation tuned to the specific needs of CryoEM.
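A minimal sketch of bit truncation followed by lossless compression; the 5-bit figure follows the abstract, but the simple min/max scaling rule here is a stand-in for the paper's uncertainty-propagation argument:

```python
import numpy as np
import zlib

def truncate_bits(img, keep_bits=5):
    """Quantize a flat-fielded float image to keep_bits of precision, then
    compress losslessly. Store (lo, hi) alongside the blob to dequantize."""
    lo, hi = float(img.min()), float(img.max())
    levels = (1 << keep_bits) - 1                      # e.g. 31 levels for 5 bits
    q = np.rint((img - lo) / (hi - lo) * levels).astype(np.uint8)
    return zlib.compress(q.tobytes()), (lo, hi)
```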

13.
In medical ultrasound imaging, frequency-dependent attenuation causes a downshift in the center frequency of transmitted ultrasound as it propagates through the body. The downshift results in a considerable loss of signal-to-noise ratio (SNR) after quadrature demodulation (QDM), which involves down-mixing and low-pass filtering. To overcome this problem, dynamic QDMs have been proposed in which the change in center frequency along the axial direction is obtained using autocorrelation-based spectral estimation and compensated in the QDM block. As an alternative, this paper proposes an adaptive dynamic QDM using a 2nd-order autoregressive model. Its main advantage over conventional dynamic QDMs is that it uses real radio-frequency (RF) data in the spectral estimation, whereas its counterparts require additional steps to obtain either complex RF signals or complex baseband signals; this allows the proposed method to be used with minimal modification of the signal-processing blocks. The performance of the proposed method was evaluated through in vitro and in vivo experiments and compared with that of the conventional dynamic QDM. The experiments showed that the proposed method improved SNR by up to 7.8 dB in the near field compared with the conventional dynamic QDM. In the far field, however, the SNR improvement was similar to its counterpart's, which may be explained by the fact that far-field signal loss results mainly from amplitude attenuation and diffraction rather than from the frequency downshift. In addition, the proposed method improved contrast resolution (CR) by at least 6.8% compared with the conventional dynamic QDM. The experimental results demonstrate that the proposed method can effectively improve the SNR and CR of ultrasound images.
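An AR(2) model fitted to a real RF segment has a complex pole pair whose angle tracks the dominant spectral peak, which is exactly the quantity a dynamic QDM must follow down the beam. A Yule-Walker sketch of that estimate (windowing strategy and details differ from the paper):

```python
import numpy as np

def ar2_center_frequency(rf, fs):
    """Center-frequency estimate of a real RF segment from an AR(2) fit:
    solve the Yule-Walker equations, then take the pole angle (illustrative)."""
    x = rf - rf.mean()
    r = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)]  # r0, r1, r2
    a = np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], [-r[1], -r[2]])
    poles = np.roots([1.0, a[0], a[1]])           # conjugate pair for resonant RF
    return abs(np.angle(poles[0])) / (2 * np.pi) * fs

# apply to successive axial windows and feed the estimates to the down-mixer
```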

14.
A major challenge of current high-throughput sequencing experiments is not only the generation of the sequencing data itself but also its processing, storage and transmission. The enormous size of these data motivates the development of data compression algorithms usable for implementing the various storage policies applied to the produced intermediate and final result files. In this article, we present NGC, a tool for the compression of mapped short-read data stored in the widespread SAM format. NGC enables lossless and lossy compression and introduces two novel ideas: first, a way to reduce the number of required code words by exploiting common features of reads mapped to the same genomic positions; second, a highly configurable scheme for the quantization of per-base quality values that takes their influence on downstream analyses into account. NGC, evaluated on several real-world data sets, saves 33-66% of disk space using lossless compression and up to 98% using lossy compression. By applying two popular variant- and genotype-prediction tools to the decompressed data, we show that the lossy compression modes preserve >99% of all called variants while outperforming comparable methods in some configurations.
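Quality-value quantization in this spirit maps each Phred score to a bin representative, shrinking the symbol alphabet at a controlled cost to downstream callers. As a concrete example, the standard Illumina 8-bin scheme (these bin edges are the published Illumina ones, not necessarily NGC's configuration):

```python
import numpy as np

EDGES = [1, 2, 10, 20, 25, 30, 35, 40]   # lower bound of each bin (Phred scale)
REPS  = [1, 6, 15, 22, 27, 33, 37, 40]   # representative value stored per bin

def bin_quality(q):
    """Map per-base Phred qualities to their bin representatives."""
    idx = np.searchsorted(EDGES, q, side="right") - 1
    return np.array(REPS)[np.clip(idx, 0, 7)]

print(bin_quality(np.array([3, 18, 31, 39])))   # -> [ 6 15 33 37]
```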

15.
In the last decade, the cost of genomic sequencing has decreased so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be stored efficiently, because storage cost is not decreasing as fast as the cost of sequencing. To cope, the most popular general-purpose compression tool, gzip, is usually used; however, such tools were not specifically designed to compress this kind of data and often fall short when the intention is to reduce the data size as much as possible. Several compression algorithms are available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, which contain alignments between the entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain of 34% to 57%, depending on the data set. Compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source code and binaries for several operating systems are freely available for non-commercial use at http://bioinformatics.ua.pt/software/mafco.

16.
In this paper, we study various lossless compression techniques for electroencephalograph (EEG) signals. We describe a computationally simple pre-processing technique in which the EEG signal is arranged in the form of a matrix (2-D) before compression, and a two-stage coder for the EEG matrix with a lossy coding layer (SPIHT) and a residual coding layer (arithmetic coding). This coder is optimally tuned to exploit the source memory and the i.i.d. nature of the residual. We also investigate and compare EEG compression with other schemes such as the JPEG2000 image compression standard, the prediction-based Shorten coder, and simple entropy coding. The compression algorithms are tested on the University of Bonn database and the PhysioBank Motor/Mental Imagery database. The 2-D compression schemes yielded higher lossless compression than the standard vector-based, predictive and entropy coding schemes: the pre-processing technique produced a 6% improvement, and the two-stage coder a further 3% improvement in compression performance.
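The pre-processing step itself is a simple fold of the 1-D signal into rows, so that an image coder such as SPIHT can exploit correlation between successive rows. A sketch (the row length is an illustrative choice; aligning rows to a characteristic signal period helps):

```python
import numpy as np

def eeg_to_matrix(eeg, row_len=256):
    """Fold a 1-D EEG record into a 2-D matrix of successive segments."""
    n = (len(eeg) // row_len) * row_len      # drop the ragged tail
    return np.asarray(eeg[:n]).reshape(-1, row_len)
```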

17.
The male fertility restorer (RF) proteins belong to extended protein families associated with cytoplasmic male sterility in higher plants. To date, there has been no devised nomenclature for naming RF proteins. The systematic sequencing of new plant species in recent years has uncovered several novel RF genes and their encoded proteins, whose naming has been largely arbitrary and cannot be adequately handled in the context of comparative functional genomics. We propose in this study a unified nomenclature for the extended RF protein families across all plant species. The new nomenclature builds on that previously developed for the first characterized RF gene, RF2A/ALDH2B2, a member of the ALDH gene superfamily, and adheres to the guidelines issued by the ALDH Gene Nomenclature Committee. Under the proposed nomenclature, the RF gene superfamily currently comprises members of 51 families. The unified nomenclature accommodates both functional RF genes and pseudogenes, and offers the flexibility needed to incorporate additional RFs as they become available in the future. In addition, we provide a phylogenetic analysis of the extended RF families and use computational protein modeling to demonstrate the high divergence of RF functional specializations through specific structural features of selected members of the RF superfamily.

18.
We present Quip, a lossless compression algorithm for next-generation sequencing data in the FASTQ and SAM/BAM formats. In addition to implementing reference-based compression, we have developed, to our knowledge, the first assembly-based compressor, using a novel de novo assembly algorithm. A probabilistic data structure is used to dramatically reduce the memory required by traditional de Bruijn graph assemblers, allowing millions of reads to be assembled very efficiently. Read sequences are then stored as positions within the assembled contigs. This is combined with statistical compression of read identifiers, quality scores, alignment information and sequences, effectively collapsing very large data sets to <15% of their original size with no loss of information. Availability: Quip is freely available under the 3-clause BSD license from http://cs.washington.edu/homes/dcjones/quip.
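The memory saving comes from representing the k-mer set probabilistically rather than exactly. A generic Bloom filter conveys the flavour (Quip's actual structure is its own design; class and sizing below are our assumptions, and membership queries can return false positives):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter for k-mer membership — a generic sketch of the kind
    of probabilistic structure that shrinks de Bruijn graph memory."""
    def __init__(self, m_bits=1 << 20, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, kmer):
        for i in range(self.k):
            h = hashlib.blake2b(kmer.encode(), salt=bytes([i])).digest()
            yield int.from_bytes(h[:8], "little") % self.m

    def add(self, kmer):
        for p in self._positions(kmer):
            self.bits[p >> 3] |= 1 << (p & 7)

    def __contains__(self, kmer):
        return all(self.bits[p >> 3] >> (p & 7) & 1 for p in self._positions(kmer))
```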

19.
Image compression is an application of data compression to digital images. Several lossy/lossless transform coding techniques are used for image compression; the discrete cosine transform (DCT) is one widely used technique. A variation of the DCT, known as the warped discrete cosine transform (WDCT), is used for 2-D image compression and has been shown to outperform the DCT at high bit-rates. We extend this concept and develop the 3-D WDCT, a transform that has not been previously investigated. We outline some of its important properties, which make it especially suitable for image compression, and then propose a complete image coding scheme for volumetric data sets based on the 3-D WDCT. It is shown that the 3-D WDCT-based compression scheme outperforms a similar 3-D DCT scheme for volumetric data sets at high bit-rates.
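For orientation, here is the 3-D DCT baseline that the paper compares against, with simple coefficient thresholding as the lossy step; the warped variant adds an all-pass frequency warping that this sketch does not model:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct3_compress(vol, keep=0.05):
    """3-D DCT coding baseline for a volumetric data set: keep only the
    `keep` fraction of largest-magnitude coefficients, then invert."""
    c = dctn(vol, norm="ortho")
    thr = np.quantile(np.abs(c), 1 - keep)
    c[np.abs(c) < thr] = 0.0                   # sparsify: what a coder would code
    return idctn(c, norm="ortho")              # lossy reconstruction
```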

20.
Sakib MN, Tang J, Zheng WJ, Huang CT. PLoS One. 2011;6(12):e28251.
Research in bioinformatics primarily involves the collection and analysis of large volumes of genomic data, which naturally demands efficient storage and transfer. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data, since one way to improve the transmission time of large files is to apply maximum lossless compression to them. In this paper, we present SAMZIP, a specialized encoding scheme for sequence alignment data in the SAM (Sequence Alignment/Map) format, which improves on the compression ratio of existing compression tools. To achieve this, we exploit prior knowledge of the file format and its specifications. Our experimental results show that our encoding scheme improves the compression ratio, thereby reducing overall transmission time significantly.
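A simple way to see the value of format awareness: split SAM records column-wise so that same-typed fields (positions, CIGARs, qualities) are compressed together rather than interleaved line by line. A sketch of that idea, not SAMZIP's actual encoding:

```python
import gzip

def samzip_like(path):
    """Column-wise splitting of SAM records before general-purpose compression."""
    cols = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith("@"):                       # header lines kept apart
                cols.setdefault("header", []).append(line.rstrip("\n"))
                continue
            for i, field in enumerate(line.rstrip("\n").split("\t")[:11]):
                cols.setdefault(i, []).append(field)       # 11 mandatory fields
    return {k: gzip.compress("\n".join(v).encode()) for k, v in cols.items()}
```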
