Similar Articles
20 similar articles found.
1.
This paper presents a high-payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance than its predecessor, H.264/AVC. Since HEVC is likely to be used in a wide variety of applications in the future, the proposed algorithm has high potential for use in applications involving broadcasting and the hiding of metadata. The watermark is embedded into the Quantized Transform Coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. Experimental results show that the proposed algorithm neither significantly affects video quality nor escalates the bitrate.
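The sketch below illustrates the general idea of coefficient-domain embedding with a simple LSB substitution on larger quantized coefficients; it is not the paper's actual embedding rule, and the magnitude threshold and bit placement are assumptions made only for this example.

```python
import numpy as np

def embed_bits(qtc_block, bits, threshold=1):
    """Embed watermark bits into the LSBs of quantized transform coefficients
    (QTCs) whose magnitude exceeds a threshold. Returns the marked block and
    the number of bits actually embedded."""
    block = qtc_block.copy()
    idx = 0
    for pos in np.ndindex(block.shape):
        if idx >= len(bits):
            break
        c = block[pos]
        if abs(c) > threshold:                 # skip zero/small coefficients
            sign = 1 if c >= 0 else -1
            mag = (abs(c) & ~1) | bits[idx]    # force the LSB to the message bit
            block[pos] = sign * mag
            idx += 1
    return block, idx

def extract_bits(qtc_block, n_bits, threshold=1):
    """Recover the embedded bits from the marked QTC block."""
    bits = []
    for pos in np.ndindex(qtc_block.shape):
        if len(bits) >= n_bits:
            break
        c = qtc_block[pos]
        if abs(c) > threshold:
            bits.append(int(abs(c) & 1))
    return bits

# toy 4x4 QTC block and a 3-bit message
block = np.array([[12, -7, 0, 1], [3, 0, 0, 0], [-5, 2, 0, 0], [0, 0, 0, 0]])
marked, n = embed_bits(block, [1, 0, 1])
print(extract_bits(marked, n))   # -> [1, 0, 1]
```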

2.
The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor, H.264. However, the encoding time complexity has also increased several-fold, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficiently selecting appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a saliency feature from human visual attention modelling and motion features from phase correlation. The features are combined through a fusion process, using a content-based adaptive weighted cost function, to determine a region-with-dominated-motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates, aligned to the HEVC-recommended block partitioning, to estimate a subset of inter-prediction modes. Instead of exhaustively exploring all modes available in the HEVC standard, only the selected subset of modes is motion-estimated and motion-compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance over a wide range of video sequences.
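A rough Python sketch of the saliency/motion fusion idea follows; the weighting factor, the 4x4 pattern granularity and the thresholding rule are assumptions made for illustration, not the paper's actual cost function or codebook.

```python
import numpy as np

def rdms_pattern(saliency, motion, alpha=0.5, grid=(4, 4)):
    """Fuse a saliency map and a motion (phase-correlation) magnitude map into
    a coarse binary pattern marking regions with dominant motion/saliency."""
    # normalise both cues to [0, 1] so they can be combined on one scale
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-9)
    m = (motion - motion.min()) / (motion.max() - motion.min() + 1e-9)
    cost = alpha * s + (1.0 - alpha) * m            # weighted fusion of the two cues

    h, w = cost.shape
    gh, gw = h // grid[0], w // grid[1]
    pattern = np.zeros(grid, dtype=np.uint8)
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = cost[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            pattern[i, j] = cell.mean() > cost.mean()   # 1 = dominant region
    return pattern

def best_template(pattern, codebook):
    """Pick the closest template from a (hypothetical) codebook of binary
    patterns aligned to the candidate block partitionings."""
    scores = [np.count_nonzero(pattern == t) for t in codebook]
    return int(np.argmax(scores))
```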

3.
In High Efficiency Video Coding (HEVC), the coding tree contributes to excellent compression performance, but it also brings extremely high computational complexity. This paper presents improvements to the coding tree aimed at further reducing encoding time. A novel low-complexity coding tree mechanism is proposed for fast HEVC coding unit (CU) encoding. Firstly, the paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed to address the model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding, and improves coding performance under various application conditions.
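A toy version of a QP- and depth-conditioned CU split probability model with a decay-style update after content change is sketched below; the Laplace priors, the decay factor and the skip threshold are assumptions, not the paper's model.

```python
from collections import defaultdict

class CUSplitModel:
    """Toy probability model for CU splitting decisions, indexed by
    (quantization parameter, CU depth). Counts are decayed when a large
    content change is detected, mimicking the probability-update step."""

    def __init__(self):
        self.counts = defaultdict(lambda: [1, 1])   # [split, no-split] Laplace priors

    def p_split(self, qp, depth):
        s, n = self.counts[(qp, depth)]
        return s / (s + n)

    def observe(self, qp, depth, was_split):
        self.counts[(qp, depth)][0 if was_split else 1] += 1

    def content_change_update(self, decay=0.5):
        # halve the accumulated evidence so the model adapts after a content change
        for key in self.counts:
            self.counts[key] = [max(1, c * decay) for c in self.counts[key]]

model = CUSplitModel()
model.observe(32, 1, True)
if model.p_split(32, 1) < 0.2:
    pass  # skip evaluating the split branch in the encoder's RD search
```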

4.
Objectives: Because of the large amount of medical imaging data, the transmission process becomes complicated in telemedicine applications. In order to adapt the data bit streams to bandwidth limitations, reducing the size of the data by image compression is essential. Despite improvements in the field of compression, the transmission itself can also introduce errors. It is therefore important to develop a strategy that reduces this volume of data without introducing distortion, while resisting the errors introduced by channel noise during transmission. In this paper, we propose a ROI-based coding strategy with unequal bit stream protection to meet this dual constraint.
Material and methods: The proposed ROI-based compression strategy with unequal bit stream protection is composed of three parts: extraction of the ROI, ROI-based coding, and unequal protection of the ROI bit stream. First, the Regions Of Interest (ROI) are extracted by hierarchical segmentation using a marker-based watershed technique combined with level-set active contours. The resulting regions are selectively encoded by a 3D coder based on a shape-adaptive discrete wavelet transform (3D-BISK), where the compression ratio of each region depends on its relevance to diagnosis. The obtained regions of interest are then protected with a Reed-Solomon error-correcting code whose code rate varies according to the relevance of the region, following an unequal error protection (UEP) strategy.
Results: The performance of the proposed compression scheme is evaluated in several ways. First, tests are performed to study the impact of errors on the different bit streams and the effect of varying the compression rates. Secondly, Reed-Solomon codes of different code rates are tested at different compression rates over a binary symmetric channel (BSC). Finally, the performance of this coding strategy is compared with that of 3D SPIHT for transmission over a BSC.
Conclusion: The obtained results show that the proposed method is quite efficient in reducing transmission time. The proposed scheme therefore reduces the volume of data without introducing distortion and resists the errors introduced by channel noise, as required in telemedicine.
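The following sketch illustrates unequal error protection with Reed-Solomon codes of different parity levels per region; it assumes the third-party reedsolo package and hypothetical region names, and the parity allocation is only an example, not the paper's configuration.

```python
from reedsolo import RSCodec   # third-party package, assumed available

# Unequal error protection: more Reed-Solomon parity bytes for the most
# diagnostically relevant ROI bit streams, fewer for the background.
PARITY = {"roi_primary": 32, "roi_secondary": 16, "background": 4}

def protect(streams):
    """streams: dict mapping region name -> compressed bit stream (bytes)."""
    protected = {}
    for name, data in streams.items():
        rsc = RSCodec(PARITY[name])          # code rate depends on region relevance
        protected[name] = bytes(rsc.encode(data))
    return protected

streams = {"roi_primary": b"\x01" * 200,
           "roi_secondary": b"\x02" * 200,
           "background": b"\x03" * 200}
for name, blob in protect(streams).items():
    print(name, "parity overhead (bytes):", len(blob) - 200)
```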

5.
Conducting tele-3D computer-assisted operations, as well as other telemedicine procedures, often requires the highest possible quality of transmitted medical images and video. Unfortunately, these data types are associated with high telecommunication and storage costs that sometimes prevent more frequent use of such procedures. We present a novel algorithm for lossless compression of medical images that is extremely helpful in reducing these telecommunication and storage costs. The algorithm models the image properties around the current, unknown pixel and adjusts itself to the local image region. The main contribution of this work is the enhancement of the well-known approach of predictor blends through highly adaptive determination of the blending context on a pixel-by-pixel basis using a classification technique. We show that this approach is well suited to medical image data compression. Results obtained with the proposed compression method on medical images are very encouraging, beating several well-known lossless compression methods. The proposed predictor can also be used in other image processing applications such as segmentation and extraction of image regions.
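A minimal sketch of a predictor blend with a crude, gradient-based blending context is shown below; the predictor set, weights and context rule are illustrative assumptions, not the classification-based scheme of the paper.

```python
import numpy as np

def blended_prediction(img, y, x, weights):
    """Predict pixel (y, x) as a weighted blend of simple causal predictors.
    `weights` would normally come from a learned blending context; here they
    are simply passed in."""
    W = img[y, x - 1] if x > 0 else 0              # west neighbour
    N = img[y - 1, x] if y > 0 else 0              # north neighbour
    NW = img[y - 1, x - 1] if x > 0 and y > 0 else 0
    predictors = np.array([
        W,                                          # horizontal
        N,                                          # vertical
        W + N - NW,                                 # planar (MED-style)
        (W + N) / 2.0,                              # average
    ])
    return float(np.dot(weights, predictors) / weights.sum())

def local_context(img, y, x):
    """Crude blending context: pick weights by the local gradient.
    A real coder would classify the causal neighbourhood instead."""
    W = img[y, x - 1] if x > 0 else 0
    N = img[y - 1, x] if y > 0 else 0
    if abs(int(W) - int(N)) < 4:
        return np.array([1.0, 1.0, 1.0, 2.0])      # smooth region: favour the average
    return np.array([2.0, 2.0, 1.0, 1.0])          # edge: favour directional predictors
```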

6.
A new lossless compression method using context modeling for ultrasound radio-frequency (RF) data is presented. In the proposed method, the combination of context modeling and entropy coding is used to effectively lower the data transfer rates of modern software-based medical ultrasound imaging systems. In phantom and in vivo data experiments, the proposed lossless compression method provides an average compression ratio of 0.45, compared with 0.52 and 0.55 for the Burg and JPEG-LS methods, respectively. This result indicates that the proposed compression method is capable of transferring 64-channel, 40-MHz ultrasound RF data over a 16-lane PCI-Express 2.0 bus for software beamforming in real time.

7.
IRBM, 2022, 43(3): 217-228
Objective: Globally, cardiovascular diseases (CVDs) are among the leading causes of death. Electrocardiogram (ECG) signals are widely used in medical screening and diagnostic procedures for CVDs. Early detection of CVDs requires the acquisition of longer ECG signals, which has triggered the development of personal healthcare systems that cardio-patients can use to manage the disease. These healthcare systems continuously record, store, and transmit ECG data via wired/wireless communication channels. Such systems face several issues, including limited data storage, limited bandwidth and limited battery life, all of which can be addressed by ECG data compression techniques.
Method: Numerous ECG data compression techniques have been proposed in the past. This paper presents a methodological review of different ECG data compression techniques based on their experimental performance on ECG records of the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database.
Results: It is observed that the experimental performance of different compression techniques depends on several parameters, and the existing techniques are validated using different distortion measures.
Conclusion: This study elaborates the advantages and disadvantages of different ECG data compression techniques and covers the different validation methods used for them. Although compression techniques have been developed very widely, the validation of compression methods is still a prospective research area for achieving efficient and reliable performance.
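For reference, the distortion and size measures commonly used to validate ECG compression can be computed as in the following sketch; the PRD here is the usual definition without mean removal (normalized PRDN variants subtract the signal mean first), and the test signal is a synthetic placeholder.

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference (lower is better)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    """Root-mean-square error between the original and reconstructed signals."""
    return np.sqrt(np.mean((x - x_rec) ** 2))

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

# toy ECG-like signal with a small reconstruction error
t = np.linspace(0, 1, 360)
x = np.sin(2 * np.pi * 5 * t)
x_rec = x + 0.01 * np.random.randn(t.size)
print(f"PRD = {prd(x, x_rec):.2f}%, RMSE = {rmse(x, x_rec):.4f}")
print("CR =", compression_ratio(360 * 11, 360 * 2))   # e.g. 11 bits/sample -> 2 bits/sample
```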

8.
Recently, single image super-resolution reconstruction (SISR) via sparse coding has attracted increasing interest. Considering that there are obviously repetitive structures in medical images, in this study we propose a regularized SISR method based on sparse coding and structural similarity. A pixel-based recovery term is incorporated as a regularization term to exploit the non-local structural similarities of medical images, which is very helpful in further improving the quality of the recovered images. An optimization algorithm that alternates over the variables is proposed, and medical images including CT, MRI and ultrasound images are used to investigate the performance of the proposed method. The results show the superiority of our method over its counterparts.

9.
Gao Hang, Gao Tiegang. Cluster Computing, 2022, 25(1): 707-725

To protect the security of data outsourced to the cloud, tamper detection and recovery for outsourced images have attracted considerable attention. A secure tampering detection and lossless recovery scheme for medical images (MI) using the permutation ordered binary (POB) number system is proposed. In the proposed scheme, the region of interest (ROI) of the MI is first extracted and divided into non-overlapping blocks, and the blocks are encoded with JPEG-LS, which offers good compression performance for medical images. The compressed data generated from all the blocks are then divided into high 4-bit and low 4-bit planes, and shuffling and combination are used to generate two plane images. Owing to the substantial redundancy in the compressed data, the data of each plane are spread to the size of the original image. Lastly, two bits of authentication data are computed for every pixel and inserted into the pixel itself within each plane, and the resulting 10-bit value is transformed into an 8-bit POB value. Finally, encryption is applied to produce two shares which can be outsourced to the cloud server. Users can detect tampered parts and recover the original image when they download the shares from the cloud. Extensive experiments on ordinary medical images and COVID-19 image datasets show that the proposed approach can locate the tampered parts within the MI, and that the original MI can be recovered without any loss even if one of the shares is totally destroyed, or both shares are tampered at a ratio of no more than 50%. Comparisons and analysis are given to show the better performance of the scheme.


10.
Nowadays, quality of service (QoS) is a popular topic in various research areas such as distributed systems, multimedia real-time applications and networking. The requirements of these systems are to satisfy reliability, uptime, security and throughput constraints as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints without interfering with control flows. In particular, video compressors produce variable-bit-rate streams that do not match the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. It is thus necessary to adapt the communication bandwidth used to transfer the compressed multimedia streams, using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE provides the automation needed to calculate the compression level and change the bandwidth of the stream. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic QoS management; the proposed framework provides dynamic QoS for multimedia transmission over Ethernet with the FTT-ESE protocol. The paper also presents distinct QoS metrics based on both image quality and network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution, we designed and implemented a simulator based on Matlab/Simulink, a tool for evaluating different network architectures using Simulink blocks.

11.
Multi-carrier code division multiple access (MC-CDMA) is a promising multi-carrier modulation (MCM) technique for high-data-rate wireless communication over frequency-selective fading channels. An MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM): the OFDM part reduces multipath fading and inter-symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness to multipath propagation and improved security with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. MAI is one of the factors that degrade the bit error rate (BER) performance of MC-CDMA systems. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the BER performance of MC-CDMA systems and overcoming the effects of MAI. In this paper, a low-complexity iterative soft sensitive bits algorithm (SBA) aided by logarithmic maximum a posteriori (Log-MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low decoding complexity by mitigating the detrimental effects of MAI.

12.
Fast and computationally inexpensive feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in computer vision research. The types of features used in current studies concerning moving object detection are typically chosen to improve the detection rate rather than to provide fast and computationally inexpensive feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels, and motion estimation is a measurement of motion pixel intensity, this research uses that relationship to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
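A minimal sketch of moment-based block features follows; the raw-moment definition is standard, while the centroid feature set and the block-change test are assumptions made for illustration, not the MFEA itself.

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment M_pq = sum_x sum_y x^p * y^q * I(x, y)."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    return float(np.sum((x ** p) * (y ** q) * img))

def moment_features(block):
    """Low-order moment features for one image block (illustrative set)."""
    m00 = raw_moment(block, 0, 0) + 1e-9
    cx = raw_moment(block, 1, 0) / m00        # intensity centroid, x
    cy = raw_moment(block, 0, 1) / m00        # intensity centroid, y
    return np.array([m00, cx, cy])

def block_changed(prev_block, cur_block, tol=5.0):
    """Crude motion cue: compare block moments between consecutive frames."""
    return np.linalg.norm(moment_features(cur_block) - moment_features(prev_block)) > tol
```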

13.
In this paper, we study various lossless compression techniques for electroencephalograph (EEG) signals. We discuss a computationally simple pre-processing technique in which the EEG signal is arranged in the form of a matrix (2-D) before compression. We then discuss a two-stage coder to compress the EEG matrix, with a lossy coding layer (SPIHT) and a residual coding layer (arithmetic coding). This coder is optimally tuned to utilize the source memory and the i.i.d. nature of the residual. We also investigate and compare EEG compression with other schemes such as the JPEG2000 image compression standard, predictive coding based Shorten, and simple entropy coding. The compression algorithms are tested with the University of Bonn database and the PhysioBank Motor/Mental Imagery database. 2-D based compression schemes yielded higher lossless compression than standard vector-based compression, predictive and entropy coding schemes. The pre-processing technique resulted in a 6% improvement, and the two-stage coder yielded a further improvement of 3% in compression performance.
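A minimal sketch of the 2-D pre-processing step is shown below; the row width of 256 samples and the zero padding are assumptions made for illustration.

```python
import numpy as np

def to_matrix(eeg, width=256):
    """Arrange a 1-D EEG channel into a 2-D matrix (row-wise), zero-padding
    the tail so an image-style coder such as SPIHT can exploit inter-row
    correlation. The width of 256 samples is an arbitrary choice here."""
    n_rows = int(np.ceil(eeg.size / width))
    padded = np.zeros(n_rows * width, dtype=eeg.dtype)
    padded[:eeg.size] = eeg
    return padded.reshape(n_rows, width)

eeg = np.random.randn(4097)          # e.g. one University of Bonn segment
matrix = to_matrix(eeg)
print(matrix.shape)                  # -> (17, 256)
```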

14.
In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Fields (CRF) and statistical conditional sampling in order to further improve the compression rate while maintaining high video quality. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution, while the remaining frames within the shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling, which is based on the conditional probability of the CRF model together with the constructed dictionary. Experimental results show that the proposed variable-resolution approach has potential for improving compression rates compared to compressing the video at full resolution, while achieving higher video quality than compressing the video at reduced resolution.

15.
Wide interest has been observed in medical healthcare applications that interpret neuroimaging scans using machine learning systems. This research proposes an intelligent, automatic, accurate, and robust technique to classify human brain magnetic resonance images (MRI) as normal or abnormal, in order to reduce human error when identifying diseases in brain MRIs. In this study, the fast discrete wavelet transform (DWT), principal component analysis (PCA), and a least squares support vector machine (LS-SVM) are used as basic components. Firstly, the fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensionality of the features; these reduced feature vectors also shrink memory storage consumption by 99.5%. Finally, an LS-SVM-based classifier is applied to brain MR image classification using the reduced features. To improve efficiency, the LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel, and k-fold stratified cross-validation is applied to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of the experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieved a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, improving efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities for individual subjects and can therefore be used as a significant tool in clinical practice.
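A simplified version of such a DWT-PCA-SVM pipeline is sketched below using pywt and scikit-learn; note that scikit-learn's standard SVC with an RBF kernel is used as a stand-in for the LS-SVM, and the data, wavelet, decomposition level and hyper-parameters are placeholders rather than the paper's settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_features(img, wavelet="haar", level=3):
    """Use the level-3 approximation sub-band of a 2-D DWT as the feature vector."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[0].ravel()

# X_imgs: list of 2-D MRI slices, y: 0 = normal, 1 = abnormal (placeholder data)
X_imgs = [np.random.rand(64, 64) for _ in range(40)]
y = np.array([0, 1] * 20)

X = np.array([dwt_features(im) for im in X_imgs])
X = PCA(n_components=10).fit_transform(X)        # dimensionality reduction

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # stand-in for the LS-SVM classifier
print(cross_val_score(clf, X, y, cv=5).mean())   # k-fold cross-validation
```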

16.
With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field, and human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly focuses on background subtraction for the RGB and depth video sequences, extraction and integration of history images of the behavior outlines, feature extraction, and classification. The new 3D human behavior recognition method achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method is faster and achieves a higher recognition rate, and that it is robust to different environmental colors, lighting and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks.
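As an illustration of the texture side of such features, a basic 3x3 local binary pattern coder is sketched below; it is the plain LBP, not the paper's hybrid texture-edge pattern, and the uniform-pattern mapping is omitted.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern: each pixel is coded by comparing its
    8 neighbours with the centre, giving an 8-bit texture code."""
    g = gray.astype(np.int32)
    codes = np.zeros_like(g)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(g, -dy, axis=0), -dx, axis=1)
        codes |= ((shifted >= g).astype(np.int32) << bit)
    return codes[1:-1, 1:-1]      # drop the border where np.roll wraps around

def lbp_histogram(gray, bins=256):
    """Normalised LBP histogram, usable as a texture feature for a classifier."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return h / h.sum()
```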

17.
In this paper, two novel and simple wavelet-threshold-based ECG compression algorithms, target distortion level (TDL) and target data rate (TDR), are proposed for real-time applications. The issues in using objective error measures, such as percentage root-mean-square difference (PRD) and root-mean-square error (RMSE), as quality measures in quality-controlled/guaranteed algorithms are investigated with different sets of experiments. For the proposed TDL and TDR algorithms, data rate variability and reconstructed signal quality are evaluated under different ECG signal test conditions. Experimental results show that the TDR algorithm achieves the compressed data rate required to meet the demands of a wired/wireless link, while the TDL algorithm does not. The compression performance is assessed in terms of the number of iterations required to achieve convergence and accuracy, reconstructed signal quality and coding delay. The reconstructed signal quality is evaluated by a correct diagnosis (CD) test through visual inspection. Three sets of ECG data from three different databases are used in this work: the MIT-BIH Arrhythmia database (mita) (Fs = 360 Hz, 11 bits/sample), the Creighton University Ventricular Tachyarrhythmia database (cuvt) (Fs = 250 Hz, 12 bits/sample) and the MIT-BIH Supraventricular Arrhythmia database (mitsva) (Fs = 128 Hz, 10 bits/sample). For each set of ECG data, the compression ratio (CR) range is defined. A CD value of 100% is achieved for CR ≤ 12, CR ≤ 8 and CR ≤ 4 for data from the mita, cuvt and mitsva databases, respectively. The experimental results demonstrate that the proposed TDR algorithm is suitable for real-time applications.
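The sketch below illustrates the flavor of a target-distortion-level scheme by bisecting a wavelet threshold until a target PRD is just met; it uses pywt and is not the authors' TDL/TDR algorithm — the wavelet, decomposition level, iteration count and the synthetic test signal are assumptions.

```python
import numpy as np
import pywt

def prd(x, x_rec):
    """Percentage root-mean-square difference."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compress_to_target_prd(x, target_prd=2.0, wavelet="db4", level=5, iters=20):
    """Bisect a hard threshold on the wavelet coefficients until the
    reconstructed signal just meets the target distortion level."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    hi = max(np.abs(c).max() for c in coeffs)
    lo, best = 0.0, coeffs
    for _ in range(iters):
        thr = (lo + hi) / 2.0
        trial = [pywt.threshold(c, thr, mode="hard") for c in coeffs]
        x_rec = pywt.waverec(trial, wavelet)[: x.size]
        if prd(x, x_rec) <= target_prd:
            best, lo = trial, thr        # distortion still acceptable: raise threshold
        else:
            hi = thr                     # too much distortion: lower threshold
    return best

x = np.sin(2 * np.pi * np.linspace(0, 5, 2048)) + 0.05 * np.random.randn(2048)
kept = compress_to_target_prd(x)
print("nonzero coefficients:", sum(int(np.count_nonzero(c)) for c in kept))
```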

18.
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient peer-to-peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because of the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. To address these challenges, this study introduces MATIN, a random-network-coding-based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that ensures there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry, instead of n, into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN allows random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNeT++ show that it substantially outperforms RNC with Gauss-Jordan elimination by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

19.
A mobile-terminal-based survey method for rice planthoppers in paddy fields (cited 2 times: 0 self-citations, 2 by others)
Objective: To establish a mobile-terminal-based survey method for rice planthoppers in paddy fields, in order to reduce the workload of forecasting personnel, improve the objectivity of planthopper surveys, and make the survey results traceable. Methods: An image acquisition instrument for paddy-field planthoppers was developed using an Android camera, a telescopic hand-held pole, and an Android phone running a camera-control app. In the Android development environment, socket communication and video coding techniques were used to implement the camera's video capture and encoding module, video transmission module, and camera command-control module. Android NDK development and Java web technologies were used to implement the phone-side video preview, phone control, and image upload modules. Video captured by the camera in real time is compressed into H.264 format and transmitted to the phone under the control of the RTSP/RTP protocols. After decompression, the phone previews the camera video in real time, controls the camera to photograph planthoppers at the base of rice stems, and receives the captured images. The planthopper recognition algorithm is deployed on a cloud server: selected planthopper images can be uploaded from the phone to the cloud server, which runs the automatic recognition algorithm and returns the results to the phone. Results: With the mobile-terminal-based method, the phone can preview in real time the planthoppers at the base of rice stems captured by the camera and control the camera to take photographs. The automatic recognition algorithm on the cloud server achieved an average detection rate of 86.9% for planthoppers in the images with a false alarm rate of 11.2%, and an average detection rate of 81.7% with a false alarm rate of 16.6% for the different developmental stages of planthoppers. Conclusion: The mobile-terminal-based survey method conveniently collects images of planthoppers at the base of rice stems and enables recognition and counting of the different planthopper developmental stages. It can greatly reduce the workload of forecasting personnel, avoid the subjectivity of field surveys, and make field surveys of rice planthoppers traceable.

20.
Genome data are becoming increasingly important for modern medicine. As the growth rate of DNA sequencing outstrips the growth rate of disk storage capacity, the storage and transfer of large genome data sets are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm that highlights the synthesis of complementary contextual models to improve compression performance. The proposed framework can handle genome compression with and without reference sequences, and demonstrates performance advantages over the best existing algorithms. The reference-free compression method achieves bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which are approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with a reference, we tested the method on the first Korean personal genome sequence data set, and it demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a decompression cost comparable to existing algorithms. DNAcompact is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes.
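As a small illustration of contextual modelling for DNA, the sketch below estimates the bits per base an adaptive order-k context model with Laplace smoothing would spend on a sequence; the model order and smoothing are assumptions for illustration and this is not the paper's two-pass synthesis of complementary models.

```python
import math
from collections import defaultdict

def context_model_bits_per_base(seq, order=2):
    """Estimate the bits/base an adaptive order-k context model with Laplace
    smoothing would spend on a DNA sequence. Encoder and decoder update the
    same counts incrementally, so no side information is needed."""
    counts = defaultdict(lambda: defaultdict(int))
    total_bits = 0.0
    for i in range(order, len(seq)):
        ctx, sym = seq[i - order:i], seq[i]
        seen = counts[ctx]
        p = (seen[sym] + 1) / (sum(seen.values()) + 4)   # 4-letter alphabet {A,C,G,T}
        total_bits += -math.log2(p)
        seen[sym] += 1
    return total_bits / max(1, len(seq) - order)

print(context_model_bits_per_base("ACGT" * 2000))   # highly repetitive input codes cheaply
```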
