Similar Documents
20 similar documents found (search time: 31 ms)
1.
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the High Efficiency Video Coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform-coefficient adjustment and a quantization parameter (QP) selection process is designed to encode ROIs and non-ROIs differently. Experimental results demonstrate that the proposed optimization strategy significantly improves coding performance, achieving an average BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy the low-bit-rate compression requirements of modern medical communication systems.
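The core of such ROI/non-ROI hierarchical coding is assigning coarser quantization to blocks outside the diagnostic region. A minimal sketch of one such QP selection rule, assuming a hypothetical `assign_qp` helper and a simple linear mapping (not the paper's actual method):

```python
def assign_qp(base_qp, roi_ratio, max_offset=6):
    """Map a block's ROI coverage ratio (0.0-1.0) to a quantization
    parameter: blocks fully inside the ROI keep base_qp, while non-ROI
    blocks are quantized up to max_offset steps more coarsely."""
    offset = round(max_offset * (1.0 - roi_ratio))
    return min(base_qp + offset, 51)  # 51 is the maximum QP in HEVC
```

Raising QP by one step roughly doubles the quantization step every six steps, which is why even a small offset saves substantial bitrate in non-diagnostic background.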

2.
The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor, H.264. However, encoding time complexity has also increased several-fold, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficiently selecting appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a saliency feature based on human visual attention modelling and motion features based on phase correlation. The features are combined through a fusion process using a content-based adaptive weighted cost function to determine the region-with-dominant-motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC-recommended block partitioning to estimate a subset of inter-prediction modes. Rather than exhaustively exploring all modes available in the HEVC standard, only the selected subset of modes is motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences.
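Matching a block's binary motion/saliency pattern against a codebook of templates is, at its simplest, a nearest-neighbour search under Hamming distance. A minimal sketch, with a hypothetical `nearest_template` helper and toy bit-string patterns standing in for the paper's RDMS patterns:

```python
def nearest_template(pattern, codebook):
    """Return the index of the codebook template closest to the block's
    binary motion/saliency pattern, measured by Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: hamming(pattern, codebook[i]))
```

The matched template index would then select which subset of inter-prediction modes the encoder actually evaluates.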

3.
In High Efficiency Video Coding (HEVC), the coding tree contributes to excellent compression performance, but it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for fast HEVC coding unit (CU) encoding. First, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Second, a CU coding tree probability model is proposed for modeling and predicting the CU distribution. Finally, a CU coding tree probability update is proposed to address the probabilistic-model distortion caused by CC. Experimental results show that the proposed low-complexity CU coding tree mechanism significantly reduces encoding time: by 27% for lossy coding and by 42% for visually lossless and lossless coding. The proposed mechanism thus improves coding performance under a variety of application conditions.
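A probability model that must track content change typically decays old statistics in favour of recent split decisions. One common way to do this, shown here as an illustrative exponentially weighted update (the function and the value of `alpha` are assumptions, not the paper's actual update rule):

```python
def update_split_prob(prob, observed_split, alpha=0.05):
    """Exponentially weighted update of a CU split probability toward the
    most recent observed split decision, letting the model adapt when the
    content changes."""
    return (1 - alpha) * prob + alpha * (1.0 if observed_split else 0.0)
```

The encoder could then skip evaluating the split branch whenever the predicted split probability falls below some threshold.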

4.
Conducting tele-3D computer-assisted operations, as well as other telemedicine procedures, often requires the highest possible quality of transmitted medical images and video. Unfortunately, these data types are associated with high telecommunication and storage costs that sometimes prevent more frequent use of such procedures. We present a novel algorithm for lossless compression of medical images that substantially reduces telecommunication and storage costs. The algorithm models the image properties around the current, unknown pixel and adjusts itself to the local image region. The main contribution of this work is the enhancement of the well-known approach of predictor blends through highly adaptive determination of the blending context on a pixel-by-pixel basis using a classification technique. We show that this approach is well suited to medical image data compression. Results obtained with the proposed compression method on medical images are very encouraging, beating several well-known lossless compression methods. The proposed predictor can also be used in other image processing applications, such as segmentation and extraction of image regions.
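A predictor blend combines several simple causal predictors of the current pixel into one estimate. A minimal sketch with fixed weights (the paper instead adapts the blend per pixel via classification; the `blended_predictor` name and weights here are illustrative assumptions):

```python
def blended_predictor(w, n, nw, weights=(0.35, 0.35, 0.30)):
    """Blend three simple causal predictors of the current pixel:
    the west neighbour, the north neighbour, and the planar (gradient)
    predictor w + n - nw."""
    predictions = (w, n, w + n - nw)
    return sum(wt * p for wt, p in zip(weights, predictions))
```

In a lossless coder, only the residual (actual pixel minus prediction) is entropy-coded, so a better blend directly shrinks the bitstream.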

5.
This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD addresses two security concepts: data encryption and data concealment. During the embedding process, the secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security for the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, offering a better trade-off between imperceptibility and payload than previous techniques while ensuring a minimal bitrate increase and negligible degradation of PSNR values.
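Keyed pseudo-random selection works because embedder and extractor seed identical generators and therefore derive identical coefficient positions. A minimal sketch of such position selection within an 8 × 8 block (the `select_candidates` helper and its parameters are illustrative assumptions, not the paper's exact procedure):

```python
import random

def select_candidates(count, key, block_size=64):
    """Pick 'count' AC-coefficient positions inside an 8x8 block (indices
    1..63; index 0 is the DC term, which is skipped) using a keyed PRNG,
    so a decoder holding the same key recovers the same positions."""
    rng = random.Random(key)
    return sorted(rng.sample(range(1, block_size), count))
```

Without the key, an attacker cannot tell which of the 63 AC positions in each block carry payload bits.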

6.
In this paper we propose a novel saliency-based computational model for visual attention. The model processes both top-down (goal-directed) and bottom-up information. Processing in the top-down channel creates the so-called skin conspicuity map and emulates the visual search for human faces performed by humans. This is clearly a goal-directed task, but it is generic enough to be context-independent. Processing in the bottom-up channel follows the principles set by Itti et al., but deviates from them by computing the orientation, intensity and color conspicuity maps within a unified multi-resolution framework based on wavelet subband analysis. In particular, we apply a wavelet-based approach for efficient computation of the topographic feature maps. Given that wavelets and multiresolution theory are naturally connected, using wavelet decomposition to mimic the center-surround process in humans is an obvious choice. However, our implementation goes further: we utilize the wavelet decomposition for inline computation of the features (such as orientation angles) that are used to create the topographic feature maps. The bottom-up topographic feature maps and the top-down skin conspicuity map are then combined through a sigmoid function to produce the final saliency map. A prototype of the proposed model was realized on the TMDSDMK642-0E DSP platform as an embedded system allowing real-time operation. For evaluation purposes, in terms of perceived visual quality and video compression improvement, an ROI-based video compression setup was used. Extended experiments with both MPEG-1 and low-bit-rate MPEG-4 video encoding showed significant improvement in video compression efficiency without perceived deterioration in visual quality.
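The final sigmoid fusion squashes the combined channel responses into a bounded saliency value. A per-pixel sketch of such a combination, assuming normalized inputs in [0, 1] and illustrative `gain`/`bias` constants (the paper's exact fusion parameters are not given here):

```python
import math

def combine_saliency(bottom_up, top_down, gain=6.0, bias=0.5):
    """Fuse a normalized bottom-up conspicuity value and a top-down skin
    conspicuity value through a sigmoid to produce a final saliency
    in (0, 1)."""
    s = 0.5 * (bottom_up + top_down)  # simple average of the two channels
    return 1.0 / (1.0 + math.exp(-gain * (s - bias)))
```

The sigmoid's gain controls how sharply mid-range responses are pushed toward salient/non-salient extremes before the map is thresholded into ROIs.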

7.
Thin sections of biological tissue embedded in plastic and cut with an ultramicrotome do not generally display useful details smaller than approximately 50 Å in the electron microscope. However, there is evidence that before sectioning the embedded tissue can be substantially better preserved, which suggests that cutting is when major damage and loss of resolution occur. We show here a striking example of such damage in embedded insect flight muscle fibres. X-ray diffraction of the embedded muscle gave patterns extending to 13 Å, whereas sections cut from the same block showed only approximately 50 Å resolution. A possible source of this damage is the substantial compression imposed on sections during cutting. An oscillating-knife ultramicrotome eliminates the compression, and it seemed possible that sections cut with such a knife would show substantially improved preservation. We used the oscillating knife to cut sections from the embedded muscle and from embedded catalase crystals. Preservation with and without oscillation was assessed in Fourier transforms of micrographs. Sections cut with the knife oscillating did not show improved preservation over those cut without. Thus compression during cutting does not appear to be the major source of damage in plastic sections, which leaves the 50 Å versus 13 Å discrepancy between block and section preservation unexplained. The results nevertheless suggest that improvements in ultramicrotomy will be important for bringing thin-sectioning and tomography of plastic-embedded cells and tissues to the point where macromolecule shapes can be resolved.

8.
In this paper, two novel and simple wavelet-threshold-based ECG compression algorithms, target distortion level (TDL) and target data rate (TDR), are proposed for real-time applications. The use of objective error measures, such as percentage root-mean-square difference (PRD) and root-mean-square error (RMSE), as quality measures in quality-controlled/guaranteed algorithms is investigated through several sets of experiments. For the proposed TDL and TDR algorithms, data-rate variability and reconstructed signal quality are evaluated under different ECG signal test conditions. Experimental results show that the TDR algorithm achieves the compression data rate required by wired/wireless links, while the TDL algorithm does not. Compression performance is assessed in terms of the number of iterations required to achieve convergence and accuracy, reconstructed signal quality, and coding delay. Reconstructed signal quality is evaluated by a correct diagnosis (CD) test through visual inspection. Three sets of ECG data from three different databases, the MIT-BIH Arrhythmia (mita) (Fs = 360 Hz, 11 b/sample), the Creighton University Ventricular Tachyarrhythmia (cuvt) (Fs = 250 Hz, 12 b/sample) and the MIT-BIH Supraventricular Arrhythmia (mitsva) (Fs = 128 Hz, 10 b/sample), are used in this work. For each set of ECG data, the compression ratio (CR) range is defined. A CD value of 100% is achieved for CR ≤ 12, CR ≤ 8 and CR ≤ 4 for data from the mita, cuvt and mitsva databases, respectively. The experimental results demonstrate that the proposed TDR algorithm is suitable for real-time applications.
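PRD, the distortion measure these quality-controlled algorithms target, is the energy of the reconstruction error relative to the energy of the original signal. A minimal sketch of the standard definition (sequence types and normalization without mean removal are assumptions; some ECG papers subtract the signal mean first):

```python
def prd(original, reconstructed):
    """Percentage root-mean-square difference between an original ECG
    segment and its reconstruction (lower is better)."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * (num / den) ** 0.5
```

A TDL-style loop would adjust the wavelet threshold until this value falls at or below the requested distortion target.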

9.
In this study, we investigate a variable-resolution approach to video compression based on conditional random fields (CRFs) and statistical conditional sampling, with the goal of further improving the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has the potential to improve compression rates compared to compressing the video at full resolution, while achieving higher video quality than compressing the video at reduced resolution.

10.
In video sequence-based iris recognition systems, the problem of making full use of the relationships and correlations among frames remains to be solved. A brand-new template-level multimodal fusion algorithm inspired by human cognition is proposed. In it, a non-isolated geometrical manifold, named the Hyper Sausage Chain due to its sausage shape, is trained using the frames from a pattern class to represent an iris class in feature space. Any input iris can then be classified by observing which manifold it falls in. This process is closer to human cognition, which takes 'matter cognition' rather than 'matter classification' as its basic principle. Experiments on the self-developed JLUBR-IRIS dataset, with several video sequences per person, demonstrate the effectiveness and usability of the proposed algorithm for video sequence-based iris recognition. Furthermore, comparative experiments on the public CASIA-I and CASIA-V4-Interval datasets show that our method can also improve the performance of image-based iris recognition systems, provided enough samples are involved in the training stage.

11.
This paper adds volume-deformation capability to the mass-spring chain method using tetrahedral elements, in order to obtain the more realistic deformations that occur during interactions between medical tools and soft tissues. The mass-spring chain method originally does not consider volume information and performs deformation by moving and deforming individual springs of a deformable model. However, most applications in computer graphics require volume modelling using tetrahedrons. In the proposed method, the deformation algorithm loops through the tetrahedrons and performs deformation based on defined rules similar to those of the original mass-spring chain method. Because it does not rely on any pre-computed quantities, this method can handle not only ordinary deformation applications but also those with topology changes, such as cutting and tearing. A method to preserve the volume and shape of the tetrahedral elements is also developed. To speed up the new version of the algorithm, a tetrahedral propagation scheme for deformation is developed. The detailed implementation of the algorithm and various applications involving organ–surgery tool interactions are presented. The paper also provides animations of the different models obtained by the proposed method.
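Underlying any mass-spring model is the Hooke's-law force along each spring. A minimal 3-D sketch of that force (this shows only the generic spring term, not the paper's tetrahedral rules or volume-preservation method, and it assumes the two endpoints are not coincident):

```python
def spring_force(p_a, p_b, rest_length, stiffness):
    """Hooke's-law force exerted on particle a by a spring joining it to
    particle b; points are 3-tuples of coordinates."""
    d = [b - a for a, b in zip(p_a, p_b)]        # vector from a toward b
    length = sum(c * c for c in d) ** 0.5        # current spring length
    scale = stiffness * (length - rest_length) / length
    return [scale * c for c in d]                # pulls a toward b if stretched
```

Looping this force over the edges of each tetrahedron, rather than over isolated chains, is what lets a tetrahedral model respond volumetrically.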

12.
Vector quantization plays an important role in many signal processing problems, such as speech/speaker recognition and signal compression. This paper presents an unsupervised algorithm for vector quantizer design. Although the proposed method is inspired by Kohonen learning, it does not incorporate the classical definition of a topological neighborhood as an array of nodes. Simulations are carried out to compare the performance of the proposed algorithm, named SOA (self-organizing algorithm), to that of the traditional LBG (Linde-Buzo-Gray) algorithm. The authors present an evaluation of codebook design for Gauss-Markov and Gaussian sources, since the theoretical optimal performance bounds for these sources, as described by Shannon's rate-distortion theory, are known. In speech and image compression, SOA codebooks lead to reconstructed (vector-quantized) signals of better quality than those obtained using LBG codebooks. Additionally, the influence of the initial codebook on algorithm performance is investigated, and the algorithm's ability to learn representative patterns is evaluated. In a speaker identification system, it is shown that the codebooks designed by SOA lead to higher identification rates than those designed by LBG.
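The LBG baseline the paper compares against alternates nearest-codeword assignment with centroid updates until the codebook stops moving. A one-dimensional sketch (real vector quantizers work on multi-dimensional vectors and add a distortion-based stopping rule; the fixed iteration count here is a simplification):

```python
def lbg(samples, codebook, iterations=20):
    """One-dimensional Linde-Buzo-Gray refinement: assign every sample to
    its nearest codeword, then move each codeword to its cell's centroid."""
    for _ in range(iterations):
        cells = [[] for _ in codebook]
        for x in samples:
            nearest = min(range(len(codebook)),
                          key=lambda i: (x - codebook[i]) ** 2)
            cells[nearest].append(x)
        # empty cells keep their previous codeword
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook
```

Because each step only moves codewords to local centroids, LBG is sensitive to the initial codebook, which is exactly the sensitivity the abstract says SOA is evaluated against.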

13.
Nowadays, quality of service (QoS) is very popular in various research areas, such as distributed systems, multimedia real-time applications and networking. The requirements of these systems are to satisfy reliability, uptime, security and throughput constraints, as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints across networks without interfering with control flows. In particular, video compressors produce variable-bit-rate streams that mismatch the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. Thus, it is necessary to enlarge the communication bandwidth to transfer compressed multimedia streams using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE automatically calculates the compression level and changes the bandwidth of the stream. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic QoS management: the proposed framework provides dynamic QoS for multimedia transmission over Ethernet using the FTT-ESE protocol. The paper also presents distinct QoS metrics based on both image quality and network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution, we designed and implemented a simulator based on Matlab/Simulink, a tool for evaluating different network architectures using Simulink blocks.

14.
Video sensors with embedded compression offer significant energy savings in transmission but incur energy losses in the complexity of the encoder. Energy-efficient video compression architectures for CMOS image sensors with focal-plane change detection are presented and analyzed. The compression architectures use pixel-level computational circuits to minimize energy usage by selectively processing only pixels that generate significant temporal intensity changes. Using temporal intensity change detection to gate the operation of a differential DCT-based encoder achieves nearly identical image quality to traditional systems (a 4 dB decrease in PSNR) while reducing the amount of data processed by 67% and reducing overall power consumption by 51%. These typical energy savings, resulting from the sparsity of motion activity in the visual scene, demonstrate the utility of focal-plane change-triggered compression for surveillance vision systems.
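The gating idea is simply a thresholded temporal difference: only pixels whose intensity changed enough are forwarded to the encoder. A software sketch of that decision (in the paper this happens in pixel-level analog circuits; the flat-list frame representation and `changed_pixels` helper here are illustrative assumptions):

```python
def changed_pixels(prev_frame, curr_frame, threshold):
    """Indices of pixels whose temporal intensity change exceeds the
    threshold; only these would feed the differential DCT encoder stage."""
    return [i for i, (p, c) in enumerate(zip(prev_frame, curr_frame))
            if abs(c - p) > threshold]
```

When motion is sparse, this list is short, which is the source of the reported 67% data reduction.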

15.
In this paper, a novel watershed approach based on seed region growing and image entropy is presented that can improve medical image segmentation. The proposed algorithm incorporates the prior information of seed region growing and image entropy into its calculation. The algorithm starts by partitioning the image into several intensity levels using a watershed multi-degree immersion process. These intensity levels are the input to a computationally efficient seed-region segmentation process, which produces the initial partitioning of the image regions. These regions are fed to an entropy procedure that carries out suitable merging and produces the final segmentation. The latter process uses a region-based similarity representation of the image regions to decide whether regions can be merged. Each region is isolated from its level and the residual pixels are passed up to the next level, and so on; we refer to this as the multi-level process and to the watershed as the multi-level watershed. The proposed algorithm is applied to a challenging application: grey matter–white matter segmentation in magnetic resonance images (MRIs). The established methods and the proposed approach are compared on this application across a variety of configurations: simulated immersion, multi-degree, multi-level seed region growing, and multi-level seed region growing with entropy. It is shown that the proposed method achieves more accurate results, reducing the oversegmentation typical of watershed methods on medical images.
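Entropy-driven merging needs a per-region entropy value computed from the region's intensity histogram. A minimal sketch of that quantity (how the paper combines entropies of candidate regions into a merge decision is not reproduced here):

```python
import math
from collections import Counter

def region_entropy(pixels):
    """Shannon entropy (bits/pixel) of a region's intensity histogram.
    Homogeneous regions score near zero, making them natural candidates
    for merging with similar neighbours."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())
```

Merging two regions only when the combined entropy stays low is one standard way to counter the oversegmentation that plain watershed produces.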

16.
A fracture callus in vivo tends to form in a structurally efficient manner, distributing tissue where mechanical stimulus persists. It is therefore proposed that the formation of a fracture callus can be modelled in silico by way of an optimisation algorithm. This was tested by generating a finite element model of a transverse bone fracture embedded in a large tissue domain, which was subjected to axial, bending and torsional loads. It was found that the relative fragment motion induced a compressive strain field in the early callus tissue, which could be utilised to simulate the formation of external callus structures through an iterative optimisation process of tissue maintenance and removal. The phenomenological results showed a high level of congruence with in vivo healing patterns found in the literature. Consequently, the proposed strategy shows potential as a means of predicting spatial bone-healing phenomena for pre-clinical testing.

17.
Data compression is concerned with how information is organized in data. Efficient storage means removing redundancy from the data stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats and reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
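The baseline all DNA compressors measure against is the naive fixed code: four bases need exactly 2 bits each. A minimal sketch of that packing (the mapping table is the conventional one, not necessarily the codes DNABIT Compress assigns to repeat fragments):

```python
BASE_CODE = {"A": "00", "C": "01", "G": "10", "T": "11"}

def pack_dna(sequence):
    """Fixed 2-bits-per-base packing of a DNA string. Repeat-aware codes
    such as DNABIT Compress must beat this 2.0 bits/base floor, e.g. by
    replacing exact and reverse repeats with short references."""
    return "".join(BASE_CODE[base] for base in sequence)
```

The reported 1.58 bits/base is meaningful precisely because it is below this 2 bits/base fixed-code bound.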

18.
An algorithm for identifying a stochastic neural system and estimating the system process that reflects the dynamics of the neural network is presented in this paper. An analogous algorithm was proposed in our preceding paper (Nakao et al., 1984), based only on randomly missed observations of a system process. Since that algorithm was subject to an unfavorable effect of consecutively missed observations, the algorithm proposed here is additionally designed to observe an intensity process in a neural spike train as information for the estimation, in order to reduce that effect. The algorithm is constructed with extended Kalman filters, because a nonlinear and time-variant structure is naturally expected to be necessary for the filters to realize the observation of an intensity process by means of a mapping from the system process to the intensity process. The performance of the algorithm is examined by applying it to some artificial neural systems and also to the cat visual nervous system. The results of these applications demonstrate the effectiveness of the proposed algorithm and its superiority to the previously proposed one.
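At the heart of any (extended) Kalman filter is the measurement update that blends a prior estimate with a noisy observation according to their variances. A scalar linear sketch (the paper's filters are extended, i.e. nonlinear and time-variant, which this minimal form does not capture):

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: prior state estimate x with
    variance p is corrected by observation z with noise variance r.
    Returns the posterior estimate and its reduced variance."""
    gain = p / (p + r)               # Kalman gain: trust the observation
    return x + gain * (z - x), (1.0 - gain) * p
```

With an uncertain prior and an accurate observation the gain approaches 1, which is how the intensity-process observation can compensate for runs of missed system-process samples.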

19.
Action recognition has become a hot topic in computer vision. However, the action recognition community has focused mainly on relatively simple actions such as clapping, walking and jogging. The detection of specific events with direct practical use, such as fights or aggressive behavior in general, has been comparatively less studied. Such a capability may be extremely useful in video surveillance scenarios such as prisons and psychiatric centers, or even embedded in camera phones. As a consequence, there is growing interest in developing violence detection algorithms. Recent work applied the well-known bag-of-words framework to the specific problem of fight detection. Under this framework, spatio-temporal features are extracted from the video sequences and used for classification. Despite encouraging results, in which high accuracy rates were achieved, the computational cost of extracting such features is prohibitive for practical applications. This work proposes a novel method for detecting violent sequences: features extracted from motion blobs are used to discriminate between fight and non-fight sequences. Although the method is outperformed in accuracy by the state of the art, its significantly faster computation time makes it amenable to real-time applications.

20.
Nowadays, complex smartphone applications are being developed that support gaming, navigation, video editing, augmented reality and speech recognition, all of which require considerable computational power and battery lifetime. Cloud computing provides a brand-new opportunity for the development of mobile applications: Mobile Hosts (MHs) are provided with data storage and processing services on a cloud computing platform rather than on the MHs themselves. To provide seamless connections and reliable cloud service, we focus on communication. When connections to the cloud server increase explosively, the connection quality of each MH declines, causing several problems such as network delay and retransmission. In this paper, we propose a proxy-based architecture to improve link performance for each MH in mobile cloud computing. With the proposed proxy, an MH need not maintain a connection to the cloud server, because it connects only to a proxy in the same subnet. We also propose an optimal access-network discovery algorithm to optimize bandwidth usage: when an MH changes its point of attachment, the discovery algorithm helps it connect to the optimal access network for cloud service. Experimental results and analysis show that the proposed connection management method outperforms the 802.11 access method.
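An access-network discovery step ultimately ranks the candidate networks visible at the new point of attachment. A minimal sketch of one plausible ranking rule (the `bandwidth`/`latency` fields and the tie-breaking policy are illustrative assumptions; the paper's actual metric is not reproduced here):

```python
def best_access_network(networks):
    """Choose the candidate access network with the highest estimated
    bandwidth, breaking ties by lower latency. Each candidate is a dict
    with 'name', 'bandwidth' (Mbit/s) and 'latency' (ms) fields."""
    return max(networks, key=lambda n: (n["bandwidth"], -n["latency"]))
```

A handover manager would re-run this selection each time the MH's set of reachable networks changes.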


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号