Similar Articles
20 similar articles found (search time: 31 ms)
1.
High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the High Efficiency Video Coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to encode ROIs and non-ROIs differently. Experimental results demonstrate that the proposed optimization strategy significantly improves coding performance, achieving an average BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy the low-bit-rate compression requirements of modern medical communication systems.
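The hierarchical ROI/non-ROI treatment can be pictured with a minimal sketch (not the paper's implementation): a per-CTU quantization parameter map driven by an ROI mask, where `roi_offset` and `bg_offset` are assumed tuning parameters rather than values from the paper.

```python
import numpy as np

def assign_ctu_qps(roi_mask, base_qp=32, roi_offset=-4, bg_offset=4, ctu=64):
    """Hierarchical QP map: finer quantization inside the diagnostic ROI.

    roi_mask is an HxW boolean array marking diagnostic pixels; the result
    is one QP per 64x64 CTU, clipped to the H.265/HEVC range [0, 51].
    """
    h, w = roi_mask.shape
    rows, cols = (h + ctu - 1) // ctu, (w + ctu - 1) // ctu
    qps = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = roi_mask[r * ctu:(r + 1) * ctu, c * ctu:(c + 1) * ctu]
            offset = roi_offset if block.mean() > 0.5 else bg_offset
            qps[r, c] = int(np.clip(base_qp + offset, 0, 51))
    return qps
```

A CTU is treated as ROI when more than half of its pixels fall inside the mask; the paper additionally adjusts transform coefficients, which this sketch omits.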

2.
The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor, H.264. However, encoding time has also increased severalfold, which is unsuitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficiently selecting appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a saliency feature based on human visual attention modelling and motion features based on phase correlation. The features are combined through a fusion process using a content-based adaptive weighted cost function to determine a region-with-dominant-motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates, aligned to the HEVC-recommended block partitioning, to estimate a subset of inter-prediction modes. Rather than exhaustively exploring all modes available in the HEVC standard, only the selected subset of modes is motion-estimated and motion-compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences.
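A minimal sketch of the fusion-and-matching idea (assumed details: the paper derives the weight adaptively from content, while `alpha` is fixed here, and `codebook` is a hypothetical list of named 4x4 binary templates):

```python
import numpy as np

def rdms_pattern(saliency, motion, alpha=0.5, grid=4):
    """Fuse saliency and motion maps for one block into a binary pattern.

    saliency and motion are HxW float maps with H and W divisible by grid;
    the block is reduced to grid x grid cell averages, thresholded at their
    mean to yield a 16-element binary pattern.
    """
    cost = alpha * saliency + (1.0 - alpha) * motion
    h, w = cost.shape
    cells = cost.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (cells > cells.mean()).astype(np.uint8).ravel()

def nearest_template(pattern, codebook):
    """Pick the partitioning template with minimum Hamming distance."""
    return min(codebook, key=lambda item: np.count_nonzero(pattern != item[1]))[0]
```

Each matched template would map to the subset of inter-prediction modes actually evaluated, which is where the encoding-time saving comes from.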

3.
This paper presents a high-payload watermarking scheme for High Efficiency Video Coding (HEVC). HEVC is an emerging video compression standard that provides better compression performance than its predecessor, H.264/AVC. Considering that HEVC is likely to be used in a variety of applications in the future, the proposed algorithm has high potential for use in applications involving broadcasting and metadata hiding. The watermark is embedded into the quantized transform coefficients (QTCs) during the encoding process. Later, during the decoding process, the embedded message can be detected and extracted completely. The experimental results show that the proposed algorithm does not significantly affect video quality, nor does it noticeably increase the bitrate.
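A minimal parity-based sketch of coefficient-domain embedding (an assumed LSB-style rule, not the paper's exact embedding algorithm):

```python
def embed_bits(qtcs, bits):
    """Embed watermark bits in the parity of nonzero quantized coefficients.

    Magnitudes are only ever increased, so embedded coefficients stay
    nonzero and the bit order is preserved for extraction.
    """
    out, it = list(qtcs), iter(bits)
    for i, c in enumerate(out):
        if c == 0:
            continue
        try:
            b = next(it)
        except StopIteration:
            break                       # message fully embedded
        sign, mag = (1 if c > 0 else -1), abs(c)
        if mag % 2 != b:                # force parity to carry the bit
            mag += 1
        out[i] = sign * mag
    return out

def extract_bits(qtcs, n):
    """Read back the first n embedded bits from nonzero coefficients."""
    return [abs(c) % 2 for c in qtcs if c != 0][:n]
```

For example, `extract_bits(embed_bits([3, 0, -1, 5], [1, 0, 1]), 3)` returns `[1, 0, 1]`.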

4.
Because of node mobility, limited computing power, and changing topology in dynamic networks, random linear network coding (RLNC) designed for static networks has difficulty satisfying the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects the blocks to be encoded on the basis of the relationships between those nonzero elements, which controls changes in the degrees of the blocks and thereby shortens encoding time in a dynamic network. Simulation results show that, compared with existing encoding algorithms, the MINC algorithm reduces the computational complexity of encoding and increases the probability of delivery.
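A hedged sketch of the minimal-increase idea (a plausible reading, not the published MINC): combine the pair of buffered blocks whose nonzero supports overlap most, so the degree of the resulting encoding vector grows as little as possible. Arithmetic is over an assumed prime field for readability (practical RLNC uses GF(2^8)); `buffered` is a list of at least two `(coeff_vector, payload_vector)` pairs.

```python
import random

PRIME = 257  # assumed prime modulus for this sketch

def degree(vec):
    """Degree of an encoding vector = its number of nonzero coefficients."""
    return sum(1 for x in vec if x != 0)

def minimal_increase_recode(buffered):
    """Combine the pair of blocks that adds the fewest new nonzero entries."""
    best = None
    for i in range(len(buffered)):
        for j in range(i + 1, len(buffered)):
            vi, vj = buffered[i][0], buffered[j][0]
            merged = sum(1 for a, b in zip(vi, vj) if a or b)
            gain = merged - max(degree(vi), degree(vj))  # extra nonzeros
            if best is None or gain < best[0]:
                best = (gain, i, j)
    _, i, j = best
    a, b = random.randrange(1, PRIME), random.randrange(1, PRIME)
    (vi, pi), (vj, pj) = buffered[i], buffered[j]
    vec = [(a * x + b * y) % PRIME for x, y in zip(vi, vj)]
    pay = [(a * x + b * y) % PRIME for x, y in zip(pi, pj)]
    return vec, pay
```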

5.
In this paper, we present a novel fractal coding method with a block classification scheme based on a shared domain block pool. In our method, the domain block pool, called the dictionary, is constructed from fractal Julia sets. The image is encoded by searching the dictionary for the best-matching domain block with the same BTC (Block Truncation Coding) value. The experimental results show that the scheme performs well in both encoding speed and reconstruction quality. Particularly for large images, the proposed method avoids the excessive growth in computational complexity of the traditional fractal coding algorithm.
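The classification step can be sketched as follows (a simplified illustration: real fractal coding also fits contrast and brightness parameters, which are omitted here; `dictionary` maps a BTC bitmap to candidate domain blocks):

```python
import numpy as np

def btc_bitmap(block):
    """BTC value of a block: the bitmap of pixels above the block mean."""
    return tuple((block > block.mean()).astype(np.uint8).ravel())

def best_match(range_block, dictionary):
    """Search only the dictionary entries sharing the block's BTC value.

    Restricting the search to one BTC class is what keeps encoding time
    from growing excessively with image size.
    """
    candidates = dictionary.get(btc_bitmap(range_block), [])
    if not candidates:
        return None
    errors = [float(((d - range_block) ** 2).mean()) for d in candidates]
    return candidates[int(np.argmin(errors))]
```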

6.
Deoxyribonucleic acid (DNA) is a natural information storage medium characterized by high storage density, long retention time, and low loss rates. As traditional storage methods fail to keep pace with the growth of information, DNA data storage has become an active research topic. DNA coding aims to store data error-free in as few base sequences as possible, and comprises three parts: compression (occupying as little space as possible), error correction (error-free storage), and conversion (translating digital information into base sequences). DNA coding is the key technology in DNA storage; its quality directly determines storage performance and the integrity of data reads and writes. This paper first reviews the history of DNA storage, then describes the DNA storage framework with an emphasis on DNA coding techniques, and finally discusses future directions for encoding and decoding in DNA storage.
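The conversion step can be illustrated with a Goldman-style rotating code (a toy sketch: real DNA-storage pipelines add compression and error correction around this step): digits in base 3 are each mapped to one of the three bases that differ from the previous base, so homopolymer runs cannot occur.

```python
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Convert digital data to a base sequence with no repeated bases."""
    # bytes -> one big integer -> base-3 digits (toy; real codes stream this
    # and preserve leading zero bytes, which this conversion drops)
    n = int.from_bytes(data, "big")
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    trits = trits[::-1] or [0]
    out, prev = [], None
    for t in trits:
        choices = [b for b in BASES if b != prev]  # the 3 allowed bases
        prev = choices[t]
        out.append(prev)
    return "".join(out)
```

Decoding reverses the walk: knowing the previous base fixes the three choices, so each base maps back to exactly one trit.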

7.
Based on the maximum entropy principle, and addressing the current shortage of probability distribution models for tree measurement attributes in mixed forests, a joint maximum entropy probability density function is proposed. The function has the following features: (1) each of its components is an interrelated maximum entropy function, so it can integrate the probability distribution information of the measurement attributes of each major component species in a mixed forest; (2) it is a probability expression with double weights, able to reflect the structural complexity of mixed forests; while making maximal use of the distribution information of each major species, it can also accurately and comprehensively describe the overall probability distribution of measurement attributes in the stand; and (3) its structure is concise and its performance good. The model was applied to and tested on mixed-forest plots in the Tianmu Mountain Nature Reserve; both the fitting accuracy (R² = 0.9655) and the validation accuracy (R² = 0.9772) were high, indicating that the joint maximum entropy probability density function can serve as a probability distribution model for tree measurement attributes in mixed forests and provides a feasible way to characterize mixed-forest stand structure.
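The abstract does not reproduce the function itself; under the stated features (maximum entropy components combined with two weights per species), one consistent form might be sketched as:

```latex
% A hedged reconstruction, not the paper's exact expression:
% f_i   - maximum entropy density of species i with feature functions \varphi_j
% N_i/N - mixing proportion of species i (first weight)
% w_i   - fitted structural weight (second weight)
f(x) = \sum_{i=1}^{k} w_i \,\frac{N_i}{N}\, f_i(x),
\qquad
f_i(x) = \exp\!\Big(\lambda_{i0} + \sum_{j=1}^{m} \lambda_{ij}\,\varphi_j(x)\Big),
\qquad
\sum_{i=1}^{k} w_i \,\frac{N_i}{N} = 1 .
```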

8.
9.
Neurons transmit spikes to postsynaptic neurons through synapses. Experimental observations indicate that communication between neurons is unreliable, yet most modelling and computational studies have assumed a deterministic synaptic interaction model. In this paper, we investigate population rate coding in an all-to-all coupled recurrent neuronal network consisting of both excitatory and inhibitory neurons connected by unreliable synapses. We use a stochastic on-off process to model the unreliable synaptic transmission. We find that synapses with a suitable transmission success probability can enhance encoding performance under weak noise, whereas under strong noise the synaptic interactions reduce encoding performance. We also show that several important synaptic parameters, such as the excitatory synaptic strength, the relative strength of inhibitory and excitatory synapses, and the synaptic time constant, have significant effects on the performance of population rate coding. Further simulations indicate that the encoding dynamics of the network cannot be determined simply by the average amount of neurotransmitter each neuron receives at a given instant. Moreover, we compare our results with those obtained in the corresponding random neuronal networks. Our numerical results demonstrate that network randomness has a qualitatively similar effect to synaptic unreliability, though the two are not quantitatively equivalent.
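The stochastic on-off process can be sketched in a toy integrate-and-fire update (illustrative constants, not the paper's parameters): each existing synapse transmits a presynaptic spike only with probability `p_success`.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(v, spiked, W, p_success, noise_std, dt=0.1, tau=10.0, v_th=1.0):
    """One Euler step of a leaky network with unreliable synapses.

    v: membrane potentials (N,); spiked: last-step spike indicators (N,);
    W: synaptic weight matrix (N, N), excitatory > 0, inhibitory < 0.
    """
    on = rng.random(W.shape) < p_success        # per-synapse on/off draw
    drive = (W * on) @ spiked                   # only 'on' synapses transmit
    noise = noise_std * np.sqrt(dt) * rng.standard_normal(v.size)
    v = v + dt * (-v / tau) + drive + noise
    spiked = (v >= v_th).astype(float)
    v = np.where(spiked > 0, 0.0, v)            # reset after a spike
    return v, spiked
```

Sweeping `p_success` under low and high `noise_std` gives a setting in which to explore the contrast the authors describe.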

10.
This paper focuses on how to effectively and efficiently measure visual similarity for local-feature-based representations. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel-based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel-based metrics, in which a local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HOG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel, efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel-based metrics and achieves linear computational complexity, enabling efficient and scalable visual matching on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments on widely used benchmark datasets, including the 15-Scenes, Caltech101/256, and PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
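The general shape of a coding-based matching kernel can be sketched with localized soft-assignment coding (an assumed, generic local coding scheme; the LCMK derivation differs in detail): each local feature is coded over its nearest codewords, codes are pooled per image, and the kernel reduces to a linear product, hence the linear complexity.

```python
import numpy as np

def encode(features, codebook, k=5, beta=10.0):
    """Localized soft assignment: weights over the k nearest codewords.

    features: (n, d) local descriptors; codebook: (K, d) codewords.
    """
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = np.zeros_like(d2)
    nearest = np.argsort(d2, axis=1)[:, :k]
    for i, idx in enumerate(nearest):
        w = np.exp(-beta * d2[i, idx])
        codes[i, idx] = w / w.sum()
    return codes

def matching_kernel(X, Y, codebook):
    """Similarity of two images' feature sets via pooled local codes."""
    px = encode(X, codebook).mean(axis=0)
    py = encode(Y, codebook).mean(axis=0)
    return float(px @ py)
```

With hard assignment (k = 1, uniform weight) this collapses to a BoV histogram kernel, illustrating how BoV arises as a special case of such a framework.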

11.
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by 'climbing' activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those of experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.
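The proposed coupling can be caricatured in a toy accumulator (purely illustrative, not the paper's spiking circuit): a 'climbing' timing signal ramps to its maximum over the learned interval and gain-modulates the integration of noisy evidence, so learning a shorter interval trades accuracy for speed.

```python
import numpy as np

def decision_time(evidence, interval, threshold=1.0, dt=0.001, seed=0):
    """Time to threshold for gain-modulated evidence integration.

    evidence: drift strength of the (noisy) spatial evidence;
    interval: the interval learned by the timing network.
    """
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while x < threshold and t < 10.0:       # 10 s cap: no-decision trials
        t += dt
        gain = min(t / interval, 1.0)       # climbing timing signal
        # assumption of this toy: noise is gain-scaled along with the drift
        x += gain * (evidence * dt + 0.1 * np.sqrt(dt) * rng.standard_normal())
    return t
```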

12.
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient peer-to-peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead of sending a large coefficients vector in each packet header has been the most important challenge of RNC. Moreover, employing the Gauss-Jordan elimination method imposes considerable computational complexity on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. To address these challenges, this study introduces MATIN, a random-network-coding-based framework for efficient P2P video streaming. MATIN includes a novel coefficients-matrix generation method that guarantees no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into each encoded packet, which results in very low transmission overhead. The inverse of the coefficients matrix can also be obtained with a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN makes random network coding more efficient in P2P video streaming systems. Simulation results obtained with OMNeT++ show that it substantially outperforms RNC based on Gauss-Jordan elimination, providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.
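One classical way to realize "one entry instead of n" with guaranteed independence (an assumption for illustration; MATIN's actual construction may differ) is a Vandermonde row generated from a single seed: distinct nonzero seeds always yield an invertible matrix, and the receiver regenerates each full row from the one transmitted entry.

```python
PRIME = 257  # assumed prime field for this sketch (practical RNC uses GF(2^8))

def vandermonde_row(seed, n):
    """Regenerate the coefficients row [1, s, s^2, ..., s^(n-1)] mod PRIME.

    A peer transmits only `seed` in the packet header; rows built from
    distinct nonzero seeds form a Vandermonde matrix, which is invertible,
    so linear dependency cannot occur.
    """
    row, x = [], 1
    for _ in range(n):
        row.append(x)
        x = (x * seed) % PRIME
    return row
```

Vandermonde systems can also be solved with specialized O(n²) algorithms rather than full Gauss-Jordan elimination, in the spirit of the low decoding cost claimed.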

13.
《IRBM》2020,41(1):2-17
In this work, computationally efficient and reliable cosine modulated filter banks (CMFBs) are designed for electrocardiogram (ECG) data compression. First, uniform and non-uniform CMFBs are designed using an interpolated finite impulse response (IFIR) prototype filter to reduce computational complexity. To reduce the reconstruction error, a linear iteration technique is applied to optimize the prototype filter. The non-uniform CMFB is then used for ECG data compression by decomposing the ECG signal into frequency bands. Thresholding is subsequently applied to truncate insignificant coefficients, with the threshold estimated by examining the significant energy of each band. Run-length encoding (RLE) is further utilized to improve compression performance. The method is applied to the MIT-BIH arrhythmia database for performance analysis. The experimental observations demonstrate that the proposed method achieves a high compression ratio with admirable signal reconstruction quality. The average values of compression ratio (CR), percent root-mean-square difference (PRD), normalized percent root-mean-square difference (PRDN), quality score (QS), correlation coefficient (CC), maximum error (ME), mean square error (MSE), and signal-to-noise ratio (SNR) are 23.86, 1.405, 2.55, 19.08, 0.999, 0.12, 0.054, and 37.611 dB, respectively. The proposed 8-channel uniform filter bank is also used to detect the R-peak locations of the ECG signal, and the comparative analysis shows that the beat locations and amplitudes of the original and reconstructed signals are the same.
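The thresholding and RLE stages can be sketched as follows (a minimal illustration: the retained energy fraction `keep_energy` is an assumed setting standing in for the paper's per-band estimate):

```python
import numpy as np

def threshold_band(coeffs, keep_energy=0.999):
    """Zero the smallest coefficients while retaining a fixed energy share."""
    order = np.argsort(np.abs(coeffs))[::-1]          # largest first
    energy = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
    n_keep = int(np.searchsorted(energy, keep_energy)) + 1
    out = np.zeros_like(coeffs)
    out[order[:n_keep]] = coeffs[order[:n_keep]]
    return out

def run_length_encode(x):
    """Emit (value, run_length) pairs; long zero runs compress well."""
    runs, i = [], 0
    while i < len(x):
        j = i
        while j < len(x) and x[j] == x[i]:
            j += 1
        runs.append((float(x[i]), j - i))
        i = j
    return runs
```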

14.
The long-term maintenance of a positive carbon balance under very low irradiance is a prerequisite for the survival of tree seedlings below the canopy or in small gaps in a tropical rainforest. To provide a quantitative basis for this assumption, experiments were carried out to determine whether construction cost (CC) and payback time for leaves and support structures, as well as leaf life span, (i) differ among species and (ii) display irradiance-elicited plasticity. Experiments were also conducted to determine whether leaf life span correlates with CC and payback time and is close to the optimal longevity derived from an optimization model. Saplings from 13 tropical tree species were grown under three levels of irradiance. Specific CC was computed, as well as CC scaled to leaf area at the metamer level. Photosynthesis was recorded over the leaf life span. Payback time was derived from CC and a simple photosynthesis model. Specific CC displayed only little interspecific variability and irradiance-elicited plasticity, in contrast to CC scaled to leaf area. Leaf life span ranged from 4 months to more than 26 months among species, and was longest in seedlings grown under the lowest irradiance. It was always much longer than payback time, even under the lowest irradiance. Leaves were shed when their photosynthesis had reached very low values, in contrast to what was predicted by an optimality model. The species ranking for the different traits was stable across irradiance treatments. The two pioneer species always displayed the smallest CC, leaf life span, and payback time. All species displayed a similarly large irradiance-elicited plasticity.
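For the relation between the measured quantities, payback time is the time needed for cumulative net carbon assimilation to repay the construction cost (a standard definition, assumed here to match the paper's usage):

```latex
% CC: construction cost scaled to leaf area (e.g. g glucose m^-2)
% A_net(t): net assimilation rate in the same units per unit time
\int_{0}^{\tau_{\mathrm{payback}}} A_{\mathrm{net}}(t)\,\mathrm{d}t \;=\; CC
\quad\Longrightarrow\quad
\tau_{\mathrm{payback}} \approx \frac{CC}{\bar{A}_{\mathrm{net}}}
\quad \text{for a roughly constant assimilation rate.}
```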

15.
高猛 《生态学报》2016,36(14):4406-4414
Nearest-neighbor methods are an effective class of techniques for analyzing plant spatial distribution patterns; the probability distribution model of neighbor distances, which describes their statistical characteristics, is among the most commonly used. For aggregated distribution patterns, however, the probability distribution model of (individual-to-individual) neighbor distances has a complicated expression, and its parameter estimation is computationally expensive. Based on the expectation and variance of the model, a simplified parameter-estimation method is proposed, with the optimization carried out by a genetic algorithm; the results show that the genetic algorithm can effectively estimate the model's two parameters. The model was also fitted to spatial distribution data of three boreal tree species on southern Vancouver Island, Canada. The model fit the neighbor-distance distributions of Douglas-fir (P. menziesii) and western hemlock (T. heterophylla) well, but the fit for western redcedar (T. plicata) was poor because of its highly aggregated, clustered distribution. Douglas-fir was distributed approximately randomly in the plot, and its spatial aggregation parameter depended only weakly on spatial scale, whereas the aggregation parameters of western redcedar and western hemlock were scale-dependent, increasing with the order of the neighbor distance. Finally, the advantages and limitations of the model and of the parameter-estimation method are discussed.
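The estimation strategy can be sketched with a generic real-coded genetic algorithm (the paper's GA settings are not given; population size, mutation scale, and tournament selection here are assumptions), where `loss` would measure the discrepancy between the model-implied and simplified-method moments for a candidate parameter pair:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ga(loss, bounds, pop=50, gens=200, mut=0.1):
    """Minimize loss(params) over box bounds with a simple GA.

    bounds: [(lo, hi), (lo, hi)] for the distribution's two parameters.
    """
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    P = lo + rng.random((pop, len(bounds))) * (hi - lo)
    for _ in range(gens):
        f = np.array([loss(p) for p in P])
        elite = P[np.argmin(f)].copy()
        # tournament selection: each child copies the better of two parents
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        P = np.clip(parents + mut * (hi - lo) * rng.standard_normal(parents.shape),
                    lo, hi)
        P[0] = elite                    # elitism: never lose the best so far
    f = np.array([loss(p) for p in P])
    return P[np.argmin(f)]
```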

16.
Morishita Y  Iwasa Y 《Biophysical journal》2011,101(10):2324-2335
Robust positioning of cells in a tissue against unavoidable noise is important for achieving normal and reproducible morphogenesis. Position in a tissue is represented by morphogen concentrations, which cells read to recognize their spatial coordinates. From the engineering viewpoint, these positioning processes can be regarded as information coding. Organisms are conjectured to adopt good coding designs with high reliability for a given number of available morphogen species and their chemical properties. To answer quantitatively how good a coding is adopted and, subsequently, when, where, and to what extent each morphogen contributes to positioning, we need a way to evaluate the goodness of a coding. In this article, by introducing basic concepts from computer science, we mathematically formulate coding processes in morphogen-dependent positioning and define key concepts such as encoding, decoding, and positional information and its precision. We demonstrate the best designs for pairs of encoding and decoding rules, and show with examples how those designs can be biologically implemented. We also propose a possible data-analysis procedure to validate the coding optimality formulated here.
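A standard formalization consistent with the abstract (not necessarily the authors' exact definitions): encoding maps position to morphogen concentrations, decoding is a probabilistic inverse, and positional precision follows from error propagation.

```latex
% encoding: concentration readout of position x with noise \xi_i
c_i = g_i(x) + \xi_i ,
\qquad
% decoding: maximum a posteriori position estimate from readout c
\hat{x}(c) = \arg\max_{x}\, p(x \mid c) ,
\qquad
% precision (single-morphogen case): steeper gradients localize better
\sigma_x(x) \approx \frac{\sigma_c(x)}{\left|\,\mathrm{d}g/\mathrm{d}x\,\right|} .
```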

17.
MORGAN is an integrated system for finding genes in vertebrate DNA sequences. MORGAN uses a variety of techniques to accomplish this task, the most distinctive of which is a decision tree classifier. The decision tree system is combined with new methods for identifying start codons, donor sites, and acceptor sites, and these are brought together in a frame-sensitive dynamic programming algorithm that finds the optimal segmentation of a DNA sequence into coding and noncoding regions (exons and introns). The optimal segmentation depends on a separate scoring function that takes a subsequence and assigns it a score reflecting the probability that the sequence is an exon. The scoring functions in MORGAN are sets of decision trees that are combined to give a probability estimate. Experimental results on a database of 570 vertebrate DNA sequences show that MORGAN has excellent performance by many different measures. On a separate test set, it achieves an overall accuracy of 95%, with a correlation coefficient of 0.78, and a sensitivity and specificity for coding bases of 83% and 79%. In addition, MORGAN identifies 58% of coding exons exactly; i.e., both the beginning and end of the coding regions are predicted correctly. This paper describes the MORGAN system, including its decision tree routines and the algorithms for site recognition, and its performance on a benchmark database of vertebrate DNA.
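The segmentation step can be sketched as a dynamic program over candidate exons (frame bookkeeping, a key part of MORGAN, is omitted; `score(s, e)` stands in for the decision-tree log-probability that the region [s, e) is coding):

```python
def segment(seq_len, candidate_exons, score):
    """Choose a non-overlapping set of exons maximizing the total score.

    candidate_exons: (start, end) pairs with end exclusive; regions not
    covered by a chosen exon are treated as noncoding with score 0.
    """
    best = [0.0] * (seq_len + 1)         # best total score up to position i
    back = [None] * (seq_len + 1)
    ends = {}
    for s, e in candidate_exons:
        ends.setdefault(e, []).append(s)
    for i in range(1, seq_len + 1):
        best[i], back[i] = best[i - 1], ("noncoding", i - 1)
        for s in ends.get(i, []):
            cand = best[s] + score(s, i)
            if cand > best[i]:
                best[i], back[i] = cand, ("exon", s)
    chosen, i = [], seq_len              # trace back the optimal segmentation
    while i > 0:
        kind, j = back[i]
        if kind == "exon":
            chosen.append((j, i))
        i = j
    return chosen[::-1], best[seq_len]
```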

18.
BACKGROUND: The ancestries of genes form gene trees, which do not necessarily have the same topology as the species tree due to incomplete lineage sorting. Available algorithms for determining the probability of a gene tree given a species tree require exponential runtime. RESULTS: In this paper, we provide a polynomial-time algorithm to calculate the probability of a ranked gene tree topology for a given species tree, where a ranked tree topology is a tree topology with ordered internal vertices. The probability of a gene tree topology can thus be calculated in polynomial time if the number of orderings of its internal vertices is polynomial. However, the complexity of calculating the probability of a gene tree topology with an exponential number of rankings for a given species tree remains unknown. CONCLUSIONS: Polynomial algorithms for calculating ranked gene tree probabilities may become useful in developing methodology to infer species trees from collections of gene trees, leading to more accurate reconstruction of ancestral species relationships.

19.
A new lossless compression method using context modeling for ultrasound radio-frequency (RF) data is presented. In the proposed method, the combination of context modeling and entropy coding is used to effectively lower the data transfer rates of modern software-based medical ultrasound imaging systems. In phantom and in vivo data experiments, the proposed lossless compression method provides an average compression ratio of 0.45, compared to 0.52 and 0.55 for the Burg and JPEG-LS methods, respectively. This result indicates that the proposed compression method is capable of transferring 64-channel, 40-MHz ultrasound RF data over a 16-lane PCI Express 2.0 bus for real-time software beamforming.
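The predict-then-entropy-code pattern can be sketched minimally (an assumed previous-sample predictor with Rice coding; the paper's context model and entropy coder are more elaborate):

```python
def rice_encode(values, k=4):
    """Rice-code nonnegative integers: unary quotient plus k-bit remainder."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits += [1] * q + [0]                            # unary part
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]
    return bits

def compress_rf(samples, k=4):
    """Predict each sample from its predecessor, zigzag-map the signed
    residual to a nonnegative integer, then entropy-code it."""
    prev, mapped = 0, []
    for s in samples:
        e = s - prev
        mapped.append(2 * e if e >= 0 else -2 * e - 1)   # zigzag mapping
        prev = s
    return rice_encode(mapped, k)
```

Smaller residuals (better context prediction) yield shorter unary parts, which is where the compression gain over coding raw samples comes from.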

20.
Action of a Transposable Element in Coding Sequence Fusions
J. A. Shapiro  D. Leach 《Genetics》1990,126(2):293-299
The original Casadaban technique for isolating fused cistrons encoding hybrid beta-galactosidase proteins used a Mucts62 prophage to align the upstream coding sequence and lacZ prior to selection. Kinetic analysis of araB-lacZ fusion colony emergence indicated that the required DNA rearrangements were regulated and responsive to conditions on selection plates. This has been cited as an example of "directed mutation." Here we show genetically that the MuA and integration host factor (IHF) transposition functions are involved in the formation of hybrid araB-lacZ cistrons and propose a molecular model for how fusions can form from the initial strand-transfer complex. These results confirm earlier indications of direct Mu involvement in the fusion process. The proposed model explains how rearranged Mu sequences come to be found as interdomain linkers in certain hybrid cistrons and indicates that the fusion process involves a spatially and temporally coordinated sequence of biochemical reactions.
