Similar Articles
20 similar articles found (search time: 15 ms)
1.
《IRBM》2022,43(5):325-332
Objective: In cardiac patient care, compression of long-term ECG data is essential to minimize the data storage requirement and transmission cost. Hence, this paper presents a novel electrocardiogram data compression technique which utilizes modified run-length encoding of wavelet coefficients.
Method: First, the wavelet transform is applied to the ECG data, decomposing it and packing maximum energy into fewer transform coefficients. The wavelet transform coefficients are quantized using dead-zone quantization, which discards small-valued coefficients lying in the dead-zone interval while mapping the other coefficients to the formulated quantized output intervals. Among all the quantized coefficients, an average value is assigned to those coefficients for which the energy packing efficiency is less than 99.99%. The obtained coefficients are encoded using modified run-length coding, which offers a higher compression ratio than conventional run-length coding without any loss of information.
Results: Compression performance of the proposed technique is evaluated using different ECG records taken from the MIT-BIH arrhythmia database. The average compression performance in terms of compression ratio, percent root mean square difference, normalized percent mean square difference, and signal-to-noise ratio is 17.18, 3.92, 6.36, and 28.27 dB respectively for 48 ECG records.
Conclusion: The compression results obtained by the proposed technique are better than those of techniques recently introduced by others. The proposed technique can be utilized for compression of ECG records from Holter monitoring.
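To make the pipeline above concrete, here is a minimal Python sketch of the wavelet / dead-zone-quantization / run-length stage, assuming the pywt package. The dead-zone width `delta` is an illustrative placeholder, and the encoder shown is the conventional run-length baseline, not the paper's modified variant.

```python
import numpy as np
import pywt

def dead_zone_quantize(coeffs, delta):
    """Zero coefficients inside the dead zone [-delta, delta] and map
    the remaining ones onto uniform bins of width delta."""
    q = np.zeros(coeffs.shape, dtype=np.int64)
    outside = np.abs(coeffs) > delta
    q[outside] = (np.sign(coeffs[outside])
                  * np.floor(np.abs(coeffs[outside]) / delta))
    return q

def run_length_encode(symbols):
    """Conventional run-length coding into (value, count) pairs; the
    paper's *modified* variant is not specified in this abstract."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return runs

# Toy usage on a synthetic signal standing in for an ECG record.
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
coeffs = np.concatenate(pywt.wavedec(signal, "bior4.4", level=5))
runs = run_length_encode(dead_zone_quantize(coeffs, delta=0.05))
print(f"{coeffs.size} coefficients -> {len(runs)} runs")
```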

2.
C.K. Jha  M.H. Kolekar 《IRBM》2021,42(1):65-72
Objective: In health-care systems, compression is an essential tool to solve storage and transmission problems. In this regard, this paper reports a new electrocardiogram (ECG) data compression scheme which employs sifting-function-based empirical mode decomposition (EMD) and the discrete wavelet transform.
Method: EMD based on the sifting function is utilized to obtain the first intrinsic mode function (IMF). After EMD, the first IMF and four significant sifting functions are combined; this combination is free from many irrelevant components of the signal. The discrete wavelet transform (DWT) with mother wavelet 'bior4.4' is applied to this combination. The transform coefficients obtained after DWT are passed through dead-zone quantization, which discards small transform coefficients lying around zero. Further, integer conversion of the coefficients and run-length encoding are utilized to achieve a compressed form of the ECG data.
Results: Compression performance of the proposed scheme is evaluated using 48 ECG records of the MIT-BIH arrhythmia database. In the comparison of compression results, it is observed that the proposed method exhibits better performance than many recent ECG compressors. A mean opinion score test is also conducted to evaluate the true quality of the reconstructed ECG signals.
Conclusion: The proposed scheme offers better compression performance while preserving the key features of the signal very well.
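A sketch of the EMD-plus-DWT front end described above, assuming the PyEMD package (pip name EMD-signal) for the decomposition. PyEMD does not expose the intermediate sifting functions the paper adds back in, so the sketch works from the first IMF alone; the 'bior4.4' wavelet follows the abstract, and the dead-zone width is illustrative.

```python
import numpy as np
import pywt
from PyEMD import EMD   # PyEMD package (pip: EMD-signal) -- an assumption

def emd_dwt_front_end(ecg, wavelet="bior4.4", level=4, delta=0.05):
    # EMD step: keep the first intrinsic mode function. The paper also
    # combines four sifting functions, which this library does not
    # expose, so this sketch works from the first IMF alone.
    imfs = EMD()(ecg)
    first_imf = imfs[0]

    # DWT step with the 'bior4.4' mother wavelet, as in the paper.
    coeffs = np.concatenate(pywt.wavedec(first_imf, wavelet, level=level))

    # Dead-zone quantization: drop coefficients lying around zero.
    q = np.where(np.abs(coeffs) > delta,
                 np.sign(coeffs) * np.floor(np.abs(coeffs) / delta),
                 0.0)
    return q.astype(np.int64)   # ready for integer run-length coding
```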

3.
Purpose: Cardiovascular disease (CVD) is a leading cause of death globally. The electrocardiogram (ECG), which records the electrical activity of the heart, has been used for the diagnosis of CVD. Automated and robust detection of CVD from ECG signals plays a significant role in early and accurate clinical diagnosis. The purpose of this study is to provide automated detection of coronary artery disease (CAD) from ECG signals using capsule networks (CapsNet).
Methods: Deep-learning-based approaches have become increasingly popular in computer-aided diagnosis systems, and capsule networks are one of the new promising approaches in this field. In this study, we used a 1D version of CapsNet for the automated detection of CAD on two-second (95,300) and five-second (38,120) ECG segments. These segments are obtained from 40 normal and 7 CAD subjects. In the experimental studies, 5-fold cross-validation is employed to evaluate the performance of the model.
Results: The proposed model, named 1D-CADCapsNet, yielded a promising 5-fold diagnosis accuracy of 99.44% and 98.62% for the two- and five-second ECG signal groups, respectively. Using the 2 s ECG segments, we obtained higher performance than the state-of-the-art studies reported in the literature.
Conclusions: The 1D-CADCapsNet model automatically learns the pertinent representations from raw ECG data without using any hand-crafted features and can be used as a fast and accurate diagnostic tool to help cardiologists.
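The abstract feeds fixed-length raw segments to the 1D CapsNet; a minimal sketch of that segmentation step follows, assuming a 360 Hz sampling rate (typical of MIT-BIH-style recordings; the paper's actual rate is not stated in this abstract).

```python
import numpy as np

def segment_ecg(signal, fs=360, seconds=2):
    """Cut a 1D ECG record into non-overlapping fixed-length windows,
    dropping the tail remainder -- the raw (batch, length) input a
    1D CNN or CapsNet expects."""
    win = fs * seconds
    n = len(signal) // win
    return signal[: n * win].reshape(n, win)

record = np.random.randn(360 * 60)           # one minute of fake ECG
print(segment_ecg(record, seconds=2).shape)  # (30, 720)
print(segment_ecg(record, seconds=5).shape)  # (12, 1800)
```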

4.
ECG data compression techniques have received extensive attention in ECG analysis, and numerous data compression algorithms for ECG signals have been proposed during the last three decades. We describe two algorithms based on the scan-along polygonal approximation algorithm (SAPA) that are suitable for multichannel ECG data reduction on a microprocessor-based system. One is a modification of SAPA (MSAPA) which adopts integer-division table searching to speed up data reduction; the other (CSAPA) combines MSAPA with TP, a turning-point algorithm, to preserve ST-segment signals. Results show that our algorithms achieve a compression ratio of more than 5:1 and a percent rms difference (PRD) to the original signal of less than 3.5%. In addition, the maximum execution time of MSAPA for processing one data point is about 50 μs. Moreover, the CSAPA algorithm retains all of the details of the ST segment, which are important in ischaemia diagnosis, by employing the TP algorithm.
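The TP algorithm mentioned above is simple enough to sketch in full: of each incoming pair of samples it keeps the one that preserves a local extremum, giving a fixed 2:1 reduction. A minimal Python version:

```python
def turning_point(signal):
    """Classic turning-point (TP) 2:1 compressor: from each incoming
    pair, keep the sample that preserves a local extremum (a slope
    sign change relative to the last kept sample), else the later one."""
    out = [signal[0]]
    x0 = signal[0]
    i = 1
    while i + 1 < len(signal):
        x1, x2 = signal[i], signal[i + 1]
        s1, s2 = x1 - x0, x2 - x1
        kept = x1 if s1 * s2 < 0 else x2   # turning point -> keep x1
        out.append(kept)
        x0 = kept
        i += 2
    return out

print(turning_point([0, 2, 1, 3, 5, 4, 6]))   # [0, 2, 5, 4]
```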

5.
In recent years evidence has accumulated that ECG signals are of a nonlinear nature. It has been recognized that strictly periodic cardiac rhythms accompany not healthy conditions but, on the contrary, pathological states. Therefore, the application of methods from nonlinear system theory to the analysis of ECG signals has gained increasing interest. Crucial for the application of nonlinear methods is the reconstruction (embedding) of the time series in a phase space of appropriate dimension. In this study, continuous ECG signals of 12 healthy subjects recorded during different sleep stages were analysed. The proper embedding dimension was determined by applying two techniques: the false nearest neighbours method and the saturation of the correlation dimension. Results for the ECG signals were compared with findings for simulated data (quasiperiodic dynamics, Lorenz data, white noise) and for phase-randomized surrogates. Findings obtained with the two approaches suggest that embedding dimensions from 6 to 8 may be regarded as suitable for the topologically proper reconstruction of ECG signals.
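Both diagnostics named above (false nearest neighbours and correlation-dimension saturation) operate on a time-delay embedding of the scalar ECG series. A minimal sketch of that reconstruction step, with the delay `tau` assumed to be chosen separately (the abstract does not say how it was selected):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct phase-space vectors y_i = (x_i, x_{i+tau}, ...,
    x_{i+(dim-1)*tau}) from a scalar time series (Takens embedding)."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

x = np.sin(np.linspace(0, 20 * np.pi, 2000))
print(delay_embed(x, dim=6, tau=10).shape)   # (1950, 6)
```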

6.
In this paper, two novel and simple wavelet-threshold-based ECG compression algorithms, target distortion level (TDL) and target data rate (TDR), are proposed for real-time applications. The issues in using objective error measures, such as percentage root mean square difference (PRD) and root mean square error (RMSE), as quality measures in quality-controlled/guaranteed algorithms are investigated with different sets of experiments. For the proposed TDL and TDR algorithms, data rate variability and reconstructed signal quality are evaluated under different ECG signal test conditions. Experimental results show that the TDR algorithm achieves the required compression data rate to meet the demands of a wired/wireless link while the TDL algorithm does not. The compression performance is assessed in terms of the number of iterations required to achieve convergence and accuracy, reconstructed signal quality, and coding delay. The reconstructed signal quality is evaluated by a correct diagnosis (CD) test through visual inspection. Three sets of ECG data from three different databases, the MIT-BIH Arrhythmia (mita) (Fs = 360 Hz, 11 b/sample), the Creighton University Ventricular Tachyarrhythmia (cuvt) (Fs = 250 Hz, 12 b/sample) and the MIT-BIH Supraventricular Arrhythmia (mitsva) (Fs = 128 Hz, 10 b/sample), are used for this work. For each set of ECG data, the compression ratio (CR) range is defined. A CD value of 100% is achieved for CR ≤ 12, CR ≤ 8 and CR ≤ 4 for data from the mita, cuvt and mitsva databases, respectively. The experimental results demonstrate that the proposed TDR algorithm is suitable for real-time applications.
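The TDR idea, iterating a wavelet threshold until a target rate is met, can be sketched as a simple bisection loop. Here the fraction of zeroed coefficients stands in for the actual coded bit rate, which is an assumption; the paper's rate model is not given in this abstract.

```python
import numpy as np

def threshold_for_target(coeffs, target_zero_frac, iters=30):
    """Bisection on the wavelet threshold until the fraction of
    discarded (zeroed) coefficients hits the target -- a proxy for a
    target data rate, since more zeros means a shorter code."""
    lo, hi = 0.0, float(np.max(np.abs(coeffs)))
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        frac = np.mean(np.abs(coeffs) < t)
        if frac < target_zero_frac:
            lo = t          # too few zeros: raise the threshold
        else:
            hi = t
    return 0.5 * (lo + hi)

c = np.random.laplace(scale=1.0, size=4096)   # wavelet-like coefficients
t = threshold_for_target(c, target_zero_frac=0.9)
print(t, np.mean(np.abs(c) < t))
```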

7.
《IRBM》2022,43(5):422-433
Background: The electrocardiogram (ECG) is a method of recording the electrical activity of the heart, and it provides a diagnostic means for heart-related diseases. An arrhythmia is any irregularity of the heartbeat that causes an abnormality in the heart rhythm. Early detection of arrhythmia is of great importance for preventing many diseases. Manual analysis of ECG recordings is not practical for quickly identifying arrhythmias that may cause sudden death. Hence, many studies have been presented to develop computer-aided diagnosis (CAD) systems that identify arrhythmias automatically.
Methods: This paper proposes a novel deep learning approach to identify arrhythmias in ECG signals. The proposed approach identifies arrhythmia classes using a Convolutional Neural Network (CNN) trained on two-dimensional (2D) ECG beat images. First, ECG signals covering 5 different arrhythmias are segmented into heartbeats, which are transformed into 2D grayscale images. Afterward, the images are used as input for training a new CNN architecture to classify heartbeats.
Results: The experimental results show that the classification performance of the proposed approach reaches an overall accuracy of 99.7%, sensitivity of 99.7%, and specificity of 99.22% in the classification of five different ECG arrhythmias. Further, the proposed CNN architecture is compared to other popular CNN architectures such as LeNet and ResNet-50 to evaluate the performance of the study.
Conclusions: Test results demonstrate that the deep network trained on ECG images provides outstanding classification performance on arrhythmic ECG signals and outperforms similar network architectures. Moreover, the proposed method has lower computational costs than existing methods and is more suitable for mobile-device-based diagnosis systems, as it does not involve any complex preprocessing. Hence, the proposed approach provides a simple and robust automatic cardiac arrhythmia detection scheme for the classification of ECG arrhythmias.
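A minimal sketch of the beat-to-image step, rasterizing one heartbeat into a fixed-size grayscale array of the kind a 2D CNN consumes. This rendering choice is an assumption for illustration; the paper's exact transformation is not specified in this abstract.

```python
import numpy as np

def beat_to_image(beat, size=128):
    """Rasterize one heartbeat into a size x size grayscale image by
    resampling to `size` columns and marking the amplitude row in
    each column -- one simple way to build a 2D CNN input."""
    cols = np.interp(np.linspace(0, len(beat) - 1, size),
                     np.arange(len(beat)), beat)
    lo, hi = cols.min(), cols.max()
    rows = ((cols - lo) / (hi - lo + 1e-12) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[size - 1 - rows, np.arange(size)] = 255   # white trace on black
    return img

beat = np.sin(np.linspace(0, 2 * np.pi, 300))
print(beat_to_image(beat).shape)   # (128, 128)
```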

8.
《IRBM》2020,41(5):252-260
Objective: Monitoring the heartbeat of the fetus during pregnancy is a vital part of determining fetal health. Current fetal heart monitoring techniques lack accuracy in fetal heart rate monitoring and feature acquisition, leading to diagnostic issues; a reliable method of non-invasive fetal heart monitoring is therefore in high demand.
Method: Electrocardiography (ECG) is a method of monitoring the electrical activity produced by the heart. Extraction of the fetal ECG (FECG) from the abdominal ECG (AECG) is challenging since the ECGs of the mother and the baby share similar frequency components, and the signals are corrupted by white noise. This paper presents a method of FECG extraction that eliminates all other signals from the AECG. The algorithm attenuates the maternal ECG (MECG) by filtering, uses wavelet analysis to find the locations of the FECG, and then isolates the fetal beats based on those locations. Two AECG signals collected at different locations on the abdomen are used. In the ECG data used, the power of the MECG is five to ten times that of the FECG.
Results: The FECG signals were successfully isolated from the AECG using the proposed method; the QRS complex of the heartbeat was conserved, and the heart rate was calculated. The fetal heart rate was 135 bpm and the instantaneous heart rate was 131.58 bpm. The heart rate of the mother was 90 bpm with an instantaneous heart rate of 81.9 bpm.
Conclusion: The proposed method is promising for FECG extraction since it relies only on filtering and wavelet analysis of two abdominal signals. The method is easily adjusted to the power levels of the signals, making it readily adaptable to changing signals in different biosignal applications.

9.

Introduction

We describe initial validation of a new system for digital-to-analog conversion (DAC) and reconstruction of 12-lead ECGs. The system utilizes an open and optimized software format with a commensurately optimized DAC hardware configuration to accurately reproduce, from digital files, the original analog electrocardiographic signals of previously instrumented patients. By doing so, the system also ultimately allows transmission of data collected on one manufacturer's 12-lead ECG hardware/software into that of any other.

Materials and Methods

To initially validate the system, we compared original and post-DAC re-digitized 12-lead ECG data files (~5 minutes long) in two types of validation studies in 10 patients. The first type quantitatively compared the total waveform voltage differences between the original and re-digitized data, while the second type qualitatively compared the automated electrocardiographic diagnostic statements generated by the original versus the re-digitized data.

Results

The grand-averaged difference in root-mean-squared voltage between the original and re-digitized data was 20.8 µV per channel when re-digitization involved the same manufacturer's analog-to-digital converter (ADC) as the original digitization, and 28.4 µV per channel when it involved a different manufacturer's ADC. Automated diagnostic statements generated by the original versus the reconstructed data did not differ when using the diagnostic algorithm from the manufacturer on whose device the original data were collected, and differed only slightly, for just 1 of 10 patients, when using a third-party diagnostic algorithm throughout.

Conclusion

Original analog 12-lead ECG signals can be reconstructed from digital data files with accuracy sufficient for clinical use. Such reconstructions can readily enable automated second opinions for difficult-to-interpret 12-lead ECGs, either locally or remotely through the use of dedicated or cloud-based servers.
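The quantitative comparison reduces to a per-channel RMS voltage difference; a minimal sketch, assuming the original and re-digitized recordings are already time-aligned on a common sampling grid and expressed in microvolts:

```python
import numpy as np

def rms_difference_per_channel(original, redigitized):
    """Root-mean-squared voltage difference for each of the 12 leads;
    inputs are (samples, channels) arrays on a common time base."""
    diff = original - redigitized
    return np.sqrt(np.mean(diff ** 2, axis=0))

orig = np.random.randn(5 * 60 * 500, 12) * 100      # ~5 min at 500 Hz, in µV
redig = orig + np.random.randn(*orig.shape) * 20    # ~20 µV additive error
print(rms_difference_per_channel(orig, redig).mean())
```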

10.
The aim of this paper is to describe the analysis of a high-resolution ECG recorded from the body surface. Standard signal-averaging techniques are improved by using a new time-delay estimation method which leads to better alignment accuracy of P and T waves. A second method uses adaptive identification to achieve a beat-by-beat fine ECG estimate. The information provided by the two methods allows a better interpretation of low and very low level signals.
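Signal averaging stands or falls on beat alignment. The conventional baseline that the paper's new time-delay estimator improves on is cross-correlation against a template; a minimal sketch (the improved method itself is not detailed in this abstract):

```python
import numpy as np

def align_delay(template, beat):
    """Estimate the lag (in samples) that best aligns `beat` with
    `template`, via the peak of their full cross-correlation."""
    xc = np.correlate(beat, template, mode="full")
    return int(np.argmax(xc)) - (len(template) - 1)

t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.5) ** 2) / 0.002)     # template "R wave"
beat = np.roll(template, 7)                      # same beat, shifted
print(align_delay(template, beat))               # 7
```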

11.
In wireless network research, simulation is the most important technique for investigating and validating a network's behavior. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part of the system, and the key role of a mobility model is to mimic real-life traveling patterns. The performance of routing protocols and of mobility management strategies such as paging, registration and handoff is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model which exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models (sudden stops, memoryless movements, the border effect, temporal dependency of velocity, pause-time dependency, and speed decay) in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, and users can extract meaningful information with a single mouse click. It is noteworthy that capturing dynamic mobility patterns through CPN was the most challenging and demanding activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state-space methods allow us to algorithmically derive the system behavior and rectify the errors of our proposed model.

12.
Purpose: Artificial intelligence (AI) models are playing an increasing role in biomedical research and healthcare services. This review focuses on the challenging points that must be clarified to develop AI applications as clinical decision support systems in a real-world context.
Methods: A narrative review has been performed, including a critical assessment of articles published between 1989 and 2021 that guided the challenging sections.
Results: We first illustrate the architectural characteristics of machine learning (ML)/radiomics and deep learning (DL) approaches. For ML/radiomics, the phases of feature selection and of training, validation, and testing are described. DL models are presented as multi-layered artificial/convolutional neural networks that allow images to be processed directly. The data curation section includes technical steps such as image labelling, image annotation (with segmentation as a crucial step in radiomics), data harmonization (enabling compensation for differences in imaging protocols that typically generate noise in non-AI imaging studies) and federated learning. Thereafter, we dedicate specific sections to: sample size calculation, considering multiple testing in AI approaches; procedures for data augmentation to work with limited and unbalanced datasets; and the interpretability of AI models (the so-called black-box issue). Pros and cons of choosing ML versus DL to implement AI applications in medical imaging are finally presented in a synoptic way.
Conclusions: Biomedicine and healthcare systems are among the most important fields for AI applications, and medical imaging is probably the most suitable and promising domain. Clarification of specific challenging points facilitates the development of such systems and their translation to clinical practice.

13.
Two-component systems (TCSs) are widely employed by bacteria to sense specific external signals and mount an appropriate response via a phosphorylation cascade within the cell. The TCS of the agr operon in the bacterium Staphylococcus aureus forms part of a regulatory process termed quorum sensing, a cell-to-cell communication mechanism used to assess population density. Since S. aureus manipulates this “knowledge” in order to co-ordinate production of the armoury of exotoxin virulence factors required to promote infection, it is important to understand fully how this process works. We present three models of the agr operon, each incorporating a different phosphorylation cascade for the TCS, since the precise nature of the cascade is not fully understood. Using numerical and asymptotic techniques, we examine the effects of inhibitor therapy, a novel approach to controlling bacterial infection through the attenuation of virulence, on each of these three cascades. We present results which, if evaluated against appropriate experimental data, provide insights into the potential effectiveness of such therapy. Moreover, the TCS models presented here are of broad relevance, given that TCSs are widely conserved throughout the bacterial kingdom.
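For readers unfamiliar with TCS models, a generic histidine-kinase/response-regulator phosphorylation cascade can be written as a pair of ODEs, as sketched below. This is illustrative only: it is not one of the paper's three agr-specific models, and all rate constants are made up.

```python
import numpy as np
from scipy.integrate import odeint

# Generic two-component system: signal-driven autophosphorylation of a
# histidine kinase (HK) and phosphotransfer to its response regulator
# (RR). Illustrative only -- NOT the paper's agr-operon models, and the
# rate constants below are invented for the sketch.
HK_TOT, RR_TOT = 1.0, 5.0            # total protein levels (conserved)
K_AUTO, K_TRANS, K_DEPH = 0.5, 2.0, 0.2

def tcs(y, t, signal=1.0):
    hkp, rrp = y                      # phosphorylated HK and RR
    hk, rr = HK_TOT - hkp, RR_TOT - rrp
    d_hkp = K_AUTO * signal * hk - K_TRANS * hkp * rr
    d_rrp = K_TRANS * hkp * rr - K_DEPH * rrp
    return [d_hkp, d_rrp]

t = np.linspace(0, 50, 500)
sol = odeint(tcs, [0.0, 0.0], t)
print(f"steady-state phosphorylated RR ~ {sol[-1, 1]:.2f}")
```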

14.

Background

The electrocardiogram (ECG) signal provides important information about the heart's electrical activity in medical and diagnostic applications. This signal may be contaminated by different types of noise. One noise type which has a considerable overlap with the ECG signal in the frequency domain is the electromyogram (EMG). Among the existing approaches for de-noising ECG signals, those based on singular spectrum analysis (SSA) are popular.

Methods

In this paper, we propose a method based on SSA to separate ECG signals from EMG noise. In general, SSA consists of four steps: embedding, singular value decomposition, grouping, and diagonal averaging. Among these steps, the grouping step contains parameters (indices) which can be adjusted to achieve the desired results. Indeed, grouping is one of the important steps of SSA, as the ECG and EMG signals are separated in this step. Hence, in the proposed method, a new criterion is presented for selecting the indices in the grouping step so as to separate the ECG from the EMG signal with higher accuracy.
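A minimal sketch of the four SSA steps listed above, with the grouping indices hard-coded; choosing those indices well is precisely the paper's contribution, so the selection criterion itself is not reproduced here.

```python
import numpy as np

def ssa(x, window, keep):
    """Basic singular spectrum analysis: embed -> SVD -> group the
    components listed in `keep` -> diagonal-average back to a series."""
    n = len(x)
    k = n - window + 1
    # 1) Embedding: trajectory (Hankel) matrix, one lagged copy per row.
    traj = np.stack([x[i : i + k] for i in range(window)])
    # 2) Singular value decomposition of the trajectory matrix.
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # 3) Grouping: sum the chosen rank-1 components (hard-coded here;
    #    picking these indices well is the paper's contribution).
    grouped = sum(s[i] * np.outer(u[:, i], vt[i]) for i in keep)
    # 4) Diagonal averaging (Hankelization) back to a 1-D series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            out[i + j] += grouped[i, j]
            counts[i + j] += 1
    return out / counts

x = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.3 * np.random.randn(400)
clean = ssa(x, window=40, keep=[0, 1])   # leading pair ~ the sinusoid
```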

Results

The performance of the proposed method is investigated in several experiments. Two subsets from the PhysioNet MIT-BIH arrhythmia database are used for this purpose.

Conclusion

The experimental results demonstrate the effectiveness of the proposed method in comparison with other SSA-based techniques.

15.
In recent years, the removal of electrocardiogram (ECG) interference from electromyogram (EMG) signals has received considerable attention. Where the quality of the EMG signal is of interest, it is important to remove ECG interference from the EMG signals. In this paper, an efficient method based on a combination of an adaptive neuro-fuzzy inference system (ANFIS) and the wavelet transform is proposed to effectively eliminate ECG interference from surface EMG signals. The proposed approach is compared with other common methods such as a high-pass filter, an artificial neural network, an adaptive noise canceller, the wavelet transform, the subtraction method and ANFIS alone. It is found that the performance of the proposed ANFIS-wavelet method is superior to the other methods, with a signal-to-noise ratio and relative error of 14.97 dB and 0.02 respectively and a significantly higher correlation coefficient (p < 0.05).

16.

Background  

Information extraction from microarrays has not yet been widely used in diagnostic or prognostic decision-support systems, owing to the diversity of results produced by the available techniques, their instability on different data sets and the inability to relate statistical significance to biological relevance. Thus, there is an urgent need to address the statistical framework of microarray analysis and identify its drawbacks and limitations, which will enable us to thoroughly compare methodologies under the same experimental set-up and associate results with confidence intervals meaningful to clinicians. In this study we consider gene-selection algorithms with the aim of revealing inefficiencies in performance evaluation and addressing aspects that can reduce uncertainty in algorithmic validation.

17.
This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a scale model of a cyclist (scale 1:2) is presented. Apart from three-component forces and moments, high-resolution surface pressure measurements on the scale model's surface, at 115 locations, are performed to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, such as steady Reynolds-averaged Navier–Stokes (RANS), with several k-ε and k-ω turbulence models, and unsteady large-eddy simulation (LES), and also boundary-layer modelling techniques, namely wall functions and low-Reynolds-number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-ω model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions for modelling the boundary layer. This study showed that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-ε model combined with wall functions, which is often used in CFD studies in sports.

18.
A software-based, efficient and reliable ECG data compression and transmission scheme is proposed here. The algorithm has been applied to ECG data of all 12 leads taken from the PTB diagnostic ECG database (PTB-DB). First, R-peaks are detected by a differentiation and squaring technique and the QRS regions are located. To achieve strictly lossless compression in the QRS regions and a tolerable lossy compression in the rest of the signal, two different compression algorithms are used. The whole compression scheme is such that the compressed file contains only ASCII characters. These characters are transmitted using the internet-based Short Message Service (SMS), and at the receiving end the original ECG signal is recovered using just the reverse logic of the compression. It is observed that the proposed algorithm can reduce the file size significantly (compression ratio: 22.47) while preserving the ECG signal morphology.
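The differentiation-and-squaring R-peak detector named above can be sketched in a few lines; the threshold fraction and refractory period below are assumed tuning values, not the paper's.

```python
import numpy as np

def detect_r_peaks(ecg, fs, thresh_frac=0.5, refractory_s=0.25):
    """Differentiation-and-squaring R-peak detector: the derivative
    emphasizes the steep QRS slopes, squaring makes them positive and
    prominent, then a threshold with a refractory period picks peaks."""
    feature = np.diff(ecg) ** 2
    thresh = thresh_frac * feature.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i, v in enumerate(feature):
        if v > thresh and i - last > refractory:
            peaks.append(i)
            last = i
    return peaks

fs = 500
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[(t % 0.8) < 0.02] = 1.0            # crude 75-bpm spike train
print(len(detect_r_peaks(ecg, fs)))    # 13 beats in 10 s
```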

19.
《IRBM》2020,41(1):2-17
In this work, computationally efficient and reliable cosine-modulated filter banks (CMFBs) are designed for electrocardiogram (ECG) data compression. First, CMFBs (uniform and non-uniform) are designed using an interpolated finite impulse response (IFIR) prototype filter to reduce the computational complexity. To reduce the reconstruction error, a linear iteration technique is applied to optimize the prototype filter. Thereafter, the non-uniform CMFB is used for ECG data compression by decomposing the ECG signal into various frequency bands. Subsequently, thresholding is applied to truncate the insignificant coefficients; the threshold value is estimated by examining the significant energy of each band. Further, run-length encoding (RLE) is utilized to improve the compression performance. The method is applied to the MIT-BIH arrhythmia database for performance analysis. The experimental observations demonstrate that the proposed method accomplishes a high compression ratio with admirable quality of signal reconstruction: the average values of compression ratio (CR), percent root mean square difference (PRD), normalized percent root mean square difference (PRDN), quality score (QS), correlation coefficient (CC), maximum error (ME), mean square error (MSE), and signal-to-noise ratio (SNR) are 23.86, 1.405, 2.55, 19.08, 0.999, 0.12, 0.054 and 37.611 dB, respectively. The proposed 8-channel uniform filter bank is used to detect the R-peak locations of the ECG signal; a comparative analysis shows that the beats (locations and amplitudes) of the original and reconstructed signals are the same.
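The per-band, energy-based threshold estimation can be sketched as: keep the largest-magnitude coefficients of each subband until a target fraction of the band's energy is retained. The 99.9% figure below is an assumption for illustration.

```python
import numpy as np

def energy_threshold(band, keep_energy=0.999):
    """Smallest magnitude threshold that still retains `keep_energy`
    of the subband's energy: sort by magnitude and keep the largest
    coefficients until the energy budget is met."""
    mags = np.sort(np.abs(band))[::-1]
    cum = np.cumsum(mags ** 2)
    k = int(np.searchsorted(cum, keep_energy * cum[-1])) + 1
    return mags[min(k, len(mags)) - 1]

band = np.random.laplace(size=1024)      # one subband's coefficients
t = energy_threshold(band)
kept = np.abs(band) >= t
print(f"kept {kept.mean():.1%} of coefficients")
```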

20.
1 Introduction. The ECG carries a lot of important information for the diagnosis of heart diseases. Because abnormal ECGs can occur in unknown situations, they can be caught only by long-term continuous monitoring; the Holter monitoring system, which can record 24-hour ECG data, is one effective means of providing this function. Although the large-scale memories that have been developed are available for storing long-term ECG data, processing, storing or transmitting such a large amount of data is very difficult and troublesome. In the digitized ECG data, there are…

