Similar Articles
 Found 20 similar articles (search time: 46 ms)
1.
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms that use a single-dipole basis (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF), and makes it possible to reduce the effect of sensor noise efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data.

2.
This work starts from the observation that, beyond the underlying motor-unit properties, MUAPs can show patterns unique to the recording technique. Its aim was to recognise whether a Laplacian-detected MUAP is isolated or overlapped, based on novel morphological features and a fuzzy classifier. A training data set was constructed to elaborate and test the ‘if-then’ fuzzy rules, using signals from three muscles of 11 healthy subjects: the abductor pollicis brevis (APB), the first dorsal interosseous (FDI) and the biceps brachii (BB). The proposed fuzzy classifier automatically recognized isolated MUAPs with a performance of 95.03%, which was improved to 97.8% by adjusting the certainty grades of the rules using genetic algorithms (GA). Synthetic signals were used as a reference to further evaluate the classifier. Recognition of isolated MUAPs depends largely on the noise level and remains acceptable down to a signal-to-noise ratio of 20 dB, with a detection probability of 0.96. Recognition of overlapped MUAPs depends only slightly on the noise level, with a detection probability of about 0.8; the corresponding misrecognition is caused principally by synchronisation and a small degree of overlap.

3.
We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes.
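The correspondence/registration loop that IMLP builds on can be illustrated with a minimal isotropic ICP sketch in Python (NumPy assumed). This is the classic closest-point variant only: IMLP would replace the nearest-neighbour step with a most-likely match under per-point noise covariances, and the Procrustes step with its GTLS solver.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/Procrustes) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Classic isotropic ICP: alternate closest-point matching and rigid fit."""
    cur = src.copy()
    for _ in range(iters):
        # correspondence step: nearest point in dst for each point in cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        # registration step: best rigid transform onto the matches
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur
```

For a small misalignment the nearest-neighbour matches are already correct, and a single Procrustes step aligns the clouds exactly; larger misalignments and anisotropic noise are where the most-likely-point formulation pays off.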

4.
PA Taylor, KH Cho, CP Lin, BB Biswal. PLoS ONE 2012, 7(9): e43415
Tractography algorithms have been developed to reconstruct likely WM pathways in the brain from diffusion tensor imaging (DTI) data. In this study, an elegant and simple means for improving existing tractography algorithms is proposed by allowing tracts to propagate through diagonal trajectories between voxels, instead of only rectilinearly to their facewise neighbors. A series of tests (using both real and simulated data sets) are utilized to show several benefits of this new approach. First, the inclusion of diagonal tract propagation decreases the dependence of an algorithm on the arbitrary orientation of coordinate axes and therefore reduces numerical errors associated with that bias (which are also demonstrated here). Moreover, both quantitatively and qualitatively, including diagonals decreases overall noise sensitivity of results and leads to significantly greater efficiency in scanning protocols; that is, the obtained tracts converge much more quickly (i.e., in a smaller amount of scanning time) to those of data sets with high SNR and spatial resolution. Importantly, the inclusion of diagonal propagation adds essentially no appreciable time of calculation or computational costs to standard methods. This study focuses on the widely-used streamline tracking method, FACT (fiber assessment by continuous tracking), and the modified method is termed "FACTID" (FACT including diagonals).
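The change from facewise to diagonal propagation amounts to enlarging the voxel neighbourhood from 6 offsets to 26. A small sketch of the two neighbourhood sets (this enumerates the candidate step directions only; the FACT/FACTID step choice itself is driven by the local tensor field):

```python
from itertools import product

def neighbor_offsets(include_diagonals):
    """Offsets to neighbouring voxels in 3D: 6 facewise, or 26 with
    edge and corner (diagonal) neighbours included."""
    offs = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    if not include_diagonals:
        # facewise neighbours differ in exactly one coordinate
        offs = [o for o in offs if sum(abs(c) for c in o) == 1]
    return offs
```

Because the 26-neighbourhood contains steps along all 13 axis, edge and corner directions, a tract's discrete path depends far less on how the coordinate axes happen to be oriented.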

5.
Cardiovascular diseases are the number one cause of death worldwide. Currently, portable battery-operated systems such as mobile phones with wireless ECG sensors have the potential to be used in continuous cardiac function assessment that can be easily integrated into daily life. These portable point-of-care diagnostic systems can therefore help unveil and treat cardiovascular diseases. The basis for ECG analysis is a robust detection of the prominent QRS complex, as well as other ECG signal characteristics. However, it is not clear from the literature which ECG analysis algorithms are suited for an implementation on a mobile device. We investigate current QRS detection algorithms based on three assessment criteria: 1) robustness to noise, 2) parameter choice, and 3) numerical efficiency, in order to target a universal fast-robust detector. Furthermore, existing QRS detection algorithms may provide an acceptable solution only on small segments of ECG signals, within a certain amplitude range, or amid particular types of arrhythmia and/or noise. These issues are discussed in the context of a comparison with the most conventional algorithms, followed by future recommendations for developing reliable QRS detection schemes suitable for implementation on battery-operated mobile devices.
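To make the assessment criteria concrete, here is a minimal derivative-square-integrate QRS detector in the spirit of the Pan-Tompkins family (a sketch for illustration, not one of the reviewed algorithms and not clinically validated; the window length and threshold fraction are illustrative parameters):

```python
import numpy as np

def detect_qrs(ecg, fs, thresh_frac=0.5):
    """Minimal QRS sketch: differentiate to emphasise steep QRS slopes,
    square to rectify, integrate over ~120 ms, then threshold the envelope.
    Returns the rising-edge sample indices of the thresholded envelope."""
    d = np.diff(ecg)                          # slope emphasis
    e = d ** 2                                # rectification
    w = int(0.12 * fs)                        # ~120 ms integration window
    env = np.convolve(e, np.ones(w) / w, mode="same")
    th = thresh_frac * env.max()              # simplistic fixed threshold
    above = env > th
    return np.flatnonzero(above[1:] & ~above[:-1])
```

Its weaknesses map directly onto the paper's criteria: the fixed threshold is fragile to baseline noise (criterion 1), the window and threshold need tuning (criterion 2), and the convolution cost matters on a battery-powered device (criterion 3).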

6.
Electrocardiography (ECG) signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact, non-stationary noise due to muscular contraction (EMG), etc. Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. In this paper, a novel ECG enhancement algorithm is proposed based on sparse derivatives. By solving a convex ℓ1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second- and third-order derivatives (differences), respectively, are sparse. The algorithm is applied to a QRS detection system and validated using the MIT-BIH Arrhythmia database (109,452 annotations), resulting in a sensitivity of Se = 99.87% and a positive predictivity of +P = 99.88%.

7.
Although the Kolmogorov-Smirnov (KS) statistic is widely used, it has weaknesses for abrupt Change Point (CP) problems: it is time-consuming and sometimes invalid. To detect abrupt changes in time series quickly, a novel method is proposed based on the Haar Wavelet (HW) and the KS statistic (HWKS). First, two Binary Search Trees (BSTs), termed TcA and TcD, are constructed by multi-level HW from the diagnosed time series; the HWKS framework is implemented by introducing a modified KS statistic and two search rules based on the two BSTs; fast CP detection is then implemented by two HWKS-based algorithms. Second, the performance of HWKS is evaluated on simulated time-series datasets. The simulations show that HWKS is faster, more sensitive and more efficient than the KS, HW, and T methods. Last, HWKS is applied to electrocardiogram (ECG) time series; the experimental results show that the proposed method finds abrupt changes in the ECG segment with maximal data fluctuation more quickly and efficiently, which is very helpful for inspecting and diagnosing different states of health from a patient's ECG signal.
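The brute-force KS baseline that HWKS accelerates can be sketched in a few lines: scan every candidate split point and keep the one maximising the two-sample KS statistic (this is the slow O(n) scan with an O(n log n) statistic per split; the paper's contribution is avoiding exactly this cost via the Haar-wavelet trees).

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample KS statistic: maximum gap between the empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def detect_cp(x, min_seg=10):
    """Brute-force change-point scan: split at every index and pick the
    split with the largest KS statistic between the two segments."""
    scores = [ks_stat(x[:i], x[i:]) for i in range(min_seg, len(x) - min_seg)]
    return min_seg + int(np.argmax(scores))
```

On a series with a clear mean shift the argmax lands at (or very near) the true change point; the cost of the exhaustive scan is what motivates the HWKS search rules.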

8.
Reconstructing phylogenetic trees efficiently and accurately from distance estimates is an ongoing challenge in computational biology from both practical and theoretical considerations. We study algorithms which are based on a characterization of edge-weighted trees by distances to LCAs (Least Common Ancestors). This characterization enables a direct application of ultrametric reconstruction techniques to trees which are not necessarily ultrametric. A simple and natural neighbor-joining criterion based on this observation is used to provide a family of efficient neighbor-joining algorithms. These algorithms are shown to reconstruct a refinement of the Buneman tree, which implies optimal robustness to noise under criteria defined by Atteson. In this sense, they outperform many popular algorithms such as Saitou and Nei's NJ. One member of this family is used to provide a new simple version of the 3-approximation algorithm for the closest additive metric under the ℓ∞ norm. A byproduct of our work is a novel technique which yields a time-optimal O(n²) implementation of common clustering algorithms such as UPGMA.
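For reference, here is a naive O(n³) UPGMA (size-weighted average linkage) on a dict of pairwise distances; the LCA-based bookkeeping in the abstract is what brings this down to time-optimal O(n²):

```python
def upgma(dist, labels):
    """Naive UPGMA: repeatedly merge the closest pair of clusters and
    update distances by size-weighted averaging. `dist` maps
    frozenset({a, b}) -> distance; returns the tree as nested tuples."""
    size = {l: 1 for l in labels}      # cluster -> number of leaves
    d = dict(dist)
    while len(size) > 1:
        pair = min(d, key=d.get)       # closest pair of clusters
        a, b = tuple(pair)
        new = (a, b)                   # merged cluster as a nested tuple
        for c in list(size):
            if c in (a, b):
                continue
            # size-weighted average linkage
            d[frozenset((new, c))] = (
                size[a] * d.pop(frozenset((a, c)))
                + size[b] * d.pop(frozenset((b, c)))
            ) / (size[a] + size[b])
        d.pop(pair)
        size[new] = size.pop(a) + size.pop(b)
    return next(iter(size))
```

On an ultrametric input (e.g. d(A,B)=2, d(A,C)=d(B,C)=4), the closest pair A, B merges first, recovering the underlying tree topology.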

9.
A new algorithm for the identification of multiple input Wiener systems
Multiple-input Wiener systems consist of two or more linear dynamic elements, whose outputs are transformed by a multiple-input static non-linearity. Korenberg (1985) demonstrated that the linear elements of these systems can be estimated using either a first-order input-output cross-covariance or a slice of the second-, or higher-, order input-output cross-covariance function. Korenberg's work used a multiple-input LNL structure, in which the output of the static nonlinearity was then filtered by a linear dynamic system. In this paper we show that by restricting our study to the slightly simpler Wiener structure, it is possible to improve the linear subsystem estimates obtained from the measured cross-covariance functions. Three algorithms, which taken together can identify any multiple-input Wiener system, have been developed. We present the theory underlying these algorithms and detail their implementation. Simulation results are then presented which demonstrate that the algorithms are robust in the presence of output noise, and provide good estimates of the system dynamics under a wide set of conditions.
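The cross-covariance idea is easy to demonstrate for a single-input Wiener system: for a white Gaussian input, the first-order input-output cross-covariance is proportional to the linear element's impulse response (Bussgang's theorem), so the dynamics can be read off up to a scale factor. A numerical sketch with a hypothetical impulse response and a cubic nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.standard_normal(n)                  # white Gaussian test input
h = np.array([1.0, 0.6, 0.3, 0.1])          # hypothetical linear element
x = np.convolve(u, h)[:n]                   # dynamic linear stage
y = x + 0.5 * x ** 3                        # static non-linearity (odd)

# First-order input-output cross-covariance at each lag; for Gaussian
# input this is proportional to h, so normalising the first tap to 1
# recovers the impulse-response shape.
phi = np.array([np.mean(y[k:] * u[: n - k]) for k in range(len(h))])
h_est = phi / phi[0]                        # shape only, scale unknown
```

The residual scale ambiguity (the linear gain is absorbed by the nonlinearity) is intrinsic to the Wiener structure and is why the estimates are only defined up to a factor.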

10.
Objective: Accurately determining the three-dimensional structure of a protein from nuclear magnetic resonance (NMR) spectroscopy experiments is currently a topical problem in biophysics: proteins are essential components of living organisms, and knowledge of their spatial structure is crucial for studying their function. The severe scarcity of experimental data, however, makes this a considerable challenge. Methods: In this paper, the protein structure determination problem is addressed by recovering the distance matrix with matrix completion (MC) algorithms. First, an initial distance-matrix model is established; because of the scarcity of experimental data, this initial matrix is incomplete. An MC algorithm then recovers the missing entries of the initial distance matrix, yielding the full three-dimensional protein structure. To further test performance, four proteins with different topologies and six existing MC algorithms were evaluated, examining recovery quality at different sampling rates and different noise levels. Results: Performance was assessed via the mean and standard deviation of two key metrics, root-mean-square deviation (RMSD) and computation time; the results show that when the sampling rate and the noise factor are kept within a certain range, both the RMSD and its standard deviation reach very small values. The paper also compares the characteristics and advantages of the different algorithms in more detail; under exact sampling...
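The matrix-completion step this abstract relies on can be sketched generically with a hard-impute iteration: alternate between a fixed-rank SVD truncation and re-imposing the observed entries. This is a generic MC sketch (not one of the six algorithms compared), demonstrated on a toy rank-1 matrix standing in for the low-rank structure that inter-residue distance matrices possess after suitable transformation.

```python
import numpy as np

def hard_impute(M_obs, mask, rank, iters=500):
    """Generic matrix completion by hard-impute: repeatedly truncate to a
    fixed rank, then restore the known entries. `mask` is True where the
    entry of M_obs is observed."""
    X = np.where(mask, M_obs, 0.0)          # fill unknowns with zeros
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r approximation
        X = np.where(mask, M_obs, low)      # keep observed entries fixed
    return X
```

When each row and column retains observed entries, the missing entries of a genuinely low-rank matrix are recovered to high accuracy.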

11.
Pattern recognition and classification are two of the key topics in computer science. In this paper a novel method for the task of pattern classification is presented. The proposed method combines a hybrid associative classifier (Clasificador Híbrido Asociativo con Traslación, CHAT, in Spanish), a coding technique for output patterns called the one-hot vector, and majority voting during the classification step. The method is termed CHAT One-Hot Majority (CHAT-OHM). The performance of the method is validated by comparing the accuracy of CHAT-OHM with other well-known classification algorithms. During the experimental phase, the classifier was applied to four datasets related to the medical field. The results show that the proposed method outperforms the original CHAT in classification accuracy.
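The two generic ingredients combined with CHAT, one-hot output coding and majority voting, are simple to state in code (a sketch of the generic techniques only; the CHAT classifier itself is not reproduced here):

```python
def one_hot(label, classes):
    """Encode a class label as a one-hot vector over a fixed class list."""
    return [1 if c == label else 0 for c in classes]

def majority_vote(votes):
    """Combine several classifiers' predicted labels by majority voting."""
    return max(set(votes), key=votes.count)
```

One-hot coding turns each class into an orthogonal target pattern for the associative memory, and the vote aggregates several such decisions into the final label.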

12.
The design of controllers for batch bioreactors
The application of control algorithms to batch bioreactors is often complicated by variations in process dynamics that occur during the course of fermentation. Such a wide operating range often renders the performance of fixed-gain proportional-integral-derivative (PID) controllers unsatisfactory. In this work, detailed studies on the control of batch fermentations are performed. Two simple controller designs are presented with the intent to compensate for changing process dynamics. One design incorporates the concepts of static feedforward-feedback control. While this technique produces tighter control than feedback alone, it is not as successful as a controller based on gain scheduling. The gain-scheduling controller, a subclass of adaptive controllers, uses the oxygen uptake rate as an auxiliary variable to fine-tune the PID controller parameters. The control of oxygen tension in the bioreactor is used as a vehicle to convey the proposed ideas, analyses, and results. Simulation experiments indicate that significant improvement in controller performance can be achieved by both of the proposed approaches, even in the presence of measurement noise.
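The benefit of gain scheduling can be shown on a toy first-order plant whose gain drops mid-run, standing in for the shifting fermentation dynamics; an auxiliary measurement of the plant gain (playing the role of the oxygen uptake rate) re-tunes the PI gain. All numbers here are illustrative, not from the paper.

```python
def run(schedule_gains, steps=4000, dt=0.01):
    """Simulate a PI loop on a first-order plant whose gain falls 4x at
    mid-run, together with a setpoint change. Returns the integrated
    absolute error (IAE); lower is better."""
    y, integ, iae = 0.0, 0.0, 0.0
    for k in range(steps):
        Kp = 1.0 if k < steps // 2 else 0.25     # plant gain shift
        sp = 1.0 if k < steps // 2 else 2.0      # setpoint step after shift
        Kc = 2.0 / Kp if schedule_gains else 2.0 # scheduled vs fixed gain
        e = sp - y
        integ += e * dt
        u = Kc * (e + integ / 0.5)               # PI with reset time 0.5
        y += dt * (-y + Kp * u)                  # first-order plant (Euler)
        iae += abs(e) * dt
    return iae
```

Scheduling keeps the loop gain constant at 2 across the shift, so the response stays fast; the fixed-gain loop becomes sluggish once the plant gain falls, accumulating more error.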

13.
Methods of least squares and SIRT in reconstruction.
In this paper we show that a particular version of the Simultaneous Iterative Reconstruction Technique (SIRT) proposed by Gilbert in 1972 strongly resembles the Richardson least-squares algorithm. By adopting the adjustable parameters of the general Richardson algorithm, we have been able to produce generalized SIRT algorithms with improved convergence. A particular generalization of the SIRT algorithm, GSIRT, has an adjustable parameter σ and the starting picture ρ0 as input. A value of 1/2 for σ and a weighted back-projection for ρ0 produce a stable algorithm. We call the SIRT-like algorithms for the solution of the weighted least-squares problems LSIRT and present two such algorithms, LSIRT1 and LSIRT2, which have definite computational advantages over SIRT and GSIRT. We have tested these methods on mathematically simulated phantoms and find that the new SIRT methods converge faster than Gilbert's SIRT but are more sensitive to noise present in the data. However, the faster convergence rates allow termination before the noise contribution degrades the reconstructed image excessively.
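The resemblance to the Richardson iteration is visible when SIRT is written as a preconditioned residual update with the classical row/column-sum normalisations (a small dense sketch; real reconstructions use large sparse projection matrices):

```python
import numpy as np

def sirt(A, b, iters=300):
    """SIRT as a Richardson-style iteration:
    x <- x + C A^T R (b - A x), with R and C the inverse row and column
    sums of A. Assumes strictly positive entries in A."""
    R = 1.0 / A.sum(axis=1)        # inverse row sums
    C = 1.0 / A.sum(axis=0)        # inverse column sums
    x = np.zeros(A.shape[1])       # flat starting picture
    for _ in range(iters):
        x += C * (A.T @ (R * (b - A @ x)))
    return x
```

Replacing the fixed normalisations with the adjustable Richardson parameters is precisely the generalization step that yields GSIRT and the faster LSIRT variants.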

14.
15.
A mathematical model based on difference equations is presented to show that minute chiral perturbations are sufficient for spontaneous breaking of L,D symmetry in nonlinear autocatalytic reactions. The effect of noise on the rate constants is analysed, and it is noted that, below a critical noise level, the influence of the chiral perturbation results in selection of the biased isomer with certainty.
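A toy difference-equation model illustrates the mechanism: two autocatalytic isomers compete for a shared substrate, with a minute bias eps favouring L. This is an illustrative sketch with made-up rate constants, not the paper's exact equations.

```python
def evolve(eps, steps=5000, r=0.01):
    """Two autocatalytic isomers L and D grow on a shared substrate s;
    eps is a minute chiral bias on L's rate constant."""
    L = D = 0.01
    for _ in range(steps):
        s = 1.0 - L - D            # remaining substrate
        L += r * (1.0 + eps) * L * s
        D += r * D * s
    return L, D
```

While substrate remains, L's relative advantage compounds every step, so even a tiny eps leaves L strictly ahead of D, whereas eps = 0 keeps the populations exactly symmetric.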

16.
MOTIVATION: The most commonly utilized microarrays for mRNA profiling (Affymetrix) include 'probe sets' of a series of perfect match and mismatch probes (typically 22 oligonucleotides per probe set). There are an increasing number of reported 'probe set algorithms' that differ in their interpretation of a probe set to derive a single normalized 'signal' representative of expression of each mRNA. These algorithms are known to differ in accuracy and sensitivity, and optimization has been done using a small set of standardized control microarray data. We hypothesized that different mRNA profiling projects have varying sources and degrees of confounding noise, and that these should alter the choice of a specific probe set algorithm. Also, we hypothesized that use of the Microarray Suite (MAS) 5.0 probe set detection p-value as a weighting function would improve the performance of all probe set algorithms. RESULTS: We built an interactive visual analysis software tool (HCE2W) to test and define parameters in Affymetrix analyses that optimize the ratio of signal (desired biological variable) versus noise (confounding uncontrolled variables). Five probe set algorithms were studied with and without statistical weighting of probe sets using the MAS 5.0 probe set detection p-values. The signal-to-noise ratio optimization method was tested in two large novel microarray datasets with different levels of confounding noise, a 105 sample U133A human muscle biopsy dataset (11 groups: mutation-defined, extensive noise), and a 40 sample U74A inbred mouse lung dataset (8 groups: little noise). Performance was measured by the ability of the specific probe set algorithm, with and without detection p-value weighting, to cluster samples into the appropriate biological groups (unsupervised agglomerative clustering with F-measure values). 
Of the total random sampling analyses, 50% showed a highly statistically significant difference between probe set algorithms by ANOVA [F(4,10) > 14, p < 0.0001], with weighting by MAS 5.0 detection p-value showing significance in the mouse data by ANOVA [F(1,10) > 9, p < 0.013] and paired t-test [t(9) = -3.675, p = 0.005]. Probe set detection p-value weighting had the greatest positive effect on performance of dChip difference model, ProbeProfiler and RMA algorithms. Importantly, probe set algorithms did indeed perform differently depending on the specific project, most probably due to the degree of confounding noise. Our data indicate that significantly improved data analysis of mRNA profile projects can be achieved by optimizing the choice of probe set algorithm with the noise levels intrinsic to a project, with dChip difference model with MAS 5.0 detection p-value continuous weighting showing the best overall performance in both projects. Furthermore, both existing and newly developed probe set algorithms should incorporate a detection p-value weighting to improve performance. AVAILABILITY: The Hierarchical Clustering Explorer 2.0 is available at http://www.cs.umd.edu/hcil/hce/ Murine arrays (40 samples) are publicly available at the PEPR resource (http://microarray.cnmcresearch.org/pgadatatable.asp http://pepr.cnmcresearch.org Chen et al., 2004).

17.
Multiple-dose factorial designs may provide confirmatory evidence that (fixed) combination drugs are superior to either component drug alone. Moreover, a useful and safe range of dose combinations may be identified. In our study, we focus on (A) adjustments of the overall significance level made necessary by multiple testing, (B) improvement of conventional statistical methods with respect to power, distributional assumptions and dimensionality, and (C) construction of corresponding simultaneous confidence intervals. We propose novel resampling algorithms, which in a simple way take the correlation of multiple test statistics into account, thus improving power. Moreover, these algorithms can easily be extended to combinations of more than two component drugs and binary outcome data. Published data summaries from a blood pressure reduction trial are analysed and presented as a worked example. An implementation of the proposed methods is available online as an R package.
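The core resampling idea, adjusting each test against the permutation distribution of the maximum statistic so that correlated endpoints are handled less conservatively than Bonferroni, can be sketched as a Westfall-Young max-T style procedure (a generic sketch using a difference-of-means statistic for brevity, not the paper's specific algorithm):

```python
import numpy as np

def maxT_adjusted_pvalues(X, y, n_perm=1000, seed=0):
    """Max-T style multiplicity adjustment: each endpoint's observed
    statistic is compared with the permutation distribution of the
    maximum statistic over all endpoints, which automatically accounts
    for their correlation. X: (n, m) data, y: 0/1 group labels."""
    rng = np.random.default_rng(seed)

    def stats(labels):
        a, b = X[labels == 0], X[labels == 1]
        return np.abs(a.mean(axis=0) - b.mean(axis=0))

    obs = stats(y)
    maxes = np.empty(n_perm)
    for i in range(n_perm):
        maxes[i] = stats(rng.permutation(y)).max()   # null max over endpoints
    return np.array([(maxes >= t).mean() for t in obs])
```

Because the null distribution is that of the maximum over correlated endpoints, strongly correlated tests cost far less in adjustment than under a Bonferroni correction.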

18.
Summary Lateralization of interaural time difference by barn owls (Tyto alba) was studied in a dichotic masking experiment. Sound bursts consisted of two parts: binaurally time-shifted noise, termed the probe, was inserted between masking noise. The owls indicated that they detected and lateralized the time-shift in the probe by a head turn in the direction predicted from the sign of the time-shift. The general characteristics of head turns in response to this stimulus were similar to those of head turns elicited by free-field stimulation or by presentation of the probe alone. The owls could easily lateralize stimuli containing long probes. The number of correct turns decreased as probe duration decreased, demonstrating that the masking noise interfered with the owls' ability to lateralize the probe. The minimal probe duration that the animals could lateralize (minimal duration) became shorter as burst duration decreased. Minimal durations ranged from 1 ms to 15 ms for the two subjects, for burst durations from 10 to 100 ms. These findings suggested that owls possess a temporal window. A fitting procedure proposed by Moore et al. (1988) was used to determine the shape of the temporal window. The fitting procedure showed that the shape of the owls' binaural temporal window could be described by the same algorithms as the human monaural temporal window. Thus, the temporal window is composed of a short time constant that determines the central part of the window, and a longer time constant that determines the shape at the skirts of the window. Abbreviations: ERD, equivalent rectangular duration; ILD, interaural level difference; ITD, interaural time difference; RSE, relative signal energy; SNR, signal-to-noise ratio

19.
Molecular entities work in concert as a system and mediate phenotypic outcomes and disease states. There has been recent interest in modelling the associations between molecular entities from their observed expression profiles as networks using a battery of algorithms. These networks have proven to be useful abstractions of the underlying pathways and signalling mechanisms. Noise is ubiquitous in molecular data and can have a pronounced effect on the inferred network. Noise can be an outcome of several factors including: inherent stochastic mechanisms at the molecular level, variation in the abundance of molecules, heterogeneity, sensitivity of the biological assay or measurement artefacts prevalent especially in high-throughput settings. The present study investigates the impact of discrepancies in noise variance on pair-wise dependencies, conditional dependencies and constraint-based Bayesian network structure learning algorithms that incorporate conditional independence tests as a part of the learning process. Popular network motifs and fundamental connections, namely: (a) common-effect, (b) three-chain, and (c) coherent type-I feed-forward loop (FFL) are investigated. The choice of these elementary networks can be attributed to their prevalence across more complex networks. Analytical expressions elucidating the impact of discrepancies in noise variance on pairwise dependencies and conditional dependencies for special cases of these motifs are presented. Subsequently, the impact of noise on two popular constraint-based Bayesian network structure learning algorithms such as Grow-Shrink (GS) and Incremental Association Markov Blanket (IAMB) that implicitly incorporate tests for conditional independence is investigated. Finally, the impact of noise on networks inferred from publicly available single cell molecular expression profiles is investigated. 
While discrepancies in noise variance are overlooked in routine molecular network inference, the results presented clearly elucidate their non-trivial impact on the conclusions that in turn can challenge the biological significance of the findings. The analytical treatment and arguments presented are generic and not restricted to molecular data sets.
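The three-chain case is easy to reproduce numerically: in a chain X → Y → Z, X and Z are marginally dependent but conditionally independent given Y, and adding measurement noise to the conditioning variable Y restores a spurious conditional dependence, which is exactly what corrupts the conditional-independence tests inside GS and IAMB. A sketch with illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.standard_normal(n)
y = 0.8 * x + 0.6 * rng.standard_normal(n)   # three-chain: X -> Y -> Z
z = 0.8 * y + 0.6 * rng.standard_normal(n)

def pcorr(a, b, c):
    """Partial correlation of a and b given c, via residuals of linear fits."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

marginal = np.corrcoef(x, z)[0, 1]               # strong marginal dependence
clean = pcorr(x, z, y)                           # ~0: X and Z independent given Y
noisy = pcorr(x, z, y + rng.standard_normal(n))  # conditioning on a noisy Y
```

With noise-free Y the partial correlation vanishes and a conditional-independence test correctly removes the X-Z edge; conditioning on the noisy copy of Y leaves a substantial partial correlation, so the inferred structure changes even though the underlying system did not.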

20.
Iterative reconstruction algorithms are becoming increasingly important in electron tomography of biological samples. These algorithms, however, impose major computational demands. Parallelization must be employed to maintain acceptable running times. Graphics Processing Units (GPUs) have been demonstrated to be highly cost-effective for carrying out these computations with a high degree of parallelism. In a recent paper by Xu et al. (2010), a GPU implementation strategy was presented that obtains a speedup of an order of magnitude over a previously proposed GPU-based electron tomography implementation. In this technical note, we demonstrate that by making alternative design decisions in the GPU implementation, an additional speedup can be obtained, again of an order of magnitude. By carefully considering memory access locality when dividing the workload among blocks of threads, the GPU’s cache is used more efficiently, making more effective use of the available memory bandwidth.

