Similar articles
20 similar articles found (search time: 31 ms)
1.
Summary: A data processing method is described which reduces the effects of t1 noise artifacts and improves the presentation of 2D NMR spectral data. A t1 noise profile is produced by measuring the average noise in each column. This profile is then used to determine weighting coefficients for a sliding weighted smoothing filter that is applied to each row, such that the amount of smoothing each point receives is proportional to both its estimated t1 noise level and the level of t1 noise of neighbouring points. Thus, points in the worst t1 noise bands receive the greatest smoothing, whereas points in low-noise regions remain relatively unaffected. In addition, weighted smoothing allows points in low-noise regions to influence neighbouring points in noisy regions. This method is also effective in reducing the noise artifacts associated with the solvent resonance in spectra of biopolymers in aqueous solution. Although developed primarily to improve the quality of 2D NMR spectra of biopolymers prior to automated analysis, this approach should enhance processing of spectra of a wide range of compounds and can be used whenever noise occurs in discrete bands in one dimension of a multi-dimensional spectrum.
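The weighted-smoothing idea can be sketched in a few lines. This is a minimal numpy illustration, not the authors' exact filter: a per-column noise profile (here a median absolute deviation) sets how strongly each point is blended with a running average along its row, so noisy t1 bands are smoothed hard while clean columns pass through almost unchanged. The function name and its parameters are hypothetical.

```python
import numpy as np

def t1_noise_smooth(spectrum, window=5, strength=1.0):
    """Blend each row with its moving average, weighted by a per-column
    t1 noise profile so that noisy columns receive more smoothing."""
    # Estimate the t1 noise profile: median absolute deviation per column.
    profile = np.median(np.abs(spectrum - np.median(spectrum, axis=0)), axis=0)
    # Normalise to [0, 1] so the profile can act as a blending weight.
    w = strength * profile / (profile.max() + 1e-12)
    w = np.clip(w, 0.0, 1.0)
    # Moving-average smoothed copy of each row.
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, spectrum)
    # Noisy columns take the smoothed value; clean columns stay near the data.
    return (1.0 - w) * spectrum + w * smoothed
```

Points in low-noise columns still contribute to the moving average of neighbouring noisy columns, mirroring the influence described in the abstract.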

2.
Hidden Markov modeling (HMM) can be applied to extract single channel kinetics at signal-to-noise ratios that are too low for conventional analysis. There are two general HMM approaches: traditional Baum's reestimation and direct optimization. The optimization approach has the advantage that it optimizes the rate constants directly. This allows setting constraints on the rate constants, fitting multiple data sets across different experimental conditions, and handling nonstationary channels where the starting probability of the channel depends on the unknown kinetics. We present here an extension of this approach that addresses the additional issues of low-pass filtering and correlated noise. The filtering is modeled using a finite impulse response (FIR) filter applied to the underlying signal, and the noise correlation is accounted for using an autoregressive (AR) process. In addition to correlated background noise, the algorithm allows for excess open channel noise that can be white or correlated. To maximize the efficiency of the algorithm, we derive the analytical derivatives of the likelihood function with respect to all unknown model parameters. The search of the likelihood space is performed using a variable metric method. Extension of the algorithm to data containing multiple channels is described. Examples are presented that demonstrate the applicability and effectiveness of the algorithm. Practical issues such as the selection of appropriate noise AR orders are also discussed through examples.
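One ingredient of this approach, modeling correlated background noise as an autoregressive process, can be illustrated in isolation. A toy sketch (not the paper's likelihood machinery): fit an AR(1) coefficient by the order-1 Yule-Walker relation and invert the filter to whiten the noise.

```python
import numpy as np

def fit_ar1(x):
    """Estimate the lag-1 autoregression coefficient of a zero-mean
    series (Yule-Walker at order 1: a = r1 / r0)."""
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

def whiten_ar1(x, a):
    """Invert the AR(1) filter: e[t] = x[t] - a * x[t-1]."""
    return x[1:] - a * x[:-1]

rng = np.random.default_rng(1)
# Correlated background noise: x[t] = 0.7 * x[t-1] + e[t]
e = rng.normal(size=50_000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.7 * x[t - 1] + e[t]

a_hat = fit_ar1(x)        # close to the true coefficient 0.7
white = whiten_ar1(x, a_hat)  # approximately uncorrelated
```

Higher AR orders generalize this in the obvious way; selecting the order is the practical issue the abstract mentions.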

3.
Hui M, Li J, Wen X, Yao L, Long Z. PLoS ONE 2011, 6(12): e29274

Background

Independent Component Analysis (ICA) has been widely applied to the analysis of fMRI data. Accurate estimation of the number of independent components of fMRI data is critical to reduce over- or under-fitting. Although various methods based on Information Theoretic Criteria (ITC) have been used to estimate the intrinsic dimension of fMRI data, the relative performance of different ITC in the context of the ICA model hasn't been fully investigated, especially considering the properties of fMRI data. The present study explores and evaluates the performance of various ITC for fMRI data with varied white noise levels, colored noise levels, temporal data sizes and spatial smoothness degrees.

Methodology

Simulations and analyses of real fMRI data with varied Gaussian white noise levels, first-order autoregressive (AR(1)) noise levels, temporal data sizes and spatial smoothness degrees were carried out to explore and evaluate the performance of the different traditional ITC.

Principal Findings

Results indicate that the performance of the ITCs depends on the noise level, temporal data size and spatial smoothness of the fMRI data. 1) High white noise levels may lead to underestimation by all criteria, with MDL/BIC showing the most severe underestimation at higher Gaussian white noise levels. 2) Colored noise may result in overestimation, which is intensified by an increase in the AR(1) coefficient rather than in the SD of the AR(1) noise; MDL/BIC shows the least overestimation. 3) A larger temporal data size improves estimation under the white noise model but tends to cause more severe overestimation under the AR(1) noise model. 4) Spatial smoothing results in overestimation under both noise models.

Conclusions

1) No single ITC is perfect for all fMRI data, owing to the data's complicated noise structure. 2) If there is only white noise in the data, AIC is preferred when the noise level is high; otherwise, the Laplace approximation is a better choice. 3) When colored noise exists in the data, MDL/BIC outperforms the other criteria.
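The eigenvalue form of AIC and MDL used in this kind of order selection (the classical Wax-Kailath formulation) is compact enough to sketch. This illustrates the criteria themselves, not the study's full pipeline, and the Laplace approximation variant is omitted.

```python
import numpy as np

def itc_order(eigvals, n_samples):
    """Estimate the number of sources from covariance eigenvalues with
    AIC and MDL in the Wax & Kailath (1985) form; returns (k_aic, k_mdl)."""
    lam = np.sort(eigvals)[::-1]
    p = len(lam)
    aic = np.empty(p)
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                        # presumed noise eigenvalues
        g = np.exp(np.mean(np.log(tail)))     # geometric mean
        a = np.mean(tail)                     # arithmetic mean
        ll = -n_samples * (p - k) * np.log(g / a)
        aic[k] = 2 * ll + 2 * k * (2 * p - k)
        mdl[k] = ll + 0.5 * k * (2 * p - k) * np.log(n_samples)
    return int(np.argmin(aic)), int(np.argmin(mdl))
```

With white noise the trailing eigenvalues are nearly equal, so the geometric/arithmetic mean ratio approaches one beyond the true order; colored noise breaks that equality, which is one intuition for the overestimation reported above.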

4.
L L Lim, J Whitehead. Biometrics 1992, 48(1): 175-187
The distribution of ventilation-perfusion ratio over the lung is a useful indicator of the efficiency of lung function. Information about this distribution can be obtained by observing the retention in blood of inert gases passed through the lung. These retentions are related to the ventilation-perfusion distribution through an ill-posed integral equation. An unusual feature of this problem of estimating the ventilation-perfusion distribution is the small amount of data available; typically there are just six data points, as only six gases are used in the experiment. A nonparametric smoothing method is compared to a simpler method that models the distribution as a histogram with five classes. Results from the smoothing method are found to be very unstable. In contrast, the simpler method gives stable solutions with parameters that are physiologically meaningful. It is concluded that while such smoothing methods may be useful for solving some ill-posed integral equation problems, the simpler method is preferable when data are scarce.

5.
Chromatographed peptide signals form the basis of further data processing that eventually results in functional information derived from data-dependent bottom-up proteomics assays. We seek to rank LC/MS parent ions by the quality of their extracted ion chromatograms. Ranked extracted ion chromatograms act as an intuitive physical/chemical preselection filter to improve the quality of MS/MS fragment scans submitted for database search. We identify more than 4900 proteins when considering detector shifts of less than 7 ppm. High quality parent ions for which the database search yields no hits become candidates for subsequent unrestricted analysis for PTMs. Following this rational approach, we prioritize identification of more than 5000 spectrum matches from modified peptides and confirm the presence of acetaldehyde-modified His/Lys. We present a logical workflow that scores data-dependent selected ion chromatograms and leverages information about the semianalytical LC/LC dimension prior to MS. Our method can be successfully used to identify unexpected modifications in peptides with excellent chromatography characteristics, independent of fragmentation pattern and activation methods. We illustrate analysis of ion chromatograms detected in two different modes by RF linear ion trap and electrostatic field orbitrap.

6.

Background  

Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one of the proteomics quantification technologies. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for better or alternative MS-based proteomic quantification.
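The three traditional filters named above each reduce to a short convolution; a self-contained numpy sketch (window sizes and polynomial order are arbitrary illustration values, and the Savitzky-Golay coefficients are derived from a local least-squares polynomial fit):

```python
import numpy as np

def moving_average(y, window=11):
    k = np.ones(window) / window
    return np.convolve(y, k, mode="same")

def gaussian_filter(y, sigma=2.0):
    m = int(4 * sigma)
    t = np.arange(-m, m + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(y, k, mode="same")

def savitzky_golay(y, window=11, order=3):
    # Least-squares polynomial fit in each window, evaluated at the centre:
    # the fitted centre value is a fixed linear combination of the window.
    m = window // 2
    t = np.arange(-m, m + 1)
    A = t[:, None] ** np.arange(order + 1)
    c = np.linalg.pinv(A)[0]          # row giving the fitted value at t = 0
    return np.convolve(y, c[::-1], mode="same")
```

All three reduce white noise, but the moving average also flattens narrow peaks, which is the kind of distortion that motivates wavelet-based alternatives.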

7.
MOTIVATION: The accumulation of genomic alterations is an important process in tumor formation and progression. Comparative genomic hybridization performed on cDNA arrays (cDNA aCGH) is a common method to investigate genomic alterations on a genome-wide scale. However, when detecting low-level DNA copy number changes this technology requires the use of noise reduction strategies due to a low signal-to-noise ratio. RESULTS: Currently a running average smoothing filter is the most frequently used noise reduction strategy. We analyzed this strategy theoretically and experimentally and found that it is not sensitive to very low level genomic alterations. The presence of systematic errors in the data is one of the main reasons for this failure. We developed a novel algorithm which efficiently reduces systematic noise and allows for the detection of low-level genomic alterations. The algorithm is based on comparison of the biologically relevant data to data from so-called self-self hybridizations: additional experiments which contain no biological information but do contain systematic errors. We find that with our algorithm the effective resolution for +/-1 DNA copy number changes is about 2 Mb. For copy number changes larger than three, the effective resolution is at the level of single genes.
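The self-self idea can be sketched with synthetic data: estimate the probe-specific systematic profile from self-self hybridizations, subtract it, then smooth. All names and values below are illustrative, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_probes = 2000
# Systematic, probe-specific bias shared by all hybridizations on the platform.
systematic = 0.3 * np.sin(np.arange(n_probes) / 50.0)
# True low-level alteration: a single-copy gain over a small region (log2 ratio).
truth = np.zeros(n_probes)
truth[900:1000] = np.log2(3 / 2)

biological = truth + systematic + rng.normal(0, 0.25, n_probes)
# Self-self hybridizations carry no biology, only systematic error plus noise.
self_self = systematic + rng.normal(0, 0.25, (4, n_probes))

corrected = biological - self_self.mean(axis=0)

def running_average(y, w=25):
    return np.convolve(y, np.ones(w) / w, mode="same")

raw_est = running_average(biological)   # running average alone
cor_est = running_average(corrected)    # systematic noise removed first
```

Smoothing alone leaves the systematic profile intact, which is exactly why the abstract reports the running-average strategy failing on very low-level alterations.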

8.
Hidden Markov models have recently been used to model single ion channel currents as recorded with the patch clamp technique from cell membranes. The estimation of hidden Markov models parameters using the forward-backward and Baum-Welch algorithms can be performed at signal to noise ratios that are too low for conventional single channel kinetic analysis; however, the application of these algorithms relies on the assumptions that the background noise be white and that the underlying state transitions occur at discrete times. To address these issues, we present an "H-noise" algorithm that accounts for correlated background noise and the randomness of sampling relative to transitions. We also discuss three issues that arise in the practical application of the algorithm in analyzing single channel data. First, we describe a digital inverse filter that removes the effects of the analog antialiasing filter and yields a sharp frequency roll-off. This enhances the performance while reducing the computational intensity of the algorithm. Second, the data may be contaminated with baseline drifts or deterministic interferences such as 60-Hz pickup. We propose an extension of previous results to consider baseline drift. Finally, we describe the extension of the algorithm to multiple data sets.

9.
MOTIVATION: A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and the aggressiveness of the baseline correction, which contribute to making peak detection results inconsistent between runs, instrumentation and analysis methods. RESULTS: Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and a shape-matching function that yields a 'goodness of fit' coefficient should give a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. Transforming the spectrum into wavelet space simplifies the pattern-matching problem and additionally provides a powerful technique for identifying and separating the signal from spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients, can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, which improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions. Comparisons with two other popular algorithms were performed. The results show the CWT-based algorithm can identify both strong and weak peaks while keeping the false positive rate low.
AVAILABILITY: The algorithm is implemented in R and will be included as an open source module in the Bioconductor project.
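The core of a CWT-based peak picker can be caricatured in pure numpy: convolve with Ricker ("Mexican hat") wavelets at several scales and keep local maxima of the summed coefficients. The published algorithm tracks ridge lines across scales; this deliberately simplified sketch only illustrates the idea, and the threshold and scales are arbitrary.

```python
import numpy as np

def ricker(points, a):
    """Ricker wavelet sampled on `points` samples with scale `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    A = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt_peaks(y, scales=(2, 4, 8), rel_threshold=0.5):
    """Sum CWT coefficients across scales and keep prominent local maxima;
    a stand-in for proper ridge-line tracking across scales."""
    acc = np.zeros(len(y))
    for a in scales:
        w = ricker(min(10 * a + 1, len(y)), a)   # odd length keeps alignment
        acc += np.convolve(y, w, mode="same")
    thresh = rel_threshold * acc.max()
    return [i for i in range(1, len(y) - 1)
            if acc[i] > acc[i - 1] and acc[i] >= acc[i + 1] and acc[i] > thresh]
```

Because the Ricker wavelet integrates to zero, a slowly varying baseline contributes almost nothing to the coefficients, which is why no separate baseline removal is needed.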

10.
Quantile smoothing of array CGH data
MOTIVATION: Plots of array Comparative Genomic Hybridization (CGH) data often show special patterns: stretches of constant level (copy number) with sharp jumps between them. There can also be much noise. Classic smoothing algorithms do not work well, because they introduce too much rounding. To remedy this, we introduce a fast and effective smoothing algorithm based on penalized quantile regression. It can compute arbitrary quantile curves, but we concentrate on the median to show the trend and the lower and upper quartile curves showing the spread of the data. Two-fold cross-validation is used for optimizing the weight of the penalties. RESULTS: Simulated data and a published dataset are used to show the capabilities of the method to detect the segments of changed copy numbers in array CGH data.
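The penalized quantile regression objective is a linear program, so a faithful (if inefficient) small-scale version can be written with scipy's `linprog`. The dense matrices below are only sensible for short sequences; the paper's implementation is far more efficient.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_smooth(y, tau=0.5, lam=1.0):
    """Minimize sum rho_tau(y - z) + lam * sum |z[i+1] - z[i]| over z,
    with rho_tau the quantile check function, cast as a linear program."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    I = np.eye(n)
    D = np.diff(I, axis=0)                 # (n-1) x n first-difference matrix
    # Variables: [z, u+, u-, v+, v-]; y - z = u+ - u-, D z = v+ - v-.
    A_eq = np.block([
        [I, I, -I, np.zeros((n, 2 * (n - 1)))],
        [D, np.zeros((n - 1, 2 * n)), -np.eye(n - 1), np.eye(n - 1)],
    ])
    b_eq = np.concatenate([y, np.zeros(n - 1)])
    c = np.concatenate([np.zeros(n), tau * np.ones(n), (1 - tau) * np.ones(n),
                        lam * np.ones(2 * (n - 1))])
    bounds = [(None, None)] * n + [(0, None)] * (2 * n + 2 * (n - 1))
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:n]
```

The absolute-value penalty on differences is what preserves sharp copy-number jumps: unlike a quadratic penalty, it shrinks small wiggles to zero while leaving large jumps essentially intact.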

11.
12.
The fractal dimension of subsets of time series data can be used to modulate the extent of filtering to which the data is subjected. In general, such fractal filtering makes it possible to retain large transient shifts in baseline with very little decrease in amplitude, while the baseline noise itself is markedly reduced (Strahle, W.C. (1988) Electron. Lett. 24, 1248-1249). The fractal filter concept is readily applicable to single channel data in which there are numerous opening/closing events and flickering. Using a simple recursive filter of the form Yn = w*Yn-1 + (1-w)*Xn, where Xn is the data, Yn the filtered result, and w is a weighting factor with 0 < w < 1, we adjusted w as a function of the fractal dimension (D) for data subsets. Linear and ogive functions of D were used to modify w. Of these, the ogive function w = [1 + p(1.5-D)]^-1 (where p affects the amount of filtering) is most useful for removing extraneous noise while retaining opening/closing events.
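The recursive filter and the ogive weighting are fully specified in the abstract; the fractal dimension estimator is not, so the sketch below uses Katz's estimator as one simple choice (an assumption, not necessarily the paper's method). Segments that look like noise (D near 1.5) get w near 1 and heavy smoothing; smooth or step-like segments (D near 1) pass through with little filtering.

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a short waveform segment (unit sample
    spacing along the abscissa)."""
    L = np.sqrt(1.0 + np.diff(x) ** 2).sum()            # total path length
    i = np.arange(1, len(x))
    d = np.sqrt(i ** 2 + (x[1:] - x[0]) ** 2).max()     # max distance from start
    n = len(x) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def fractal_filter(x, p=4.0, window=16):
    """Y[n] = w*Y[n-1] + (1-w)*X[n] with the ogive w = 1/(1 + p*(1.5 - D)),
    D estimated on a trailing window and clipped to [1, 1.5]."""
    y = np.empty(len(x))
    y[0] = x[0]
    for n in range(1, len(x)):
        seg = x[max(0, n - window): n + 1]
        D = np.clip(katz_fd(seg), 1.0, 1.5) if len(seg) > 2 else 1.0
        w = 1.0 / (1.0 + p * (1.5 - D))
        y[n] = w * y[n - 1] + (1.0 - w) * x[n]
    return y
```

For a straight segment the path length equals the end-to-end distance, so D = 1 exactly; for noise the path length dominates and D rises toward 1.5.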

13.
De Cáceres M, Legendre P. Oecologia 2008, 156(3): 657-669
Beals smoothing is a multivariate transformation specially designed for species presence/absence community data containing noise and/or a lot of zeros. This transformation replaces the observed values of the target species by predictions of occurrence on the basis of its co-occurrences with the remaining species. In many applications, the transformed values are used as input for multivariate analyses. As Beals smoothing values provide a sense of "probability of occurrence", they have also been used for inference. However, this transformation can produce spurious results, and it must be used with caution. Here we study the statistical and ecological bases underlying the Beals smoothing function, and the factors that may affect the reliability of transformed values are explored using simulated data sets. Our simulations demonstrate that Beals predictions are unreliable for target species that are not related to the overall ecological structure. Furthermore, the presence of these "random" species may diminish the quality of Beals smoothing values for the remaining species. A statistical test is proposed to determine when observed values can be replaced with Beals smoothing predictions. Two real-data example applications are presented to illustrate the potentially false predictions of Beals smoothing and the necessary checking step performed by the new test. Electronic supplementary material: The online version of this article (doi:) contains supplementary material, which is available to authorized users.
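Beals smoothing itself has a standard closed form; a minimal numpy version (the simple variant that keeps the target species in the sum, without the reliability test the paper proposes):

```python
import numpy as np

def beals(X):
    """Beals smoothing for a sites x species presence/absence matrix:
    b[i, j] = (1 / S_i) * sum over species k present at site i of p[j, k],
    where p[j, k] = N_jk / N_k is the proportion of sites containing
    species k that also contain species j, and S_i is site i's richness."""
    X = np.asarray(X, dtype=float)
    M = X.T @ X                          # N_jk: joint occurrence counts
    Nk = np.diag(M).copy()               # N_k: occurrences of each species
    P = M / np.where(Nk > 0, Nk, 1)      # column k divided by N_k
    S = X.sum(axis=1)                    # species richness per site
    return (X @ P.T) / np.where(S > 0, S, 1)[:, None]
```

Each entry is an average of conditional co-occurrence frequencies, which is why the output reads as a pseudo-probability of occurrence; the paper's warning is precisely that this reading fails for species unrelated to the overall structure.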

14.
P M Sloot, P Tensen, C G Figdor. Cytometry 1987, 8(6): 545-551
Spectral decomposition of flow cytometric data files of arbitrary dimension reveals information about both the signal and the noise components that constitute the histograms. This spectral information is used to construct a low-pass digital filter, which removes the high-frequency noise from the actual data. It is shown that this procedure guarantees non-trivial smoothing of the flow cytometric data in accordance with the local experimental situation. As a consequence, optimal reconstruction of the signal is possible, which facilitates unambiguous interpretation of the data files and mathematical estimation of the statistical parameters.
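A stripped-down version of the idea: transform the histogram, zero the high-frequency bins, and invert. Choosing the cutoff from the measured noise spectrum, as the abstract describes, is replaced here by a fixed `keep` parameter.

```python
import numpy as np

def fft_lowpass(hist, keep):
    """Zero all Fourier coefficients at index >= `keep` and invert;
    a minimal stand-in for a data-driven low-pass filter."""
    F = np.fft.rfft(hist)
    F[keep:] = 0.0
    return np.fft.irfft(F, n=len(hist))
```

The smooth histogram shape concentrates its energy in the low-frequency bins, while counting noise is spread across all frequencies, so truncation removes mostly noise.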

15.
A new implementation of the surface Laplacian derivation (SLD) method is described which reconstructs a realistically shaped, local scalp surface geometry using measured electrode positions, generates a local spectral-interpolated potential distribution function, and estimates the surface Laplacian values through a local planar parametric space using a stable numerical method combining Taylor expansions with the least-squares technique. The implementation is modified for efficient repeated SLD operations on a time series. Examples are shown of applications to evoked potential data. The resolving power of the SLD is examined as a function of the spatial signal-to-noise ratio (SNR). The analysis suggests that the Laplacian is effective when the spatial SNR is greater than 3. It is shown that spatial low-pass filtering with a Gaussian filter can be used to reduce the effect of noise and recover useful signal if the noise is spatially incoherent.

16.
A data-smoothing filter has been developed that permits the improvement in accuracy of individual elements of a bivariate flow cytometry (FCM) histogram by making use of data from adjacent elements, a knowledge of the two-dimensional measurement system point spread function (PSF), and the local count density. For FCM data, the PSF is assumed to be a set of two-dimensional Gaussian functions with a constant coefficient of variation for each axis. A set of space variant smoothing kernels are developed from the basic PSF by adjusting the orthogonal standard deviations of each Gaussian smoothing kernel according to the local count density. This adjustment in kernel size matches the degree of smoothing to the local reliability of the data. When the count density is high, a small kernel is sufficient. When the density is low, however, a broader kernel should be used. The local count density is taken from a region defined by the measurement PSF. The smoothing algorithm permits the reduction in statistical fluctuations present in bivariate FCM histograms due to the low count densities often encountered in some elements. This reduction in high-frequency spatial noise aids in the visual interpretation of the data. Additionally, by making more efficient use of smaller samples, systematic errors due to system drift may be minimized.

17.
Tennis stroke mechanics have attracted considerable biomechanical analysis, yet current filtering practice may lead to erroneous reporting of data near the impact of racket and ball. This research had three aims: (1) to identify the best method of estimating the displacement and velocity of the racket at impact during the tennis serve, (2) to demonstrate the effect of different methods on upper limb kinematics and kinetics and (3) to report the effect of increased noise on the most appropriate treatment method. The tennis serves of one tennis player, fitted with upper limb and racket retro-reflective markers, were captured with a Vicon motion analysis system recording at 500 Hz. The raw racket tip marker displacement and velocity were used as criterion data to compare three different endpoint treatments and two different filters. The 2nd-order polynomial proved to be the least erroneous extrapolation technique and the quintic spline filter was the most appropriate filter. The previously performed "smoothing through impact" method, using a quintic spline filter, underestimated the racket velocity at the time of impact by 9.1%. The polynomial extrapolation method remained effective when noise was added to the marker trajectories.
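The winning treatment, fitting a 2nd-order polynomial to pre-impact frames and extrapolating to the impact frame, is easy to sketch. The sampled trajectory below is hypothetical (constant acceleration at the study's 500 Hz rate), not the study's data.

```python
import numpy as np

# Hypothetical racket-tip displacement under constant acceleration,
# x(t) = 0.5*a*t^2 + v0*t + x0, sampled at 500 Hz.
fs = 500.0
a, v0, x0 = -9.81, 12.0, 0.4
t = np.arange(0, 0.1, 1 / fs)            # 50 frames up to impact
x = 0.5 * a * t**2 + v0 * t + x0

impact = len(t) - 1                       # impact at the last frame
pre = slice(impact - 10, impact)          # fit only pre-impact frames
coef = np.polyfit(t[pre], x[pre], 2)      # 2nd-order polynomial fit
x_imp = np.polyval(coef, t[impact])               # extrapolated displacement
v_imp = np.polyval(np.polyder(coef), t[impact])   # extrapolated velocity
```

Because only pre-impact frames enter the fit, the abrupt change at ball contact cannot bleed backwards into the estimate, which is the failure mode of smoothing through impact.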

18.
AR model parameters and bispectrum estimation of lung sound signals based on higher-order cumulants
Based on the non-Gaussian stochastic character of lung sound signals, a non-Gaussian AR model of the alveolar system was established. Higher-order cumulant techniques were applied to perform parametric bispectrum estimation of the lung sound signals and to extract the characteristics of the lung sound source and the transfer function of the lung-chest system. Experimental results confirm that the lung sound source consists of non-Gaussian white noise, periodic pulse trains and intermittent random pulses; that the lung-chest system behaves as an acoustic low-pass filter; and that the bispectral structure of lung sounds differs markedly between pathological conditions. The method overcomes the shortcomings of power-spectrum analysis and classical bispectrum analysis of lung sound signals, and can provide more, and more objective, intrinsic information for the diagnosis of lung diseases.

19.
Sedimentation data acquired with the interference optical scanning system of the Optima XL-I analytical ultracentrifuge can exhibit time-invariant noise components, as well as small radial-invariant baseline offsets, both superimposed onto the radial fringe shift data resulting from the macromolecular solute distribution. A well-established method for the interpretation of such ultracentrifugation data is based on the analysis of time-differences of the measured fringe profiles, such as employed in the g(s*) method. We demonstrate how the technique of separation of linear and nonlinear parameters can be used in the modeling of interference data by unraveling the time-invariant and radial-invariant noise components. This allows the direct application of the recently developed approximate analytical and numerical solutions of the Lamm equation to the analysis of interference optical fringe profiles. The presented method is statistically advantageous since it does not require the differentiation of the data and the model functions. The method is demonstrated on experimental data and compared with the results of a g(s*) analysis. It is also demonstrated that the calculation of time-invariant noise components can be useful in the analysis of absorbance optical data. They can be extracted from data acquired during the approach to equilibrium, and can be used to increase the reliability of the results obtained from a sedimentation equilibrium analysis.

20.
Smoothing and differentiation of noisy data using spline functions requires the selection of an unknown smoothing parameter. The method of generalized cross-validation provides an excellent estimate of the smoothing parameter from the data itself even when the amount of noise associated with the data is unknown. In the present model only a single smoothing parameter must be obtained, but in a more general context the number may be larger. In an earlier work, smoothing of the data was accomplished by solving a minimization problem using the technique of dynamic programming. This paper shows how the computations required by generalized cross-validation can be performed as a simple extension of the dynamic programming formulas. The results of numerical experiments are also included.
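Generalized cross-validation for a single smoothing parameter can be illustrated with a discrete (Whittaker-style) linear smoother standing in for splines; GCV needs only the residuals and the trace of the hat matrix. The grid of candidate penalties is an arbitrary illustration choice.

```python
import numpy as np

def whittaker_gcv(y, lambdas=np.logspace(-1, 4, 30)):
    """Smoother z = (I + lam * D'D)^-1 y with D the second-difference
    matrix; lam is chosen by generalized cross-validation:
    GCV(lam) = n * ||y - z||^2 / (n - tr H)^2, H the hat matrix."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second differences
    best = (np.inf, None, None)
    for lam in lambdas:
        H = np.linalg.solve(np.eye(n) + lam * (D.T @ D), np.eye(n))
        z = H @ y
        df = np.trace(H)                        # effective degrees of freedom
        gcv = n * np.sum((y - z) ** 2) / (n - df) ** 2
        if gcv < best[0]:
            best = (gcv, lam, z)
    return best[1], best[2]
```

The trace term penalizes overly flexible fits, so GCV balances residual error against effective model complexity without requiring the noise level to be known, which is the property the abstract highlights.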


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号