Similar Literature
20 similar documents were found.
1.

Background  

Dual-axis swallowing accelerometry has recently been proposed as a tool for non-invasive analysis of swallowing function. Although swallowing is known to be physiologically modifiable by the type of food or liquid (i.e., stimuli), the effects of stimuli on dual-axis accelerometry signals have never been thoroughly investigated. Thus, the objective of this study was to investigate stimulus effects on dual-axis accelerometry signal characteristics. Signals were acquired from 17 healthy participants while swallowing 4 different stimuli: water, nectar-thick and honey-thick apple juices, and a thin-liquid barium suspension. Two swallowing tasks were examined: discrete and sequential. A variety of features were extracted in the time and time-frequency domains after swallow segmentation and pre-processing. A separate Friedman test was conducted for each feature and for each swallowing task.
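A minimal sketch of the per-feature statistical comparison described above: one Friedman test per extracted feature across the four stimuli, treating the same participants as related samples. The feature names and values below are synthetic placeholders, not the study's data.

```python
# Hypothetical illustration: one Friedman test per extracted feature,
# comparing the four stimuli across the same 17 participants.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
stimuli = ["water", "nectar-thick", "honey-thick", "thin barium"]
features = ["duration", "variance", "wavelet_energy"]  # placeholder names

# feature_table[feature] is a (participants x stimuli) matrix of feature values
feature_table = {f: rng.normal(size=(17, len(stimuli))) for f in features}

for name, values in feature_table.items():
    # Friedman test: related samples (same participants) across stimuli
    stat, p = friedmanchisquare(*[values[:, j] for j in range(len(stimuli))])
    print(f"{name}: chi2={stat:.2f}, p={p:.3f}")
```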

2.

Background

Swallowing accelerometry has been suggested as a potential non-invasive tool for bedside dysphagia screening. Various vibratory signal features and complementary measurement modalities have been put forth in the literature for the potential discrimination between safe and unsafe swallowing. To date, automatic classification of swallowing accelerometry has exclusively involved a single axis of vibration, although a second axis is known to contain additional information about the nature of the swallow. Furthermore, the only published attempt at automatic classification in adult patients has been based on a small sample of swallowing vibrations.

Methods

In this paper, a large corpus of dual-axis accelerometric signals was collected from 30 older adults (aged 65.47 ± 13.4 years, 15 male) referred to videofluoroscopic examination on the suspicion of dysphagia. We invoked a reputation-based classifier combination to automatically categorize the dual-axis accelerometric signals into safe and unsafe swallows, as labeled via videofluoroscopic review. From these participants, a total of 224 swallowing samples were obtained, 164 of which were labeled as unsafe swallows (swallows where the bolus entered the airway) and 60 as safe swallows. Three separate support vector machine (SVM) classifiers and eight different features were selected for classification.

Results

With selected time, frequency and information theoretic features, the reputation-based algorithm distinguished between safe and unsafe swallowing with promising accuracy (80.48 ± 5.0%), high sensitivity (97.1 ± 2%) and modest specificity (64 ± 8.8%). Interpretation of the most discriminatory features revealed that in general, unsafe swallows had lower mean vibration amplitude and faster autocorrelation decay, suggestive of decreased hyoid excursion and compromised coordination, respectively. Further, owing to its performance-based weighting of component classifiers, the static reputation-based algorithm outperformed the democratic majority voting algorithm on this clinical data set.

Conclusion

Given its computational efficiency and high sensitivity, reputation-based classification of dual-axis accelerometry ought to be considered in future developments of a point-of-care swallow assessment where clinical informatics are desired.
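The exact reputation-based combination scheme is not specified in this abstract; the sketch below illustrates one plausible variant in which each of three SVM classifiers casts a vote weighted by its estimated accuracy ("reputation"). The features, labels, feature subsets and weighting rule are synthetic assumptions, not the paper's implementation.

```python
# Illustrative sketch of a reputation-weighted combination of SVM classifiers.
# The weighting scheme and features are assumptions, not the paper's exact method.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(224, 8))                  # 8 features per swallow (synthetic)
y = (rng.random(224) < 164 / 224).astype(int)  # 1 = unsafe, 0 = safe (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Three component SVMs, e.g. trained on different feature subsets
subsets = [slice(0, 3), slice(3, 6), slice(5, 8)]
classifiers, reputations = [], []
for s in subsets:
    clf = SVC(kernel="rbf").fit(X_tr[:, s], y_tr)
    classifiers.append(clf)
    # "Reputation" = held-out accuracy of the component classifier
    # (in practice this would come from a separate validation split)
    reputations.append(accuracy_score(y_te, clf.predict(X_te[:, s])))

def reputation_vote(x_row):
    votes = np.array([clf.predict(x_row[s].reshape(1, -1))[0]
                      for clf, s in zip(classifiers, subsets)])
    weights = np.array(reputations)
    # Weighted vote: each classifier contributes its reputation to its predicted class
    return int(np.sum(weights * votes) > np.sum(weights * (1 - votes)))

preds = np.array([reputation_vote(x) for x in X_te])
print("ensemble accuracy:", accuracy_score(y_te, preds))
```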

3.
The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results, an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off thresholds between false positives and false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.

4.
Microarray technology plays an important role in drawing useful biological conclusions by analyzing thousands of gene expressions simultaneously. In particular, image analysis is a key step in microarray analysis, and its accuracy strongly depends on segmentation. The pioneering works on clustering-based segmentation have shown that the k-means clustering algorithm and the moving k-means clustering algorithm are two commonly used methods in microarray image processing. However, they usually produce unsatisfactory results because real microarray images contain noise, artifacts and spots that vary in size, shape and contrast. To improve the segmentation accuracy, in this article we present a combined clustering-based segmentation approach that may be more reliable and able to segment spots automatically. First, the new method starts with a very simple but effective contrast enhancement operation to improve image quality. Then, an automatic gridding based on the maximum between-class variance is applied to separate the spots into independent areas. Next, within each spot region, moving k-means clustering is first conducted to separate the spot from the background, and the k-means and moving k-means algorithms are then combined for those spots for which the entire boundary is not obtained. Finally, a refinement step is used to correct false segmentations and to handle inseparable or missing spots. In addition, quantitative comparisons between the improved method and four other segmentation algorithms--edge detection, thresholding, k-means clustering and moving k-means clustering--are carried out on cDNA microarray images from six different data sets: 1) Stanford Microarray Database (SMD), 2) Gene Expression Omnibus (GEO), 3) Baylor College of Medicine (BCM), 4) Swiss Institute of Bioinformatics (SIB), 5) Joe DeRisi’s individual tiff files (DeRisi), and 6) University of California, San Francisco (UCSF). The experiments indicate that the improved approach is more robust and sensitive to weak spots. More importantly, it can obtain higher segmentation accuracy in the presence of noise, artifacts and weakly expressed spots compared with the other four methods.
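A minimal sketch of the clustering core of such a pipeline: k-means on pixel intensities within one (synthetic) spot region, with the brighter cluster taken as the spot. Gridding, moving k-means and the refinement step are omitted, so this is only an illustration, not the authors' full method.

```python
# Minimal sketch: segment one microarray spot region by k-means on pixel intensity
# (k = 2: foreground vs. background). Gridding, moving k-means and the refinement
# step of the paper are omitted; this only illustrates the clustering core.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic 24x24 spot region: bright disc on a noisy background
yy, xx = np.mgrid[0:24, 0:24]
spot = 100 + 80 * ((xx - 12) ** 2 + (yy - 12) ** 2 < 36) + rng.normal(0, 10, (24, 24))

pixels = spot.reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
labels = labels.reshape(spot.shape)

# Call the cluster with the higher mean intensity the "spot" (foreground)
fg = labels == np.argmax([spot[labels == k].mean() for k in (0, 1)])
print("foreground pixels:", int(fg.sum()), "mean fg intensity:", round(spot[fg].mean(), 1))
```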

5.
The availability of high-speed, two-dimensional (2-D) confocal microscopes and the expanding armamentarium of fluorescent probes present unprecedented opportunities and new challenges for studying the spatial and temporal dynamics of cellular processes. The need to remove subjectivity from the detection process, the difficulty the human eye has in detecting subtle changes in fluorescence in these 2-D images, and the large volume of data produced by these confocal microscopes call for algorithms that automatically mark the changes in fluorescence. These fluorescence signal changes are often subtle, so a statistical estimate of the likelihood that the detected signal is not noise is an integral part of the detection algorithm. This statistical estimation is fundamental to our new approach to detection; in earlier Ca(2+) spark detectors, this statistical assessment was incidental to detection. Importantly, the use of the statistical properties of the signal local to the spark, instead of over the whole image, reduces the false positive and false negative rates. We developed an automatic spark detection algorithm based on these principles and used it to detect sparks on an inhomogeneous background of transverse tubule-labeled rat ventricular cells. Because of the large region of the cell surveyed by the confocal microscope, we can detect a large enough number of sparks to measure the dynamic changes in spark frequency in individual cells. We also found, in contrast to earlier results, that cardiac sparks are spatially symmetric. This new approach puts the detection of fluorescent signals on a firm statistical foundation.
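A simplified illustration of the key principle, detection against local rather than global background statistics: a pixel is flagged only if it exceeds the mean of its neighbourhood by several local standard deviations. The window size, threshold and image are arbitrary choices, not the published detector.

```python
# Simplified illustration of spark candidate detection using *local* background
# statistics: a pixel is flagged only if it exceeds the mean of its local
# neighbourhood by several local standard deviations. Parameters are arbitrary.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
img = rng.normal(100, 5, size=(128, 128))          # inhomogeneous background
img[40:44, 60:64] += 30                            # one synthetic "spark"

local_mean = uniform_filter(img, size=15)
local_sq = uniform_filter(img ** 2, size=15)
local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))

z = (img - local_mean) / local_std                 # local z-score
candidates = z > 3.5                               # statistical criterion
print("flagged pixels:", int(candidates.sum()))
```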

6.
Many cellular proteins are multidomain proteins. Coupled domain–domain interactions in these multidomain proteins are important for the allosteric relay of signals in cellular signaling networks. We have initiated the application of neutron spin echo (NSE) spectroscopy to the study of nanoscale protein domain motions on submicrosecond time scales and nanometer length scales. Our NSE experiments reveal the activation of protein domain motions over a distance of more than 100 Å in the multidomain scaffolding protein NHERF1 upon binding to another protein, Ezrin. Such activation of nanoscale protein domain motions is correlated with the allosteric assembly of multi-protein complexes by NHERF1 and Ezrin. Here, we summarize the theoretical framework we have developed, which uses simple concepts from nonequilibrium statistical mechanics to interpret the NSE data and employs a mobility tensor to describe nanoscale protein domain motion. Extracting nanoscale protein domain motion from NSE data does not require elaborate molecular dynamics simulations, complex fits to rotational motion, or elastic network models. The approach is thus more robust than multiparameter techniques that require untestable assumptions. We also demonstrate that an experimental scheme of selective deuteration of a protein subunit in a complex can highlight and amplify specific domain dynamics against the abundant global translational and rotational motions of a protein. We expect NSE to provide a unique tool for determining nanoscale protein dynamics and for understanding protein functions, such as how signals are propagated in a protein over a long distance to a distal domain.

7.
Reliable statistical validation of peptide and protein identifications is a top priority in large-scale mass spectrometry based proteomics. PeptideProphet is one of the computational tools commonly used for assessing the statistical confidence in peptide assignments to tandem mass spectra obtained using database search programs such as SEQUEST, MASCOT, or X! TANDEM. We present two flexible methods, the variable component mixture model and the semiparametric mixture model, that remove the restrictive parametric assumptions in the mixture modeling approach of PeptideProphet. Using a control protein mixture data set generated on a linear ion trap Fourier transform (LTQ-FT) mass spectrometer, we demonstrate that both methods improve on parametric models in terms of the accuracy of probability estimates and the power to detect correct identifications while controlling the false discovery rate to the same degree. The statistical approaches presented here require that the data set contain a sufficient number of decoy (known to be incorrect) peptide identifications, which can be obtained using the target-decoy database search strategy.
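The mixture models themselves are beyond a short example, but the target-decoy requirement mentioned above can be illustrated directly: at any score threshold, the count of decoy hits estimates the count of incorrect target hits, giving a simple FDR estimate. The scores below are synthetic.

```python
# Sketch of target-decoy FDR estimation: at a given score threshold, the number of
# decoy hits estimates the number of incorrect target hits, so FDR ~ #decoys / #targets.
import numpy as np

rng = np.random.default_rng(4)
target_scores = np.concatenate([rng.normal(3, 1, 800),   # correct IDs (synthetic)
                                rng.normal(0, 1, 200)])   # incorrect IDs
decoy_scores = rng.normal(0, 1, 1000)                     # known-incorrect IDs

def fdr_at_threshold(t):
    n_target = np.sum(target_scores >= t)
    n_decoy = np.sum(decoy_scores >= t)
    return n_decoy / max(n_target, 1)

for t in (1.0, 1.5, 2.0, 2.5):
    print(f"threshold {t}: estimated FDR = {fdr_at_threshold(t):.3f}")
```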

8.
9.
The accuracy of joint torques calculated from inverse dynamics methods is strongly dependent upon errors in body segment motion profiles, which arise from two sources of noise: the motion capture system and movement artifacts of skin-mounted markers. The current study presents a method to increase the accuracy of estimated joint torques through optimization of the angular position data used to describe these segment motions. To compute these angular data, we formulated a constrained nonlinear optimization problem with a cost function that minimizes the difference between the known ground reaction forces (GRFs) and the GRF calculated via a top-down inverse dynamics solution. To evaluate this approach, we constructed idealized error-free reference movements (of squatting and lifting) that produced a set of known “true” motions and associated true joint torques and GRF. To simulate real-world inaccuracies in motion data, these true motions were perturbed by artificial noise. We then applied our approach to these noise-induced data to determine optimized motions and related joint torques. To evaluate the efficacy of the optimization approach compared to traditional (bottom-up or top-down) inverse dynamics approaches, we computed the root mean square error (RMSE) of the joint torques derived from each approach relative to the expected true joint torques. Compared to traditional approaches, the optimization approach reduced the RMSE by 54% to 79%; the average reduction with our method was 65%, whereas previous methods achieved an overall reduction of only 30%. These results suggest that significant improvement in the accuracy of joint torque calculations can be achieved using this approach.
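A toy version of the underlying idea, reduced to a single vertical point mass: noisy position data are adjusted so that the ground reaction force implied by inverse dynamics matches the known GRF, while staying close to the measured motion. The model, weights and noise level are illustrative assumptions, not the study's linked-segment formulation.

```python
# Toy illustration of the idea: adjust noisy position data so that the ground
# reaction force implied by (top-down) inverse dynamics matches the measured GRF.
# A single vertical point mass replaces the full linked-segment model.
import numpy as np
from scipy.optimize import minimize

m, g, dt = 70.0, 9.81, 0.01
t = np.arange(0, 1, dt)
true_pos = 1.0 + 0.05 * np.sin(2 * np.pi * t)               # "true" CoM height
true_acc = np.gradient(np.gradient(true_pos, dt), dt)
grf_measured = m * (true_acc + g)                            # known, error-free GRF

rng = np.random.default_rng(5)
noisy_pos = true_pos + rng.normal(0, 0.003, t.size)          # motion-capture noise

def cost(pos):
    acc = np.gradient(np.gradient(pos, dt), dt)
    grf_model = m * (acc + g)
    track = np.sum((pos - noisy_pos) ** 2)                   # stay near measured motion
    match = np.sum((grf_model - grf_measured) ** 2) / 1e6    # match measured GRF
    return track + match

res = minimize(cost, noisy_pos, method="L-BFGS-B")
rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("RMSE vs true, noisy    :", rmse(noisy_pos, true_pos))
print("RMSE vs true, optimized:", rmse(res.x, true_pos))
```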

10.
IRBM, 2020, 41(6): 304-315
Vascular segmentation is often required in medical image analysis for various imaging modalities. Despite the rich literature in the field, the proposed methods most often need adaptation to the particular investigation and may sometimes lack the desired accuracy in terms of true positive and false positive detection rates. This paper proposes a general method for vascular segmentation based on locally connected filtering applied in a multiresolution scheme. The filtering scheme performs progressive detection and removal of the vessels from the image relief at each resolution level, by combining directional 2D-3D locally connected filters (LCF). An important property of the LCF is that it preserves (positively contrasted) structures in the image if they are topologically connected with other similar structures in their local environment. Vessels, which appear as curvilinear structures, can be filtered out by an appropriate LCF set-up that minimally affects sheet-like structures. The implementation in a multiresolution framework allows dealing with different vessel sizes. The outcome of the proposed approach is illustrated on several applications, including lung, liver and coronary artery images. It is shown that, besides preserving high accuracy in detecting small vessels, the proposed technique is less sensitive to noise and to pathologies with a positive-contrast appearance in the images. The detection accuracy is compared with a previously developed approach on the 20-patient database from the VESSEL12 challenge.

11.
Intraluminal impedance, a nonradiological method for assessing bolus flow within the gut, may be suitable for investigating pharyngeal disorders. This study evaluated an impedance technique for the detection of pharyngeal bolus flow during swallowing. Patterns of pharyngoesophageal pressure and impedance were simultaneously recorded with videofluoroscopy in 10 healthy volunteers during swallowing of liquid, semisolid, and solid boluses. The timing of bolus head and tail passage recorded by fluoroscopy was correlated with the timing of impedance drop and recovery at each recording site. Bolus swallowing produced a drop in impedance from baseline followed by a recovery to at least 50% of baseline. The timing of the pharyngeal and esophageal impedance drop correlated with the timing of the arrival of the bolus head. In the pharynx, the timing of impedance recovery was delayed relative to the timing of clearance of the bolus tail. In contrast, in the upper esophageal sphincter (UES) and proximal esophagus, the timing of impedance recovery correlated well with the timing of clearance of the bolus tail. Impedance-based estimates of pharyngoesophageal bolus clearance time correlated with true pharyngoesophageal bolus clearance time. Patterns of intraluminal impedance recorded in the pharynx during bolus swallowing are therefore more complex than those in the esophagus. During swallowing, mucosal contact between the tongue base and posterior pharyngeal wall prolongs the duration of pharyngeal impedance drop, leading to overestimation of bolus tail timing. Therefore, we conclude that intraluminal impedance measurement does not accurately reflect the bolus transit in the pharynx but does accurately reflect bolus transit across the UES and below.
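A simple sketch of how bolus presence might be read from a single impedance channel along the lines described above: the drop below 50% of baseline marks arrival of the bolus head and the recovery back above 50% marks clearance, their difference giving an estimated clearance time. The trace, sampling rate and threshold are illustrative.

```python
# Illustrative reading of a single-channel impedance trace: the drop below 50% of
# baseline marks bolus arrival, recovery back above 50% marks clearance.
import numpy as np

fs = 100.0                                   # Hz (assumed sampling rate)
t = np.arange(0, 10, 1 / fs)
baseline = 1000.0                            # ohms (illustrative)
impedance = np.full(t.size, baseline)
impedance[(t > 3) & (t < 4.2)] = 300.0       # synthetic bolus transit

threshold = 0.5 * baseline
below = impedance < threshold
drop_idx = np.argmax(below)                             # first sample below threshold
recover_idx = drop_idx + np.argmax(~below[drop_idx:])   # first sample back above

print("bolus head arrival (s):", t[drop_idx])
print("impedance recovery (s):", t[recover_idx])
print("estimated clearance time (s):", t[recover_idx] - t[drop_idx])
```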

12.
Robust smooth segmentation approach for array CGH data analysis
MOTIVATION: Array comparative genomic hybridization (aCGH) provides a genome-wide technique to screen for copy number alteration. The existing segmentation approaches for analyzing aCGH data are based on modeling data as a series of discrete segments with unknown boundaries and unknown heights. Although the biological process of copy number alteration is discrete, in reality a variety of biological and experimental factors can cause the signal to deviate from a stepwise function. To take this into account, we propose a smooth segmentation (smoothseg) approach. METHODS: To achieve a robust segmentation, we use a doubly heavy-tailed random-effect model. The first heavy-tailed structure on the errors deals with outliers in the observations, and the second deals with possible jumps in the underlying pattern associated with different segments. We develop a fast and reliable computational procedure based on the iterative weighted least-squares algorithm with band-limited matrix inversion. RESULTS: Using simulated and real data sets, we demonstrate how smoothseg can aid in identification of regions with genomic alteration and in classification of samples. For the real data sets, smoothseg leads to a smaller false discovery rate and classification error rate than the circular binary segmentation (CBS) algorithm. In a realistic simulation setting, smoothseg is better than wavelet smoothing and CBS in identification of regions with genomic alterations and better than CBS in classification of samples. For comparative analyses, we demonstrate that segmenting the t-statistics performs better than segmenting the data. AVAILABILITY: The R package smoothseg to perform smooth segmentation is available from http://www.meb.ki.se/~yudpaw.

13.
A dual-channel segmentation method for the EEG signal has been developed. The purpose was to divide the signals into segments according to information common to the two channels. The criterion for segmentation was based on changes in the cross-spectrum of the two signals. It has been shown theoretically, as well as by simulation studies and by analysis of real EEG data, that this method is sensitive to changes common to both channels, whereas segmentation does not occur as a result of changes in each channel separately.
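A rough sketch of cross-spectrum-driven segmentation of two channels: the cross-spectral density of each sliding window is compared with that of a reference window and a boundary is declared when the change exceeds a threshold. The distance measure and threshold are assumptions, not the authors' exact criterion.

```python
# Sketch of dual-channel segmentation driven by the cross-spectrum: compare the
# cross-spectral density of each sliding window with that of a reference window
# and declare a boundary when the difference exceeds a threshold.
import numpy as np
from scipy.signal import csd

fs, win = 256, 256                            # sampling rate and window length (samples)
rng = np.random.default_rng(6)
n = 10 * fs
common = np.sin(2 * np.pi * 10 * np.arange(n) / fs)
common[n // 2:] = np.sin(2 * np.pi * 4 * np.arange(n - n // 2) / fs)  # shared change
ch1 = common + 0.5 * rng.normal(size=n)
ch2 = common + 0.5 * rng.normal(size=n)

def cross_spec(a, b):
    f, pxy = csd(a, b, fs=fs, nperseg=win)
    return np.abs(pxy)

ref = cross_spec(ch1[:win], ch2[:win])
boundaries = []
for start in range(win, n - win, win):
    cur = cross_spec(ch1[start:start + win], ch2[start:start + win])
    dist = np.sum((cur - ref) ** 2) / np.sum(ref ** 2)   # normalised spectral change
    if dist > 1.0:                                       # illustrative threshold
        boundaries.append(start / fs)
        ref = cur                                        # start a new segment
print("segment boundaries (s):", boundaries)
```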

14.
Time-series modelling techniques are powerful tools for studying temporal scaling structures and dynamics present in ecological and other complex systems and are gaining popularity for assessing resilience quantitatively. Among other methods, canonical ordinations based on redundancy analysis are increasingly used for determining temporal scaling patterns that are inherent in ecological data. However, modelling outcomes, and thus inference about ecological dynamics and resilience, may vary depending on the approaches used. In this study, we compare the statistical performance, logical consistency and information content of two approaches: (i) asymmetric eigenvector maps (AEM) that account for linear trends and (ii) symmetric distance-based Moran's eigenvector maps (MEM), which require detrending of raw data to remove linear trends prior to analysis. Our comparison is done using long-term water quality data (25 years) from three Swedish lakes. This data set therefore provides the opportunity for assessing how the modelling approach used affects performance and inference in time-series modelling. We found that AEM models had consistently more explanatory power than MEM, and in two out of three lakes AEM extracted one more temporal scale than MEM. The scale-specific patterns detected by AEM and MEM were uncorrelated. Also, the individual water quality variables explaining these patterns differed between methods, suggesting that inferences about system dynamics depend on the modelling approach. These findings suggest that AEM might be more suitable than MEM for assessing dynamics in time-series analysis when temporal trends are relevant. The AEM approach is logically consistent with temporal autocorrelation, in which earlier conditions can influence later conditions but not vice versa. The symmetric MEM approach, which ignores the asymmetric nature of time, might be suitable for addressing specific questions about the importance of correlations in fluctuation patterns where there are no confounding elements of linear trends or a need to assess causality.

15.
In many applications of signal processing, especially in communications and biomedicine, preprocessing is necessary to remove noise from data recorded by multiple sensors. Typically, each sensor or electrode measures a noisy mixture of the original source signals. In this paper a noise reduction technique using independent component analysis (ICA) and subspace filtering is presented. In this approach we apply subspace filtering not to the observed raw data but to a demixed version of these data obtained by ICA. Finite impulse response filters are employed, whose coefficient vectors are estimated based on signal subspace extraction. ICA allows us to filter the independent components. After the noise is removed, we reconstruct the enhanced independent components to obtain clean original signals; i.e., we project the data back to the sensor level. Simulations as well as real application results for EEG-signal noise elimination are included to show the validity and effectiveness of the proposed approach.
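A minimal sketch of the general workflow: unmix the multichannel recording with ICA, suppress components judged to be noise, and project back to the sensor level. Here the subspace/FIR filtering of each component is simplified to discarding low-kurtosis components, so this only outlines the idea, not the paper's filter design.

```python
# Minimal sketch of ICA-based denoising: unmix the multichannel recording,
# suppress components judged to be noise, and project back to sensor space.
# (The paper filters each component with subspace-derived FIR filters; here the
# "filtering" is reduced to discarding low-kurtosis components for brevity.)
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n = 2000
s1 = np.sign(np.sin(2 * np.pi * 3 * np.arange(n) / 250))   # source 1
s2 = np.sin(2 * np.pi * 11 * np.arange(n) / 250)           # source 2
noise = rng.normal(size=(n, 3))
X = np.c_[s1, s2, rng.normal(size=n)] @ rng.normal(size=(3, 3)) + 0.2 * noise

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                     # estimated independent components

keep = np.abs(kurtosis(S, axis=0)) > 0.5     # crude "signal vs. noise" rule
S_clean = S * keep                           # zero-out noise-like components
X_clean = ica.inverse_transform(S_clean)     # back to sensor level
print("kept components:", int(keep.sum()), "of", S.shape[1])
```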

16.
The ability to measure gene expression on a genome-wide scale is one of the most promising accomplishments in molecular biology. Microarrays, the technology that first permitted this, were riddled with problems due to unwanted sources of variability. Many of these problems are now mitigated, after a decade's worth of statistical methodology development. The recently developed RNA sequencing (RNA-seq) technology has generated much excitement, in part due to claims of reduced variability in comparison to microarrays. However, we show that RNA-seq data demonstrate unwanted and obscuring variability similar to what was first observed in microarrays. In particular, we find that guanine-cytosine content (GC-content) has a strong sample-specific effect on gene expression measurements that, if left uncorrected, leads to false positives in downstream results. We also report on commonly observed data distortions that demonstrate the need for data normalization. Here, we describe a statistical methodology that improves precision by 42% without loss of accuracy. Our resulting conditional quantile normalization algorithm combines robust generalized regression, to remove systematic bias introduced by deterministic features such as GC-content, with quantile normalization, to correct for global distortions.
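A compact approximation of the idea (not the published cqn algorithm): per sample, remove the GC-content trend with a robust regression, then quantile-normalize across samples. The data, effect sizes and the choice of Huber regression are illustrative assumptions.

```python
# Compact approximation of the idea behind conditional quantile normalization:
# remove each sample's GC-content effect by robust regression, then quantile-
# normalize across samples. (The published cqn method is more elaborate.)
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(8)
n_genes, n_samples = 5000, 4
gc = rng.uniform(0.3, 0.7, n_genes)
true_expr = rng.normal(5, 1, n_genes)
# Sample-specific GC effect (the distortion we want to remove)
gc_effect = np.array([2.0, -1.0, 0.5, 3.0])
Y = true_expr[:, None] + gc[:, None] * gc_effect[None, :] + rng.normal(0, 0.3, (n_genes, n_samples))

# 1) Remove the per-sample GC trend with a robust fit
Y_adj = np.empty_like(Y)
for j in range(n_samples):
    fit = HuberRegressor().fit(gc.reshape(-1, 1), Y[:, j])
    Y_adj[:, j] = Y[:, j] - fit.predict(gc.reshape(-1, 1)) + Y[:, j].mean()

# 2) Quantile-normalize: force every sample onto the mean quantile distribution
ranks = np.argsort(np.argsort(Y_adj, axis=0), axis=0)
mean_quantiles = np.sort(Y_adj, axis=0).mean(axis=1)
Y_norm = mean_quantiles[ranks]
print("per-sample GC correlation after correction:",
      np.round([np.corrcoef(gc, Y_norm[:, j])[0, 1] for j in range(n_samples)], 3))
```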

17.
Automated gray matter segmentation of magnetic resonance imaging data is essential for morphometric analyses of the brain, particularly when large sample sizes are investigated. However, although detection of small structural brain differences may fundamentally depend on the method used, both the accuracy and the reliability of different automated segmentation algorithms have rarely been compared. Here, the performance of the segmentation algorithms provided by SPM8, VBM8, FSL and FreeSurfer was quantified on simulated and real magnetic resonance imaging data. First, accuracy was assessed by comparing segmentations of 20 simulated and 18 real T1 images with corresponding ground truth images. Second, reliability was determined in ten T1 images from the same subject and in ten T1 images of different subjects scanned twice. Third, the impact of preprocessing steps on segmentation accuracy was investigated. VBM8 showed very high accuracy and very high reliability. FSL achieved the highest accuracy but demonstrated poor reliability, and FreeSurfer showed the lowest accuracy but high reliability. A universally valid recommendation on how to implement morphometric analyses is not warranted due to the vast number of scanning and analysis parameters. However, our analysis suggests that researchers can optimize their individual processing procedures with respect to final segmentation quality, and it exemplifies adequate performance criteria.
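Segmentation accuracy against a ground-truth image is commonly summarized with the Dice overlap; the abstract does not state the exact metric used, so the sketch below is only a generic illustration on synthetic binary masks.

```python
# Dice overlap between an automated gray-matter mask and a ground-truth mask,
# a common accuracy summary for segmentation comparisons (illustrative masks).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(9)
truth = rng.random((64, 64, 64)) > 0.7
predicted = truth.copy()
flip = rng.random(truth.shape) < 0.02         # simulate 2% disagreement
predicted[flip] = ~predicted[flip]
print(f"Dice = {dice(truth, predicted):.3f}")
```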

18.
MOTIVATION: The power of microarray analyses to detect differential gene expression strongly depends on the statistical and bioinformatical approaches used for data analysis. Moreover, the simultaneous testing of tens of thousands of genes for differential expression raises the 'multiple testing problem', increasing the probability of obtaining false positive test results. To achieve more reliable results, it is therefore necessary to apply adjustment procedures to restrict the family-wise type I error rate (FWE) or the false discovery rate. However, for the biologist the statistical power of such procedures often remains abstract, unless validated by an alternative experimental approach. RESULTS: In the present study, we discuss a multiplicity adjustment procedure applied to classical univariate as well as to recently proposed multivariate gene-expression scores. All procedures strictly control the FWE. We demonstrate that the use of multivariate scores leads to a more efficient identification of differentially expressed genes than the widely used MAS5 approach provided by the Affymetrix software tools (Affymetrix Microarray Suite 5 or GeneChip Operating Software). The practical importance of this finding is successfully validated using real-time quantitative PCR and data from spike-in experiments. AVAILABILITY: The R code of the statistical routines can be obtained from the corresponding author. CONTACT: Schuster@imise.uni-leipzig.de
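The multivariate scores are specific to the paper, but the generic FWE-controlling adjustment step can be illustrated with Holm's step-down procedure on synthetic per-gene p-values (an assumption; the paper's exact adjustment procedure is not given in this abstract).

```python
# Generic family-wise error rate control applied to per-gene p-values
# (Holm's step-down procedure; the paper's multivariate scores are not reproduced).
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(10)
# 10,000 genes: 9,900 null p-values, 100 genes with a real effect
p_null = rng.uniform(size=9900)
p_alt = rng.beta(0.5, 20, size=100)          # concentrated near zero
p = np.concatenate([p_null, p_alt])

reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="holm")
print("significant after FWE control:", int(reject.sum()))
print("uncorrected 'hits' at 0.05   :", int((p < 0.05).sum()))
```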

19.
Aim Various methods are employed to recover patterns of area relationships in extinct and extant clades. The fidelity of these patterns can be adversely affected by sampling error in the form of missing data. Here we use simulation studies to evaluate the sensitivity of an analytical biogeographical method, namely tree reconciliation analysis (TRA), to this form of sampling failure. Location Simulation study. Methods To approximate varying degrees of taxonomic sampling failure within phylogenies varying in size and in redundancy of biogeographical signal, we applied sequential pruning protocols to artificial taxon–area cladograms displaying congruent patterns of area relationships. Initial trials assumed equal probability of sampling failure among all areas. Additional trials assigned weighted probabilities to each of the areas in order to explore the effects of uneven geographical sampling. Pruned taxon–area cladograms were then analysed with TRA to determine if the optimal area cladograms recovered match the original biogeographical signal, or if they represent false, ambiguous or uninformative signals. Results The results indicate a period of consistently accurate recovery of the true biogeographical signal, followed by a nonlinear decrease in signal recovery as more taxa are pruned. At high levels of sampling failure, false biogeographical signals are more likely to be recovered than the true signal. However, randomization testing for statistical significance greatly decreases the chance of accepting false signals. The primary inflection of the signal recovery curve, and its steepness and slope, depend upon taxon–area cladogram size and area redundancy, as well as on the evenness of sampling. Uneven sampling across geographical areas is found to have serious deleterious effects on TRA, with the accuracy of recovery of biogeographical signal varying by an order of magnitude or more across different sampling regimes. Main conclusions These simulations reiterate the importance of taxon sampling in biogeographical analysis, and attest to the importance of considering geographical, as well as overall, sampling failure when interpreting the robustness of biogeographical signals. In addition to randomization testing for significance, we suggest the use of randomized sequential taxon deletions and the construction of signal decay curves as a means to assess the robustness of biogeographical signals for empirical data sets.

20.
Obtaining reliable results from life-cycle assessment studies is often quite difficult because life-cycle inventory (LCI) data are usually erroneous, incomplete, and even physically meaningless. The real data must satisfy the laws of thermodynamics, so the quality of LCI data may be enhanced by adjusting them to satisfy these laws. This is not a new idea, but a formal thermodynamically sound and statistically rigorous approach for accomplishing this task is not yet available. This article proposes such an approach based on methods for data rectification developed in process systems engineering. This approach exploits redundancy in the available data and models and solves a constrained optimization problem to remove random errors and estimate some missing values. The quality of the results and presence of gross errors are determined by statistical tests on the constraints and measurements. The accuracy of the rectified data is strongly dependent on the accuracy and completeness of the available models, which should capture information such as the life-cycle network, stream compositions, and reactions. Such models are often not provided in LCI databases, so the proposed approach tackles many new challenges that are not encountered in process data rectification. An iterative approach is developed that relies on increasingly detailed information about the life-cycle processes from the user. A comprehensive application of the method to the chlor-alkali inventory being compiled by the National Renewable Energy Laboratory demonstrates the benefits and challenges of this approach.
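A minimal sketch of the rectification idea: adjust measured flows as little as possible, weighted by their measurement variances, subject to linear balance constraints. The closed-form equality-constrained weighted least-squares solution below uses a made-up three-stream balance, not the chlor-alkali inventory or the authors' iterative workflow.

```python
# Minimal data rectification: find adjusted flows x that satisfy linear balance
# constraints A x = 0 while staying close to the measurements y, weighted by
# measurement variance. Closed-form solution of the equality-constrained
# weighted least-squares problem (illustrative 3-stream balance).
import numpy as np

# One process node: stream1 + stream2 - stream3 = 0 (mass balance)
A = np.array([[1.0, 1.0, -1.0]])
y = np.array([10.2, 5.1, 14.6])              # raw, inconsistent measurements
sigma = np.array([0.2, 0.1, 0.3])            # measurement standard deviations
V = np.diag(sigma ** 2)

# Rectified estimate: x = y - V A^T (A V A^T)^{-1} (A y)
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
x = y - correction
print("rectified flows :", np.round(x, 3))
print("balance residual:", (A @ x)[0])       # ~0 after rectification
```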

