Similar Articles
20 similar articles found (search time: 15 ms)
1.
Given growing interest in functional data analysis (FDA) as a useful method for analyzing human movement data, it is critical to understand the effects of standard FDA procedures, including registration, on biomechanical analyses. Registration is used to reduce phase variability between curves while preserving each individual curve's shape and amplitude. Three methods for assessing registration could benefit those in the biomechanics community who use FDA techniques: comparison of mean curves, comparison of average RMS values, and assessment of time-warping functions. The present study therefore has two purposes. First, it assesses whether registration is necessary for cyclical data after time normalization. Second, it illustrates the three methods for evaluating registration effects. Masticatory jaw movements of 22 healthy adults (2 males, 21 females) were tracked while subjects chewed a gum-based pellet for 20 s. Motion data were captured at 60 Hz with two gen-locked video cameras. Individual chewing cycles were time normalized and then transformed into functional observations. Registration did not affect the mean curves, and the warping functions were linear. Although registration decreased the RMS, indicating a decrease in inter-subject variability, the difference was not statistically significant. Together, these results indicate that registration may not always be necessary for cyclical chewing data. An important contribution of this paper is its illustration of three easily applied methods for evaluating registration, useful for judging whether the extra data manipulation is necessary.
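Time normalization, as used above, resamples each movement cycle onto a common 0-100% grid so that cycles of different durations become comparable before any registration step. A minimal numpy sketch (the 101-point grid and linear interpolation are illustrative choices, not details taken from the paper):

```python
import numpy as np

def time_normalize(cycle, n_points=101):
    """Resample one movement cycle onto a fixed 0-100% grid."""
    cycle = np.asarray(cycle, dtype=float)
    src = np.linspace(0.0, 1.0, len(cycle))   # original sample positions
    dst = np.linspace(0.0, 1.0, n_points)     # common percentage grid
    return np.interp(dst, src, cycle)

# Cycles of different durations map onto the same 101-point grid.
short_cycle = np.sin(np.linspace(0, 2 * np.pi, 53))   # e.g. a fast chew
long_cycle = np.sin(np.linspace(0, 2 * np.pi, 87))    # e.g. a slow chew
a = time_normalize(short_cycle)
b = time_normalize(long_cycle)
```

After this step all cycles share a common time base; registration, if applied, only needs to remove residual phase differences within the cycle.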

2.
Real-time functional magnetic resonance imaging (rtfMRI) is a recently emerged technique that demands fast data processing within a single repetition time (TR), such as a TR of 2 seconds. Data preprocessing in rtfMRI has rarely involved spatial normalization, which cannot be accomplished in such a short time period. However, spatial normalization may be critical for accurate functional localization in a stereotactic space and is an essential procedure for some emerging applications of rtfMRI. In this study, we introduced an online spatial normalization method that combines a novel affine registration (AFR) procedure, based on principal axes registration (PA) and Gauss-Newton (GN) optimization with a self-adaptive β parameter (termed PA-GN(β) AFR), with nonlinear registration (NLR) based on the discrete cosine transform (DCT). In AFR, PA provides an appropriate initial estimate that induces rapid convergence of GN. In addition, the β parameter, which depends on the rate of change of the cost function, self-adaptively adjusts the iteration step of GN. The accuracy and performance of PA-GN(β) AFR were confirmed on both simulated and real data and compared with traditional AFR. The appropriate cutoff frequency of the DCT basis functions in NLR was determined to balance the accuracy and computational load of online spatial normalization. Finally, the validity of the online spatial normalization method was further demonstrated by brain activation in rtfMRI data.
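The role of PA here is to give Gauss-Newton a good starting point. As a simplified 2D illustration (the actual method works on 3D fMRI volumes and estimates a full affine transform), image moments yield a centroid and principal-axis orientation that can seed the optimizer:

```python
import numpy as np

def principal_axes(img):
    """Weighted centroid and principal-axis angle (radians) of an image,
    computed from its second-order moments."""
    ys, xs = np.nonzero(img > 1e-6)
    w = img[ys, xs].astype(float)
    cx, cy = np.average(xs, weights=w), np.average(ys, weights=w)
    mu20 = np.average((xs - cx) ** 2, weights=w)
    mu02 = np.average((ys - cy) ** 2, weights=w)
    mu11 = np.average((xs - cx) * (ys - cy), weights=w)
    return (cx, cy), 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Demo: an elongated blob and a rotated copy; the angle difference
# recovers the rotation and can seed the Gauss-Newton iterations.
y, x = np.mgrid[-100:100, -100:100].astype(float)

def blob(alpha):
    u = x * np.cos(alpha) + y * np.sin(alpha)
    v = -x * np.sin(alpha) + y * np.cos(alpha)
    return np.exp(-(u ** 2 / (2 * 30 ** 2) + v ** 2 / (2 * 8 ** 2)))

_, th0 = principal_axes(blob(0.0))
_, th1 = principal_axes(blob(np.deg2rad(25)))
print(np.rad2deg(th1 - th0))  # close to 25
```

Starting GN from a moments-based estimate like this, rather than from the identity, is what makes rapid convergence plausible within one TR.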

3.
In functional data analysis, the time warping model aims at representing a set of curves exhibiting phase and amplitude variation with respect to a common continuous process. Many biological processes, when observed over time across different individuals, fit into this framework. The observed curves are modeled as the composition of an "amplitude process," which governs the common behavior, and a "warping process" that induces time distortion among individuals. We aim to characterize the former. Because of the phase variation among the curves, classical sample statistics computed on the observed sample provide poor representations of the amplitude process. Existing methods for estimating the mean behavior of the amplitude process consist of aligning the curves, that is, eliminating time variation, before estimation. However, since they rely on sample means, they are very sensitive to the presence of outliers. In this article, we propose a functional depth-based median as a robust estimator of the central behavior of the amplitude process. We investigate its properties in the time warping model, evaluate its performance against existing estimators in several simulation studies, and show its robustness to atypical observations. Finally, we illustrate its use on a real yeast time-course microarray data set.
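As one concrete notion of functional depth (the article's specific depth may differ), the modified band depth scores each curve by how often it lies inside bands spanned by pairs of sample curves; the deepest sample curve then serves as a robust median:

```python
import numpy as np

def modified_band_depth(curves):
    """MBD (J=2): for each curve, the average fraction of time points at
    which it lies inside the band spanned by a pair of sample curves."""
    n = len(curves)
    depth = np.zeros(n)
    for i in range(n):
        total = 0.0
        for j in range(n):
            for k in range(j + 1, n):
                lo = np.minimum(curves[j], curves[k])
                hi = np.maximum(curves[j], curves[k])
                total += np.mean((curves[i] >= lo) & (curves[i] <= hi))
        depth[i] = total / (n * (n - 1) / 2)
    return depth

def depth_median(curves):
    """The deepest sample curve, a robust central-curve estimate."""
    return curves[np.argmax(modified_band_depth(curves))]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 101)
sample = np.array([np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)
                   for _ in range(14)]
                  + [np.sin(2 * np.pi * t) + 5.0])  # one gross outlier
d = modified_band_depth(sample)
```

The outlier curve receives the lowest depth, so it can never be selected as the median, which is exactly the robustness property a mean-based estimator lacks.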

4.
Thirty-five archaeal, bacterial and eukaryotic translational systems have been probed with forty different protein synthesis inhibitors with diverse domain and functional specificities. The inhibition curves generated for every ribosome-antibiotic combination had previously shown interesting similarities among organisms belonging to the same phylogenetic group. This opened the possibility of using such functional information for evolutionary studies. A new mathematical method based on principal components analysis of the data has been developed to extract most of the information contained in the inhibition curves. The phenograms obtained closely resemble those generated by small-subunit rRNA sequence comparison, and this functional clustering remains congruent when a particular subset of organisms and/or antibiotics is used. These results demonstrate the phylogenetic value of our functional analysis and suggest that the ribosome represents an interesting intersection between the genotypic and phenotypic (functional) information stored in organisms.

5.
Methods for modeling sets of complex curves where the curves must be aligned in time (or in another continuous predictor) fall into the general class of functional data analysis and include self-modeling regression and time-warping procedures. Self-modeling regression (SEMOR), also known as a shape invariant model (SIM), assumes the curves have a common shape, modeled nonparametrically, and curve-specific differences in amplitude and timing, traditionally modeled by linear transformations. When curves contain multiple features that need to be aligned in time, SEMOR may be inadequate since a linear time transformation generally cannot align more than one feature. Time warping procedures focus on timing variability and on finding flexible time warps to align multiple data features. We draw on these methods to develop a SIM that models the time transformations as random, flexible, monotone functions. The model is motivated by speech movement data from the University of Wisconsin X-ray microbeam speech production project and is applied to these data to test the effect of different speaking conditions on the shape and relative timing of movement profiles.

6.
In order to study the functional phylogeny of organisms, forty different protein synthesis inhibitors with diverse domain and functional specificities have been used to analyze forty archaeal, bacterial and eukaryotic translational systems. The inhibition curves generated with the different ribosome-antibiotic pairs have shown very interesting similarities among organisms belonging to the same phylogenetic group, confirming the feasibility of using such information in evolutionary studies. A new method to extract most of the information contained in the inhibition curves is presented. Using a statistical treatment based on principal components analysis of the data, we have defined coordinates for the organisms that allow a functional clustering of them. The phenograms obtained are very similar to those generated by 16S/18S rRNA sequence comparison. These results demonstrate the phylogenetic value of our functional analysis and suggest an interesting intersection between genotypic and phenotypic (functional) information.
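The core pipeline (stack the inhibition curves, extract principal-component coordinates per organism, then cluster) can be sketched on simulated sigmoid curves; the two "phylogenetic groups" and their dose-response parameters below are invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
dose = np.linspace(-2, 2, 30)                 # log-dose grid (arbitrary)

def curve(mid):
    """A sigmoidal inhibition curve with midpoint `mid`."""
    return 1.0 / (1.0 + np.exp(-3.0 * (dose - mid)))

# Two invented 'phylogenetic groups' differing in antibiotic sensitivity.
X = np.array([curve(-0.8) + rng.normal(0, 0.02, dose.size) for _ in range(5)]
             + [curve(0.8) + rng.normal(0, 0.02, dose.size) for _ in range(5)])

Xc = X - X.mean(axis=0)                       # center the curves
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]                     # PC coordinates per organism

labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(labels)
```

The resulting dendrogram (here cut into two clusters) plays the role of the phenogram compared against the rRNA tree.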

7.

Background

Differences in sample collection, biomolecule extraction, and instrument variability introduce bias into data generated by liquid chromatography coupled with mass spectrometry (LC-MS). Normalization is used to address these issues. In this paper, we introduce a new normalization method using a Gaussian process regression model (GPRM) that utilizes information from individual scans within the extracted ion chromatogram (EIC) of a peak. The proposed method is particularly applicable for normalization based on the analysis order of LC-MS runs. Our method uses measurement variabilities estimated from LC-MS data acquired on quality control samples to correct for bias caused by instrument drift. A maximum likelihood approach is used to find the optimal parameters of the fitted GPRM. We review several normalization methods and compare their performance with the GPRM.

Results

To evaluate the performance of different normalization methods, we consider LC-MS data from a study in which a metabolomic approach is used to discover biomarkers for liver cancer. The LC-MS data were acquired by analysis of sera from liver cancer patients and cirrhotic controls. In addition, LC-MS runs from a quality control (QC) sample are included to assess run-to-run variability and to evaluate the ability of the various normalization methods to reduce this unwanted variability. ANOVA models are also applied to the normalized LC-MS data to identify ions whose intensity measurements differ significantly between cases and controls.

Conclusions

One of the challenges in using label-free LC-MS for quantitation of biomolecules is systematic bias in the measurements. Several normalization methods have been introduced to overcome this issue, but no universally applicable approach exists at present. Each data set should be carefully examined to determine the most appropriate normalization method. We review several existing methods and introduce the GPRM for normalization of LC-MS data. Using our in-house data set, we show that the GPRM outperforms the other normalization methods considered here in terms of decreasing the variability of ion intensities among quality control runs.
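The GPRM idea of fitting instrument drift from QC runs against analysis order and dividing it out can be sketched with a bare-bones Gaussian process regression in numpy. Everything below (fixed RBF hyperparameters, a simulated linear drift, QC spacing) is an illustrative assumption; the paper's scan-level EIC handling and maximum likelihood tuning are omitted:

```python
import numpy as np

def gp_fit_predict(x_train, y_train, x_test, length=10.0, noise=1.0):
    """Bare-bones GP regression (RBF kernel, fixed hyperparameters).
    The actual GPRM fits its parameters by maximum likelihood."""
    amp = np.var(y_train)                 # crude signal-variance estimate
    def k(a, b):
        return amp * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train - y_train.mean())
    return y_train.mean() + k(x_test, x_train) @ alpha

rng = np.random.default_rng(1)
order = np.arange(40, dtype=float)        # analysis order of LC-MS runs
drift = 1.0 + 0.01 * order                # simulated instrument drift
intensity = 100.0 * drift * rng.normal(1.0, 0.02, order.size)

qc = order % 5 == 0                       # every 5th run is a QC injection
trend = gp_fit_predict(order[qc], intensity[qc], order)
corrected = intensity / (trend / trend.mean())

cv = lambda v: np.std(v) / np.mean(v)
print(cv(intensity), cv(corrected))       # the CV shrinks after correction
```

The coefficient of variation across runs is exactly the QC-variability criterion the conclusions use to compare methods.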

8.
9.
This paper reports novel development and preliminary application of an image registration technique for diagnosis of abdominal adhesions imaged with cine-MRI (cMRI). Adhesions can severely compromise the movement and physiological function of the abdominal contents, and their presence is difficult to detect. The image registration approach presented here is designed to expose anomalies in movement of the abdominal organs, providing a movement signature that is indicative of underlying structural abnormalities. Validation of the technique was performed using structurally based in vitro and in silico models, supported with Receiver Operating Characteristic (ROC) methods. For the more challenging cases presented to the small cohort of 4 observers, the AUC (area under curve) improved from a mean value of 0.67 ± 0.02 (without image registration assistance) to a value of 0.87 ± 0.02 when image registration support was included. Also, in these cases, a reduction in time to diagnosis was observed, decreasing by between 20% and 50%. These results provided sufficient confidence to apply the image registration diagnostic protocol to sample magnetic resonance imaging data from healthy volunteers as well as a patient suffering from encapsulating peritoneal sclerosis (an extreme form of adhesions) where immobilization of the gut by cocooning of the small bowel is observed. The results as a whole support the hypothesis that movement analysis using image registration offers a possible method for detecting underlying structural anomalies and encourages further investigation.

10.
Traditionally, housekeeping genes have been employed as endogenous reference (internal control) genes for normalization in gene expression studies. Since the use of a single housekeeper cannot ensure an unbiased result, new normalization methods involving multiple housekeeping genes, normalizing by their mean expression, have recently been proposed. Moreover, since no gold-standard gene suitable for every experimental condition exists, it is also necessary to validate the expression stability of every putative control gene under the specific conditions of the planned experiment. Consequently, finding a good set of reference genes is a non-trivial problem requiring substantial lab-based experimental testing. In this work we identified novel candidate barley reference genes suitable for normalization in gene expression studies. We set up an advanced web search approach that collects, from publicly available web resources, expression profiling information on candidate housekeepers for a specific experimental setting, and applied it, as an example, to stress conditions. A complementary lab-based analysis was carried out to verify the expression profiles of the selected genes in different tissues and during the heat shock response. This combined dry/wet approach can be applied to any species and physiological condition of interest and is very helpful for shortlisting putative reference genes whenever a new experimental design is set up.
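A standard way to rank candidate reference genes by expression stability is a geNorm-style M measure; it is mentioned here as a common approach, not necessarily the one used in this work, and the data below are simulated:

```python
import numpy as np

def genorm_stability(log_expr):
    """geNorm-style stability M: for each gene, the mean SD (across samples)
    of its log-expression ratio with every other candidate gene.
    Lower M means a more stable reference candidate."""
    n_genes = log_expr.shape[0]
    M = np.zeros(n_genes)
    for i in range(n_genes):
        sds = [np.std(log_expr[i] - log_expr[j])
               for j in range(n_genes) if j != i]
        M[i] = np.mean(sds)
    return M

rng = np.random.default_rng(2)
samples = 12
stable_a = rng.normal(10.0, 0.05, samples)   # steadily expressed candidate
stable_b = rng.normal(8.0, 0.05, samples)    # another stable candidate
unstable = rng.normal(9.0, 1.0, samples)     # condition-dependent gene
M = genorm_stability(np.vstack([stable_a, stable_b, unstable]))
print(np.argmax(M))  # the unstable candidate gets the worst (highest) M
```

Iteratively dropping the highest-M gene is how the wet-lab validation step typically whittles the shortlist down to a usable reference set.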

11.
12.
Experimental data in human movement science commonly consist of repeated measurements under comparable conditions. One then faces the question of how to identify a single representative trial, a set of trials, or erroneous trials within the entire data set. This study presents and evaluates a Selection Method for a Representative Trial (SMaRT) based on principal component analysis. SMaRT was tested on 1841 data sets, each containing 11 joint angle curves from gait analysis. The automatically detected characteristic trials were compared with the choices of three independent experts. SMaRT required 1.4 s to analyse 100 data sets consisting of 8 ± 3 trials each. Robustness against outliers reached 98.8% (relative to standard visual control). We conclude that SMaRT is a powerful tool for determining a representative, uncontaminated trial in movement analysis data sets with multiple parameters.
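SMaRT's published criterion is more elaborate, but the core idea, projecting trials into principal-component space and picking the one closest to the sample mean, can be sketched as follows (single joint angle, simulated trials):

```python
import numpy as np

def representative_trial(trials):
    """Index of the trial closest to the sample mean in PC space.
    trials: (n_trials, n_samples) array, one angle curve per trial."""
    X = trials - trials.mean(axis=0)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    scores = U * s                        # PC coordinates of each trial
    d = np.linalg.norm(scores, axis=1)    # distance to the mean (origin)
    return int(np.argmin(d))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 101)
trials = np.array([np.sin(2 * np.pi * t) + rng.normal(0, 0.05, t.size)
                   for _ in range(8)])
trials[5] += 1.5                          # a contaminated outlier trial
print(representative_trial(trials))
```

The contaminated trial sits far from the mean in PC space and is therefore never selected, which is the outlier robustness the study quantifies at 98.8%.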

13.
In this article we introduce JULIDE, a software toolkit developed to perform the 3D reconstruction, intensity normalization, volume standardization by 3D image registration and voxel-wise statistical analysis of autoradiographs of mouse brain sections. This software tool has been developed in the open-source ITK software framework and is freely available under a GPL license. The article presents the complete image processing chain from raw data acquisition to 3D statistical group analysis. Results of the group comparison in the context of a study on spatial learning are shown as an illustration of the data that can be obtained with this tool.

14.

Background

We present a novel and systematic approach to analyze temporal microarray data. The approach includes normalization, clustering and network analysis of genes.

Methodology

Genes are normalized using an error-model-based uniform normalization method aimed at identifying and estimating the sources of variation. The model minimizes the correlation among error terms across replicates. The normalized gene expression profiles are then clustered in terms of their power spectral density. The method of complex Granger causality is introduced to reveal interactions between sets of genes. Complex Granger causality, along with partial Granger causality, is applied in both the time and frequency domains to selected as well as all genes to reveal interesting networks of interactions. The approach is successfully applied to Arabidopsis leaf microarray data generated from 31,000 genes observed at 22 time points over 22 days. Three circuits are analyzed in detail: a circadian gene circuit, an ethylene circuit, and a new global circuit exhibiting a hierarchical structure that determines the initiators of leaf senescence.

Conclusions

We use a fully data-driven approach to form biological hypotheses. Clustering by power-spectrum analysis helps identify genes of potential interest, and their dynamics can be captured accurately in the time and frequency domains using complex and partial Granger causality. With the rising availability of temporal microarray data, such methods can be useful tools for uncovering hidden biological interactions. We demonstrate our method step by step with toy models as well as a real biological dataset, and analyse three distinct gene circuits of potential interest to Arabidopsis researchers.
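The paper applies complex and partial Granger causality in both time and frequency domains; the basic bivariate, time-domain ingredient underlying all of these is the drop in residual variance when one series' past is added to another's autoregressive model, sketched here on a simulated pair where x drives y:

```python
import numpy as np

def granger_score(x, y, lag=2):
    """Relative drop in residual variance of y's autoregressive model when
    lagged values of x are added (no significance testing)."""
    n = len(y)
    Y = y[lag:]
    own = np.column_stack([y[lag - k: n - k] for k in range(1, lag + 1)])
    both = np.column_stack([own] +
                           [x[lag - k: n - k, None] for k in range(1, lag + 1)])
    def rss(design):
        A = np.column_stack([np.ones(len(Y)), design])
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ beta
        return r @ r
    return (rss(own) - rss(both)) / rss(own)

rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):                 # y is driven by past x, not vice versa
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_score(x, y), granger_score(y, x))
```

The asymmetry of the two scores is what the directed edges in the inferred gene networks encode; partial and complex variants extend this to condition on other genes and to work on sets of series.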

15.
Purpose

To devise a novel spatial normalization framework for voxel-based analysis (VBA) in brain radiotherapy. VBAs rely on accurate spatial normalization of different patients' planning CTs onto a common coordinate system (CCS). The cerebral anatomy, well characterized by MRI, shows poor contrast in CT, leading to potential inaccuracies in VBAs based on CT alone.

Methods

We analyzed 50 meningioma patients treated with proton therapy who underwent planning CT and T1-weighted (T1w) MRI. The spatial normalization pipeline based on MR and CT images consisted of intra-patient registration of CT to T1w, inter-patient registration of T1w to the MNI space chosen as the CCS, and propagation of the doses to MNI space. Registration quality was compared with that obtained by the Statistical Parametric Mapping (SPM) software, used as a benchmark. To evaluate the accuracy of dose normalization, the dose organ overlap (DOO) score was computed on gray matter, white matter and cerebrospinal fluid before and after normalization. In addition, trends in the DOO distributions were investigated by cluster analysis.

Results

Registration quality was higher for the proposed method than for SPM (p < 0.001). The DOO scores showed a significant improvement after normalization (p < 0.001). The cluster analysis highlighted two clusters, one of which contained the majority of the data and exhibited acceptable DOOs.

Conclusions

Our study presents a robust tool for spatial normalization, specifically tailored for brain dose VBAs. Furthermore, the cluster analysis provides a formal criterion for excluding patients with unacceptable normalization results. The implemented framework lays the groundwork for reliable VBAs in future brain irradiation studies.
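The DOO score is not fully specified in this abstract; one plausible reading is a Dice-like overlap of dose maps restricted to a tissue mask. The sketch below is that illustrative reading on toy 2D "dose" images, not the paper's published definition:

```python
import numpy as np

def dose_organ_overlap(dose_a, dose_b, mask):
    """Dice-like overlap of two dose maps within a tissue mask;
    an illustrative reading of the DOO score, not its published form."""
    a, b = dose_a[mask], dose_b[mask]
    return 2.0 * np.sum(np.minimum(a, b)) / (np.sum(a) + np.sum(b))

# Toy example: a Gaussian dose blob and a spatially shifted copy.
y, x = np.mgrid[0:64, 0:64].astype(float)
dose = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
shifted = np.exp(-((x - 40) ** 2 + (y - 32) ** 2) / 200.0)
mask = np.ones_like(dose, dtype=bool)       # stand-in for a tissue mask
perfect = dose_organ_overlap(dose, dose, mask)
degraded = dose_organ_overlap(dose, shifted, mask)
print(perfect, degraded)
```

A score of 1 means the propagated dose coincides with the reference within the tissue; spatial misalignment pulls it below 1, which is why it can grade normalization quality per patient.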

16.
In genetic studies, many interesting traits, including growth curves and skeletal shape, have temporal or spatial structure. They are better treated as curves or function-valued traits. Identification of genetic loci contributing to such traits is facilitated by specialized methods that explicitly address the function-valued nature of the data. Current methods for mapping function-valued traits are mostly likelihood-based, requiring specification of the distribution and error structure. However, such specification is difficult or impractical in many scenarios. We propose a general functional regression approach based on estimating equations that is robust to misspecification of the covariance structure. Estimation is based on a two-step least-squares algorithm, which is fast and applicable even when the number of time points exceeds the number of samples. It is also flexible due to a general linear functional model; changing the number of covariates does not necessitate a new set of formulas and programs. In addition, many meaningful extensions are straightforward. For example, we can accommodate incomplete genotype data, and the algorithm can be trivially parallelized. The framework is an attractive alternative to likelihood-based methods when the covariance structure of the data is not known. It provides a good compromise between model simplicity, statistical efficiency, and computational speed. We illustrate our method and its advantages using circadian mouse behavioral data.
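The two-step least-squares idea (project each trait curve onto a basis, then regress the basis coefficients on genotype) can be sketched for a circadian-like trait. The Fourier basis, single-locus additive effect, and all parameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(8)
n, T = 60, 24
t = np.linspace(0, 1, T, endpoint=False)
# A small Fourier basis for circadian-like curves (my choice of basis;
# the general linear functional model allows others).
B = np.column_stack([np.ones(T),
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])

geno = rng.integers(0, 3, n).astype(float)      # genotype 0/1/2 at one locus
# Genotype additively scales the amplitude of the simulated trait curve.
Y = np.array([(1 + 0.4 * g) * np.sin(2 * np.pi * t)
              + rng.normal(0, 0.2, T) for g in geno])

# Step 1: least-squares projection of each curve onto the basis.
C = Y @ B @ np.linalg.inv(B.T @ B)              # (n, 5) coefficients
# Step 2: regress each basis coefficient on genotype, again least squares.
X = np.column_stack([np.ones(n), geno])
beta = np.linalg.lstsq(X, C, rcond=None)[0]
effect_curve = B @ beta[1]                      # estimated per-allele effect
```

Neither step needs the error covariance, which is the robustness-to-misspecification point; note also that step 1 works curve by curve, so time points may exceed sample size without difficulty.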

17.
Due to the time scale of circular dichroism (CD) measurements, it is theoretically possible to deconvolute such a spectrum if the pure component CD spectra differ significantly from one another. In the last decade several methods have been published aiming to obtain the conformational weights, or percentages (the coefficients of a linear combination), of the typical secondary structural elements making up the three-dimensional structure of proteins. Two methods that can be used to determine the secondary structure of proteins are described here. The first, called LINCOMB, is a simple algorithm based on a least-squares fit with a set of reference spectra representing the known secondary structures; it yields an estimate of the weights attributed to alpha-helix, beta-pleated sheet (mainly antiparallel), beta-turns, unordered form, and aromatic/disulfide (nonpeptide) contributions of the protein being analyzed. This method requires a "template" or reference curve set, which was obtained from the second method. The second method, "convex constraint analysis," is a general deconvolution method for a set of CD spectra of any variety of conformational types. The algorithm, based on a set of three constraints, deconvolutes a set of CD curves into its common "pure"-component curves and conformational weights. To analyze a single CD spectrum with this method, the spectrum is appended to the reference data set. To establish the reliability of the algorithm and provide guidelines for its use, some applications are presented.
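A LINCOMB-style fit is, at its core, a least-squares combination of reference spectra; constraining the weights to be nonnegative via scipy's `nnls` keeps them interpretable as fractions. The Gaussian "reference spectra" below are invented stand-ins, not real CD basis curves:

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(190, 250, 61)                  # wavelength grid (nm)

def band(center, width, sign):
    """A Gaussian band, a stand-in for a real CD spectral feature."""
    return sign * np.exp(-((wl - center) / width) ** 2)

# Invented reference spectra for three nominal conformations.
basis = np.column_stack([band(208, 8, -1) + band(222, 8, -1),   # "helix"
                         band(217, 10, -1) + band(196, 6, +1),  # "sheet"
                         band(198, 7, -1)])                     # "unordered"

true_w = np.array([0.6, 0.3, 0.1])
rng = np.random.default_rng(5)
spectrum = basis @ true_w + 0.01 * rng.normal(size=wl.size)

weights, _ = nnls(basis, spectrum)              # least squares, weights >= 0
weights /= weights.sum()                        # report as fractions
print(np.round(weights, 2))                     # close to [0.6, 0.3, 0.1]
```

Convex constraint analysis goes further: it estimates the pure-component curves themselves from a set of spectra rather than assuming a fixed reference set.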

18.
The spanning set technique quantifies intertrial variability as the span between polynomial curves representing upper and lower standard deviation curves of a repeated movement. This study aimed to assess the validity of the spanning set technique in quantifying variability and specifically to determine its sensitivity to variability presented at different phases of a movement cycle. Knee angle data were recorded from a male participant completing 12 overground running trials. Variability was added to each running trial at five different phases of the running stride. Ten variability magnitudes were also used to assess the effect of variability magnitude on the spanning set measure. Variability was quantified in all trials using mean deviation and the spanning set measure. Results of a repeated-measures ANOVA showed significant differences between the spanning set score for trials using different phases of added variability. In contrast, mean deviation values showed no difference related to the phase of added variability. Therefore, the spanning set technique cannot be recommended as a valid measure of intertrial movement variability.  
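The spanning set can be sketched as the average gap between polynomial fits to the mean ± SD curves; the polynomial degree and the simulated knee-angle data below are illustrative choices:

```python
import numpy as np

def spanning_set(trials, degree=6):
    """Average span between polynomial fits to the (mean + SD) and
    (mean - SD) curves across the normalized movement cycle."""
    t = np.linspace(0, 1, trials.shape[1])
    mean, sd = trials.mean(axis=0), trials.std(axis=0)
    upper = np.polyval(np.polyfit(t, mean + sd, degree), t)
    lower = np.polyval(np.polyfit(t, mean - sd, degree), t)
    return np.mean(upper - lower)

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 101)
base = 40 * np.sin(2 * np.pi * t)           # knee-angle-like curve (degrees)
quiet = np.array([base + rng.normal(0, 1, t.size) for _ in range(12)])
noisy = np.array([base + rng.normal(0, 4, t.size) for _ in range(12)])
print(spanning_set(quiet) < spanning_set(noisy))  # True
```

Because the polynomial smooths the SD envelope unevenly across the cycle, identical amounts of added variability can score differently depending on where in the stride they occur, which is one way phase sensitivity of the kind reported above can arise.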

19.
BACKGROUND: Multiplex or multicolor fluorescence in situ hybridization (M-FISH) is a recently developed cytogenetic technique for cancer diagnosis and research on genetic disorders. By simultaneously viewing the multiply labeled specimens in different color channels, M-FISH facilitates the detection of subtle chromosomal aberrations. The success of this technique largely depends on the accuracy of pixel classification (color karyotyping). Improvements in classifier performance would allow the elucidation of more complex and more subtle chromosomal rearrangements. Normalization of M-FISH images has a significant effect on classification accuracy. In particular, misalignment or misregistration across multiple channels seriously affects classification accuracy, so image normalization, including automated registration, must be done before pixel classification. METHODS AND RESULTS: We studied several image normalization approaches that affect image classification. In particular, we developed an automated registration technique to correct misalignment across the different fluor images (caused by chromatic aberration and other factors). This new registration algorithm is based on wavelets and spline approximations, which offer computational advantages and improved accuracy. To evaluate the performance improvement brought about by these data normalization approaches, we used downstream pixel classification accuracy as the measurement: a Bayesian classifier that assumed each of the 24 chromosome classes had a normal probability distribution. The effects of this registration and the other normalization steps on subsequent classification accuracy were evaluated on a comprehensive M-FISH database established by Advanced Digital Imaging Research (http://www.adires.com/05/Project/MFISH_DB/MFISH_DB.shtml). CONCLUSIONS: Pixel misclassification errors result from several factors, including uneven hybridization, spectral overlap among fluors, and image misregistration. Effective preprocessing of M-FISH images can decrease the effects of these factors and thereby increase pixel classification accuracy. The data normalization steps described in this report, such as image registration and background flattening, can significantly improve subsequent classification accuracy. An improved classifier would in turn allow subtle DNA rearrangements to be identified in genetic diagnosis and cancer research.
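The registration algorithm above is wavelet- and spline-based; as a simpler illustration of automated cross-channel alignment, phase correlation recovers a translational misregistration directly from the Fourier cross-power spectrum:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer (dy, dx) to apply to image b (e.g. via np.roll) so that it
    aligns with image a, estimated from the cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(a.shape)
    wrapped = (np.array([dy, dx]) + shape // 2) % shape - shape // 2
    return int(wrapped[0]), int(wrapped[1])

rng = np.random.default_rng(9)
channel_a = rng.normal(size=(64, 64))
channel_b = np.roll(np.roll(channel_a, 3, axis=0), -5, axis=1)  # misaligned
dy, dx = phase_correlation_shift(channel_a, channel_b)
realigned = np.roll(np.roll(channel_b, dy, axis=0), dx, axis=1)
```

Phase correlation only handles global translation; the wavelet/spline approach in the paper additionally accommodates the smooth, spatially varying distortions that chromatic aberration produces.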

20.
Two-color DNA microarrays are commonly used for the analysis of global gene expression. They provide information on the relative abundance of thousands of mRNAs. However, the generated data must be normalized to minimize systematic variations so that biologically significant differences can be more easily identified. A large number of normalization procedures have been proposed, and many software packages for microarray data analysis are available. Here, we applied two normalization methods (median and loess) from two microarray data analysis software packages and examined them on a sample data set. We found that the number of genes identified as differentially expressed varied significantly depending on the method applied. The resulting lists of differentially expressed genes were consistent only when median normalization was used; loess normalization as implemented in the two packages gave less coherent and, for some probes, even contradictory results. In general, our results provide an additional piece of evidence that the normalization method can profoundly influence the final results of a DNA microarray-based analysis, and that its impact depends greatly on the algorithm employed. Consequently, the normalization procedure must be carefully considered and optimized for each individual data set.
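Global median normalization, one of the two methods compared, is simple enough to state in a few lines; the simulated dye bias and data below are illustrative:

```python
import numpy as np

def median_normalize(log_ratios):
    """Global median normalization: shift an array's log2(R/G) ratios so
    their median is zero (assumes most genes are unchanged)."""
    return log_ratios - np.median(log_ratios)

rng = np.random.default_rng(7)
m = rng.normal(0.0, 0.3, 5000)        # mostly unchanged genes
m[:50] += 2.0                         # a few genuinely up-regulated genes
biased = m + 0.7                      # a global dye/labeling bias
norm = median_normalize(biased)
print(np.median(norm), int(np.sum(norm > 1.0)))
```

Because it applies a single global shift, median normalization leaves little room for implementations to differ; loess instead fits an intensity-dependent trend, whose smoothing choices vary between packages, which is consistent with the discrepancies reported above.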
