Similar Articles
20 similar articles found (search time: 15 ms).
1.
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on its distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step with gradient updates enforcing consistency with the acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction, and outperform standard compressed-sensing and ℓ1-regularized parallel-imaging methods.
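
For illustration, a minimal single-coil sketch of this interleaving is given below. The kpca_denoise step is only a stand-in for the paper's block-matched kernel PCA projection (here scikit-learn's KernelPCA on non-overlapping blocks of the magnitude image), and the block size, kernel, and step size are assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_denoise(image, patch=8, n_components=8):
    """Stand-in for the paper's block-wise kernel PCA projection: split the
    magnitude image into non-overlapping blocks, project the block array
    onto its main kernel principal components, and map back."""
    h, w = image.shape
    hh, ww = h - h % patch, w - w % patch
    blocks = (image[:hh, :ww].reshape(hh // patch, patch, ww // patch, patch)
              .swapaxes(1, 2).reshape(-1, patch * patch))
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-3,
                     fit_inverse_transform=True)
    denoised = kpca.inverse_transform(kpca.fit_transform(blocks))
    out = image.copy()
    out[:hh, :ww] = (denoised.reshape(hh // patch, ww // patch, patch, patch)
                     .swapaxes(1, 2).reshape(hh, ww))
    return out

def reconstruct(kspace, mask, n_iter=20, step=1.0):
    """Interleave the artifact-removal step with a gradient update that
    enforces consistency with the acquired k-space samples."""
    x = np.abs(np.fft.ifft2(kspace * mask))        # zero-filled initial image
    for _ in range(n_iter):
        x = kpca_denoise(x)
        k = np.fft.fft2(x)
        k -= step * mask * (k - kspace)            # data-consistency gradient
        x = np.abs(np.fft.ifft2(k))
    return x
```

With step=1.0 the gradient update reduces to hard data consistency (acquired samples are restored exactly); smaller steps trade consistency against the denoising prior.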

2.
Affymetrix high-density oligonucleotide arrays are a tool with the capacity to simultaneously measure the abundance of thousands of mRNA sequences in biological samples. In order to allow direct array-to-array comparisons, normalization is a necessity. When deciding on an appropriate normalization procedure, a couple of questions need to be addressed, e.g., on which level should the normalization be performed: on the level of feature intensities or on the level of expression indexes? Should all features/expression indexes be used, or can we choose a subset of features likely to be unregulated? Another question is how to actually perform the normalization: normalize using the overall mean intensity, or use a smooth normalization curve? Most of the currently used normalization methods are linear; e.g., the normalization method implemented in the Affymetrix GeneChip software is based on the overall mean intensity. However, along with alternative methods of summarizing feature intensities into an expression index, nonlinear methods have recently started to appear. For many of these alternative methods, the natural choice is to normalize on the level of feature intensities, either using all feature intensities or only perfect match intensities. In this report, a nonlinear normalization procedure aimed at normalizing feature intensities is proposed.
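
As one concrete way to normalize feature intensities with a smooth curve rather than a constant, the sketch below fits a cubic spline through matched quantiles of a target and a baseline array. This is a quantile-spline-style illustration, not the procedure proposed in the paper; the function name and parameters are assumptions, and intensities are assumed continuous so that the quantiles strictly increase:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def qspline_normalize(target, baseline, n_quantiles=100, s=None):
    """Nonlinear feature-level normalization sketch: fit a smooth spline
    through matched quantiles of the target and baseline arrays and use it
    as the intensity-transfer function."""
    q = np.linspace(0.005, 0.995, n_quantiles)
    x = np.quantile(target, q)        # target quantiles (assumed increasing)
    y = np.quantile(baseline, q)      # baseline quantiles
    spline = UnivariateSpline(x, y, k=3, s=s)   # the normalization curve
    return spline(target)
```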

3.
The variance in intensities of MRI scans is a fundamental impediment for quantitative MRI analysis. Intensity values are not only highly dependent on acquisition parameters, but also on the subject and body region being scanned. This warrants the need for image normalization techniques to ensure that intensity values are consistent within tissues across different subjects and visits. Many intensity normalization methods have been developed and proven successful for the analysis of brain pathologies, but evaluation of these methods for images of the prostate region is lagging.

In this paper, we compare four different normalization methods on 49 T2-w scans of prostate cancer patients: 1) the well-established histogram normalization, 2) the generalized scale normalization, 3) an extension of generalized scale normalization called generalized ball-scale normalization, and 4) a custom normalization based on healthy prostate tissue intensities. The methods are compared qualitatively and quantitatively in terms of the behavior of intensity distributions as well as the impact on radiomic features.

Our findings suggest that normalization based on prior knowledge of healthy prostate tissue intensities may be the most effective way of acquiring the desired properties of normalized images. In addition, the histogram normalization method outperforms the generalized scale and generalized ball-scale methods, which have proven superior for other body regions.
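
A minimal sketch of landmark-based histogram normalization (percentile matching by piecewise-linear interpolation, in the spirit of the first compared method) is shown below; the percentile set and reference landmark values are illustrative assumptions:

```python
import numpy as np

def histogram_normalize(image, ref_landmarks, pcts=(1, 10, 25, 50, 75, 90, 99)):
    """Piecewise-linear landmark matching: map the image's intensity
    percentiles onto reference landmarks learned from a training
    population (percentiles assumed strictly increasing)."""
    landmarks = np.percentile(image, pcts)
    return np.interp(image, landmarks, ref_landmarks)

# toy usage: normalize one scan to pooled landmarks (values assumed)
scan = np.random.gamma(2.0, 200.0, size=(256, 256))
ref = np.array([30, 120, 300, 600, 900, 1200, 1800], dtype=float)
normalized = histogram_normalize(scan, ref)
```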

4.
Shi JY, Zhang SW, Pan Q, Cheng YM, Xie J. Amino Acids 2007, 33(1):69–74.
As more and more genomes have been discovered in recent years, there is an urgent need for a reliable method to predict the subcellular localization of the rapidly growing number of newly found proteins. However, many well-known prediction methods based on amino acid composition have problems utilizing the sequence-order information. Here, based on the concept of Chou's pseudo amino acid composition (PseAA), a new feature extraction method, the multi-scale energy (MSE) approach, is introduced to incorporate the sequence-order information. First, a protein sequence was mapped to a digital signal using an amino acid index. Then, by wavelet transform, the mapped signal was decomposed into several scales, in which the energy factors were calculated and assembled into an MSE feature vector. Following this, combining this MSE feature vector with amino acid composition (AA), we constructed a series of MSEPseAA feature vectors to represent the protein subcellular localization sequences. Finally, according to a new kind of normalization approach, the MSEPseAA feature vectors were normalized to form the improved MSEPseAA vectors, named IEPseAA. Using the technique of IEPseAA, C-support vector machine (C-SVM) and three multi-class SVM strategies, quite promising results were obtained, indicating that MSE is quite effective in reflecting the sequence-order effects and might become a useful tool for predicting other attributes of proteins as well.
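
The MSE feature extraction can be sketched as follows, assuming PyWavelets for the discrete wavelet transform; the hydrophobicity-style amino acid index, the wavelet family and the decomposition level are illustrative assumptions rather than the paper's exact choices:

```python
import numpy as np
import pywt

# Kyte-Doolittle-style hydrophobicity values; the paper may use a
# different amino acid index for the sequence-to-signal mapping.
AA_INDEX = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
            'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
            'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
            'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def mse_features(sequence, wavelet='db4', level=3):
    """Multi-scale energy (MSE) feature vector: map the sequence to a
    numeric signal, decompose it with the discrete wavelet transform,
    and record the normalized energy of each scale."""
    signal = np.array([AA_INDEX[aa] for aa in sequence])
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()      # features sum to one

# toy sequence (repeated so it is long enough for a 3-level decomposition)
print(mse_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ" * 2))
```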

5.
Image registration, the process of optimally aligning homologous structures in multiple images, has recently been demonstrated to support automated pixel-level analysis of pedobarographic images and, subsequently, to extract unique and biomechanically relevant information from plantar pressure data. Recent registration methods have focused on robustness, with slow but globally powerful algorithms. In this paper, we present an alternative registration approach that affords both speed and accuracy, with the goal of making pedobarographic image registration more practical for near-real-time laboratory and clinical applications. The current algorithm first extracts centroid-based curvature trajectories from pressure image contours, and then optimally matches these curvature profiles using optimization based on dynamic programming. Special cases of disconnected images (which occur in high-arched subjects, for example) are dealt with by introducing an artificial spatially linear bridge between adjacent image clusters. Two registration algorithms were developed: a 'geometric' algorithm, which exclusively matched geometry, and a 'hybrid' algorithm, which performed subsequent pseudo-optimization. After testing the two algorithms on 30 control image pairs considered in a previous study, we found that, when compared with previously published results, the hybrid algorithm improved overlap ratio (p=0.010), but both current algorithms had slightly higher mean-squared error, presumably because they did not consider pixel intensity. Nonetheless, both algorithms greatly improved computational efficiency (25±8 and 53±9 ms per image pair for geometric and hybrid registrations, respectively). These results imply that registration-based pixel-level pressure image analyses can, eventually, be implemented for practical clinical purposes.
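
The curvature-profile matching step can be illustrated with a plain dynamic-programming alignment (a standard DTW-style recursion); the paper's actual cost function and boundary handling for closed contours may differ:

```python
import numpy as np

def dp_match(curv_a, curv_b):
    """Optimal alignment of two curvature profiles by dynamic programming:
    fill a cumulative-cost table with squared differences, then backtrack
    to recover the matching path."""
    n, m = len(curv_a), len(curv_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (curv_a[i - 1] - curv_b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))            # matched index pair
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return D[n, m], path[::-1]
```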

6.
In this paper we introduce a semi-analytic algorithm for 3-dimensional image reconstruction for positron emission tomography (PET). The method consists of the back-projection of the acquired data into the most likely image voxel according to time-of-flight (TOF) information, followed by a filtering step in image space using an iterative optimization algorithm with total variation (TV) regularization. TV regularization in image space is more computationally efficient than the usual iterative optimization methods for PET reconstruction with a full system matrix that use TV regularization. The efficiency comes from the one-time TOF back-projection step, which may also be described as a reformatting of the acquired data. An important aspect of our work concerns the evaluation of the filter operator of the linear transform mapping an original radioactive tracer distribution into the TOF back-projected image. We obtain a concise, closed-form analytical formula for the filter operator. The proposed method is validated with Monte Carlo simulations of the NEMA IEC phantom using a one-layer, 50 cm-long cylindrical device called the Jagiellonian PET scanner. The results show better image quality compared with the reference TOF maximum likelihood expectation maximization algorithm.
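
A toy 2D sketch of the two-stage idea — back-projection into most-likely voxels followed by TV-regularized filtering in image space — is given below. skimage's denoise_tv_chambolle stands in for the paper's analytically derived filter operator, and the event model and weights are assumptions:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tof_backproject(events, shape, voxel=1.0):
    """One-time 'reformatting' step: deposit each coincidence event into
    its most likely voxel. Events here are assumed to carry precomputed
    most-likely (x, y) positions; a real scanner geometry and TOF model
    would be used to derive them."""
    img = np.zeros(shape)
    for x, y in events:                       # most-likely positions, mm
        i, j = int(x / voxel), int(y / voxel)
        if 0 <= i < shape[0] and 0 <= j < shape[1]:
            img[i, j] += 1.0
    return img

# after back-projection, a single image-space TV-regularized filtering pass
events = np.random.normal(64, 10, size=(20000, 2))
blurred = tof_backproject(events, (128, 128))
recon = denoise_tv_chambolle(blurred, weight=0.8)
```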

7.
In this paper, a novel watershed approach based on seed region growing and image entropy is presented, which improves medical image segmentation. The proposed algorithm incorporates the prior information of seed region growing and image entropy into its calculation. The algorithm starts by partitioning the image into several levels of intensity using a watershed multi-degree immersion process. The levels of intensity are the input to a computationally efficient seed region segmentation process which produces the initial partitioning of the image regions. These regions are fed to an entropy procedure that carries out a suitable merging to produce the final segmentation. The latter process uses a region-based similarity representation of the image regions to decide whether regions can be merged. A region is isolated from its level and the residual pixels are passed on to the next level, and so on; we refer to this as the multi-level process, and to the resulting watershed as the multi-level watershed. The proposed algorithm is applied to a challenging application: grey matter–white matter segmentation in magnetic resonance images (MRIs). The established methods and the proposed approach are tested on this application in several configurations: simulated immersion, multi-degree, multi-level seed region growing, and multi-level seed region growing with entropy. It is shown that the proposed method achieves more accurate results and reduces oversegmentation for medical images.
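
The entropy-driven merging stage can be sketched as below, with skimage's watershed providing the initial partition; the merge criterion and threshold are stand-ins for the paper's region-based similarity, not its exact formulation:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def region_entropy(values, bins=32):
    """Shannon entropy of the intensity histogram inside one region."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def entropy_merge(image, labels, thresh=0.15):
    """Greedy merge pass: fuse neighbouring regions when the entropy of
    their union barely exceeds that of the parts (threshold assumed)."""
    for lab in np.unique(labels):
        mask = labels == lab
        if not mask.any():                 # region already merged away
            continue
        ring = ndi.binary_dilation(mask) & ~mask
        for nb in np.unique(labels[ring]):
            a, b = image[mask], image[labels == nb]
            gain = region_entropy(np.concatenate([a, b])) - max(
                region_entropy(a), region_entropy(b))
            if gain < thresh:
                labels[labels == nb] = lab
    return labels

# toy pipeline: gradient -> seeds -> watershed -> entropy-guided merging
image = np.random.rand(64, 64)
gradient = ndi.gaussian_gradient_magnitude(image, sigma=2)
coords = peak_local_max(-gradient, min_distance=5)
seeds = np.zeros(image.shape, dtype=bool)
seeds[tuple(coords.T)] = True
markers, _ = ndi.label(seeds)
segmented = entropy_merge(image, watershed(gradient, markers))
```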

8.
Chen X, Goh QY, Tan W, Hossain I, Chen WN, Lau R. Bioresource Technology 2011, 102(10):6005–6012.
Cultivation of the microalga Chlorella sp. was performed in draft-tube photobioreactors. The effect of light intensity on microalgae growth performance was studied over a light intensity range of 82–590 μmol/(m²·s). A lumostatic strategy was proposed based on the light distribution profiles obtained by image analysis and on the specific chlorophyll a content. The proposed lumostatic strategy allowed a maximum biomass dry weight of 5.78 g/L and a productivity of 1.29 g/(L·d), which were 25.7% and 74.3% higher, respectively, than those achieved with the optimal constant light intensity. A comparison with other lumostatic strategies reported in the literature indicated that the proposed lumostatic strategy can be a promising approach for improving the growth of microalgae.

9.
Photosystem I-driven cyclic electron transport was measured in intact cells of Synechococcus sp. PCC 7942 grown under different light intensities, using photoacoustic and spectroscopic methods. The light-saturated capacity for PS I cyclic electron transport increased relative to chlorophyll concentration, PS I concentration, and linear electron transport capacity as growth light intensity was raised. In cells grown under moderate to high light intensity, PS I cyclic electron transport was nearly insensitive to methyl viologen, indicating that the cyclic electron supply to PS I derived almost exclusively from a thylakoid dehydrogenase. In cells grown under low light intensity, PS I cyclic electron transport was partially inhibited by methyl viologen, indicating that part of the cyclic electron supply to PS I derived directly from ferredoxin. It is proposed that the increased PS I cyclic electron transport observed in cells grown under high light intensity is a response to chronic photoinhibition.

Abbreviations: DBMIB, 2,5-dibromo-3-methyl-6-isopropyl-p-benzoquinone; DCMU, 3-(3,4-dichlorophenyl)-1,1-dimethylurea; ES, energy storage; MV, methyl viologen; PAm, photoacoustic thermal signal with strong non-modulated background light added; PAs, photoacoustic thermal signal without background light added. CIW/DPB Publication No. 1205.

10.
There are many sources of systematic variation in cDNA microarray experiments that affect the measured gene expression levels (e.g., differences in labeling efficiency between the two fluorescent dyes). The term normalization refers to the process of removing such variation. A constant adjustment is often used to force the distribution of the intensity log ratios to have a median of zero for each slide. However, such global normalization approaches are not adequate in situations where dye biases can depend on spot overall intensity and/or spatial location within the array. This article proposes normalization methods that are based on robust local regression and account for intensity and spatial dependence in dye biases for different types of cDNA microarray experiments. The selection of appropriate controls for normalization is discussed, and a novel set of controls (microarray sample pool, MSP) is introduced to aid in intensity-dependent normalization. Lastly, to allow for comparisons of expression levels across slides, a robust method based on maximum likelihood estimation is proposed to adjust for scale differences among slides.
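
The core intensity-dependent step can be sketched as a robust local regression on the MA plot: fit a lowess curve to the log-ratios versus average log-intensity and subtract it. The smoothing fraction below is an assumption, and the actual methods additionally stratify this fit by print-tip group and spatial location:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def ma_normalize(R, G, frac=0.4):
    """Within-slide intensity-dependent normalization for two-channel
    cDNA data: center the log-ratios on zero at every intensity, not
    just globally (intensities assumed strictly positive)."""
    M = np.log2(R) - np.log2(G)           # log ratio (dye bias lives here)
    A = 0.5 * (np.log2(R) + np.log2(G))   # average log intensity
    curve = lowess(M, A, frac=frac, return_sorted=True)
    M_corr = M - np.interp(A, curve[:, 0], curve[:, 1])
    return M_corr, A
```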

11.
The integration of local agricultural knowledge deepens the understanding of complex phenomena such as the association between climate variability, crop yields and undernutrition. Participatory Sensing (PS) is a concept which enables laymen to easily gather geodata with standard low-cost mobile devices, offering new and efficient opportunities for agricultural monitoring. This study presents a methodological approach for crop height assessment based on PS. In-field crop height variations of a maize field in Heidelberg, Germany, were gathered with smartphones and handheld GPS devices by 19 participants. Comparing the crop height values measured by the participants to reference data based on terrestrial laser scanning (TLS) yields R² = 0.63 for the handheld GPS devices and R² = 0.24 for the smartphone-based approach. The RMSE for the comparison between crop height models (CHM) derived from PS and TLS data is 10.45 cm (GPS devices) and 14.69 cm (smartphones). Furthermore, the results indicate that incorporating participants' cognitive abilities in the data collection process potentially improves the quality of the data captured with the PS approach. The proposed PS methods serve as a foundation for collecting agricultural parameters at field level by involving local people. Combined with other methods such as remote sensing, PS opens new perspectives to support agricultural development.
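
The reported agreement metrics can be reproduced with a few lines; here R² is taken as the squared Pearson correlation between the PS and TLS height rasters, which may differ in detail from the study's regression setup:

```python
import numpy as np

def agreement_scores(ps_chm, tls_chm):
    """R^2 (squared Pearson correlation) and RMSE between a participant-
    sensed crop height model and the TLS reference (flattened rasters)."""
    ps, tls = np.ravel(ps_chm), np.ravel(tls_chm)
    r2 = np.corrcoef(ps, tls)[0, 1] ** 2
    rmse = np.sqrt(np.mean((ps - tls) ** 2))
    return r2, rmse
```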

12.
IRBM 2020, 41(6):304–315.
Vascular segmentation is often required in medical image analysis for various imaging modalities. Despite the rich literature in the field, the proposed methods most of the time need adaptation to the particular investigation and may sometimes lack the desired accuracy in terms of true-positive and false-positive detection rates. This paper proposes a general method for vascular segmentation based on locally connected filtering applied in a multiresolution scheme. The filtering scheme performs progressive detection and removal of the vessels from the image relief at each resolution level, by combining directional 2D-3D locally connected filters (LCF). An important property of the LCF is that it preserves (positive-contrast) structures in the image if they are topologically connected with other similar structures in their local environment. Vessels, which appear as curvilinear structures, can be filtered out by an appropriate LCF set-up that minimally affects sheet-like structures. The implementation in a multiresolution framework allows dealing with different vessel sizes. The outcome of the proposed approach is illustrated on several applications including lung, liver and coronary arteries. It is shown that, besides preserving high accuracy in detecting small vessels, the proposed technique is less sensitive to noise and to the presence of pathologies with positive-contrast appearance on the images. The detection accuracy is compared with a previously developed approach on the 20-patient database from the VESSEL12 challenge.
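
A loose sketch of the progressive detect-and-remove scheme is given below, with a white top-hat standing in for the directional 2D-3D locally connected filters (the actual LCFs preserve topologically connected structures, which a top-hat does not); the scales and smoothing are assumptions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import white_tophat, disk

def progressive_vessel_detection(image, levels=3, radius=3):
    """Detect-and-remove across resolutions: at each level, detect thin
    positive-contrast (vessel-like) structures, remove them from the
    image relief, smooth the residual, and look for larger calibres."""
    vessel_map = np.zeros_like(image, dtype=float)
    residual = image.astype(float)
    for _ in range(levels):
        detected = white_tophat(residual, footprint=disk(radius))
        vessel_map = np.maximum(vessel_map, detected)
        residual = ndi.gaussian_filter(residual - detected, sigma=1.5)
        radius *= 2        # target larger vessel calibres at coarser levels
    return vessel_map
```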

13.

Background

Differences in sample collection, biomolecule extraction, and instrument variability introduce bias to data generated by liquid chromatography coupled with mass spectrometry (LC-MS). Normalization is used to address these issues. In this paper, we introduce a new normalization method using the Gaussian process regression model (GPRM) that utilizes information from individual scans within an extracted ion chromatogram (EIC) of a peak. The proposed method is particularly applicable for normalization based on the analysis order of LC-MS runs. Our method uses measurement variabilities estimated through LC-MS data acquired from quality control samples to correct for bias caused by instrument drift. A maximum likelihood approach is used to find the optimal parameters for the fitted GPRM. We review several normalization methods and compare their performance with the GPRM.

Results

To evaluate the performance of different normalization methods, we consider LC-MS data from a study where a metabolomic approach is utilized to discover biomarkers for liver cancer. The LC-MS data were acquired by analysis of sera from liver cancer patients and cirrhotic controls. In addition, LC-MS runs from a quality control (QC) sample are included to assess the run-to-run variability and to evaluate the ability of the various normalization methods to reduce this undesired variability. Also, ANOVA models are applied to the normalized LC-MS data to identify ions with intensity measurements that are significantly different between cases and controls.

Conclusions

One of the challenges in using label-free LC-MS for quantitation of biomolecules is systematic bias in measurements. Several normalization methods have been introduced to overcome this issue, but there is no universally applicable approach at the present time. Each data set should be carefully examined to determine the most appropriate normalization method. We review here several existing methods and introduce the GPRM for normalization of LC-MS data. Through our in-house data set, we show that the GPRM outperforms other normalization methods considered here, in terms of decreasing the variability of ion intensities among quality control runs.
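
A sketch of order-based GPRM normalization for a single ion, using scikit-learn's Gaussian process (whose hyperparameters are fitted by maximizing the marginal likelihood, matching the maximum likelihood approach above), is given below; the kernel choice and correction scheme are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

def gpr_drift_correct(run_order, intensity, qc_idx):
    """Correct instrument drift for one ion: fit a Gaussian process to the
    QC-run intensities as a function of analysis order, then subtract the
    predicted drift deviation from every run (on the log scale)."""
    X_qc = run_order[qc_idx].reshape(-1, 1)
    y_qc = np.log2(intensity[qc_idx])
    kernel = ConstantKernel() * RBF(length_scale=10.0) + WhiteKernel()
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_qc, y_qc)            # hyperparameters fitted by ML-II
    drift = gp.predict(run_order.reshape(-1, 1))
    return 2 ** (np.log2(intensity) - (drift - y_qc.mean()))
```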

14.
Protein-protein interactions govern almost all biological processes and the underlying functions of proteins. The interaction sites of a protein depend on its 3D structure, which in turn depends on the amino acid sequence. Hence, prediction of protein function from its primary sequence is an important and challenging task in bioinformatics. Identification of the amino acids (hot spots) that lead to the characteristic frequency signifying a particular biological function is a tedious job in proteomic signal processing. In this paper, we propose a new and promising technique for identification of hot spots in proteins using an efficient time-frequency filtering approach known as S-transform filtering. The S-transform is a powerful linear time-frequency representation and is especially useful for filtering in the time-frequency domain. The potential of the new technique is analyzed in identifying hot spots in proteins, and the results obtained are compared with existing methods. The results demonstrate that the proposed method is superior to its counterparts and is consistent with results based on biological methods for identification of hot spots. The proposed method also reveals some new hot spots which need further investigation and validation by the biological community.
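
A compact implementation of the underlying idea — an EIIP-mapped sequence analyzed with a discrete S-transform and read off at a characteristic frequency — is sketched below; the EIIP values are as commonly tabulated, and the characteristic frequency must come from prior consensus-spectrum analysis (it is an input here, not derived):

```python
import numpy as np

# electron-ion interaction potential (EIIP) values, as commonly tabulated
EIIP = {'A': 0.0373, 'R': 0.0959, 'N': 0.0036, 'D': 0.1263, 'C': 0.0829,
        'Q': 0.0761, 'E': 0.0058, 'G': 0.0050, 'H': 0.0242, 'I': 0.0000,
        'L': 0.0000, 'K': 0.0371, 'M': 0.0823, 'F': 0.0946, 'P': 0.0198,
        'S': 0.0829, 'T': 0.0941, 'W': 0.0548, 'Y': 0.0516, 'V': 0.0057}

def s_transform(x):
    """Discrete Stockwell (S-) transform via the frequency-domain
    formulation: voice n is the inverse FFT of the spectrum shifted by n
    and tapered with a frequency-dependent Gaussian window."""
    N = len(x)
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                    # wrapped frequency index
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(x)                            # DC voice
    for n in range(1, N // 2 + 1):
        S[n] = np.fft.ifft(np.roll(X, -n) *
                           np.exp(-2 * np.pi ** 2 * m ** 2 / n ** 2))
    return S

def hot_spot_profile(sequence, char_freq):
    """Per-residue energy of the S-transform at the characteristic
    frequency (cycles/residue, 0 < char_freq <= 0.5); peaks suggest
    hot-spot locations."""
    signal = np.array([EIIP[aa] for aa in sequence])
    S = s_transform(signal - signal.mean())
    row = int(round(char_freq * len(signal)))
    return np.abs(S[row]) ** 2
```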

15.
Pluripotent stem cells are able to self-renew and to differentiate into all adult cell types. Many studies report data describing these cells and characterize them in molecular terms. Machine learning yields classifiers that can accurately identify pluripotent stem cells, but there is a lack of studies yielding minimal sets of best biomarkers (genes/features). We assembled gene expression data of pluripotent stem cells and non-pluripotent cells from the mouse. After normalization and filtering, we applied machine learning, classifying samples into pluripotent and non-pluripotent with high cross-validated accuracy. Furthermore, to identify minimal sets of best biomarkers, we used three methods: information gain, random forests, and a wrapper combining a genetic algorithm and a support vector machine (GA/SVM). We demonstrate that the GA/SVM biomarkers work best in combination with each other; pathway and enrichment analyses show that they cover the widest variety of processes implicated in pluripotency. The GA/SVM wrapper yields the best biomarkers, no matter which classification method is used. The consensus best biomarker based on the three methods is Tet1, implicated in pluripotency just recently. The best biomarker based on the GA/SVM wrapper approach alone is Fam134b, possibly a missing link between pluripotency and some standard surface markers of unknown function processed by the Golgi apparatus.
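
The GA/SVM wrapper can be sketched as a small genetic algorithm whose fitness is the cross-validated accuracy of a linear SVM on the selected features; the population size, generation count, and mutation rate below are assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ga_svm_select(X, y, n_pop=40, n_gen=30, p_mut=0.02, seed=None):
    """Wrapper feature selection: evolve boolean feature masks, scoring
    each by 5-fold cross-validated accuracy of a linear SVM."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.random((n_pop, n_feat)) < 0.05       # start with sparse masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(kernel='linear'), X[:, mask], y, cv=5).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][:n_pop // 2]]  # truncation
        children = []
        for _ in range(n_pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut               # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    return np.flatnonzero(best)                               # selected genes
```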

16.
With the development of IT convergence technologies, users can now more easily access useful information, and diverse, far-reaching information is rapidly produced and instantly distributed in digitized form. Studies continue to seek more efficient methods of delivering information to a greater number of users. Image filtering, which extracts features of interest from images, was developed to address the weakness of collaborative filtering, namely its limitation to superficial data analysis. However, image filtering has its own weakness: it requires complicated calculations to obtain the similarity between images. In this study, to resolve these problems, we propose associative image filtering based on a mining method utilizing the harmonic mean. Using data mining's Apriori algorithm, this study investigated the association among preferred images in an associative image group and obtained a prediction based on the user preference mean. In doing so, we observed a positive relationship between image preferences and the distances between images' color histograms. The preference mean was calculated using the arithmetic mean, the geometric mean, and the harmonic mean; performance analysis showed that the harmonic mean had the highest accuracy, so the harmonic mean is used in associative image filtering to anticipate preferences. When accuracy was tested with MAE, the proposed method demonstrated an improvement of approximately 12% on average over previous collaborative image filtering.
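
The two computational ingredients — harmonic-mean preference prediction and color-histogram distance — can be sketched as follows (the bin count and the Euclidean histogram distance are assumptions, and ratings are assumed strictly positive):

```python
import numpy as np

def harmonic_mean_prediction(ratings):
    """Predict a preference as the harmonic mean of the ratings received
    by the associated images; the harmonic mean damps the influence of a
    few high ratings compared with the arithmetic mean."""
    r = np.asarray(ratings, dtype=float)        # ratings must be > 0
    return len(r) / np.sum(1.0 / r)

def histogram_distance(img_a, img_b, bins=16):
    """Euclidean distance between normalized RGB color histograms, the
    image similarity used to form associative image groups."""
    h_a, _ = np.histogramdd(img_a.reshape(-1, 3), bins=bins,
                            range=[(0, 256)] * 3)
    h_b, _ = np.histogramdd(img_b.reshape(-1, 3), bins=bins,
                            range=[(0, 256)] * 3)
    return np.linalg.norm(h_a / h_a.sum() - h_b / h_b.sum())
```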

17.
Many recent microarrays hold an enormous number of probe sets, thus raising many practical and theoretical problems in controlling the false discovery rate (FDR). Biologically, it is likely that most probe sets are associated with unexpressed genes, so the measured values are simply noise due to non-specific binding; many other probe sets are associated with non-differentially-expressed (non-DE) genes. In an analysis to find DE genes, these probe sets contribute to the false discoveries, so it is desirable to filter them out prior to analysis. In the methodology proposed here, we first fit a robust linear model for probe-level Affymetrix data that accounts for probe and array effects. We then develop a novel procedure called FLUSH (Filtering Likely Uninformative Sets of Hybridizations), which excludes probe sets that have statistically small array effects or large residual variance. This filtering procedure was evaluated on a publicly available data set from a controlled spiked-in experiment, as well as on a real experimental data set of a mouse model for retinal degeneration. In both cases, FLUSH filtering improves the sensitivity in the detection of DE genes compared to analyses using unfiltered, presence-filtered, intensity-filtered and variance-filtered data. A freely available package called FLUSH implements the procedures and graphical displays described in the article.
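
A simplified FLUSH-style filter is sketched below: an additive probe-plus-array model is fitted by median polish, and probe sets with small array effects or large residual variance are dropped. The cutoffs are illustrative, not the package defaults:

```python
import numpy as np

def flush_filter(probe_sets, effect_cut=0.1, resid_cut=0.5):
    """probe_sets: dict of name -> (probes x arrays) intensity matrix.
    Keep probe sets whose array effects vary enough (informative signal
    across arrays) and whose residual variance is small (clean fit)."""
    keep = []
    for name, Y in probe_sets.items():
        r = np.log2(Y.astype(float))
        array_eff = np.zeros(r.shape[1])
        for _ in range(10):                          # median polish
            r -= np.median(r, axis=1, keepdims=True)     # probe effects out
            col = np.median(r, axis=0)
            array_eff += col                             # accumulate array effects
            r -= col
        if array_eff.var() >= effect_cut and r.var() <= resid_cut:
            keep.append(name)
    return keep
```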

18.
Image denoising has a profound impact on the precision of estimated parameters in diffusion kurtosis imaging (DKI). This work first proposes an approach to constructing a DKI phantom that can be used to evaluate how well denoising algorithms improve the reliability of DKI parameter estimation. The phantom was constructed from a real DKI dataset of a human brain, and the pipeline used to construct it consists of diffusion-weighted (DW) image filtering, diffusion and kurtosis tensor regularization, and DW image reconstruction. The phantom preserves the image structure while minimizing image noise, and thus can be used as ground truth in the evaluation. Second, we used the phantom to evaluate three representative non-local means (NLM) algorithms. Results showed that one vector-based NLM scheme, which uses DWI data with redundant information acquired at different b-values, produced the most reliable estimation of DKI parameters in terms of mean squared error (MSE), bias and standard deviation (Std). The result of the comparison based on the phantom was consistent with those based on real datasets.
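
Evaluation of one denoising scheme against the phantom can be sketched with skimage's NLM implementation; note this scores voxel-level DW-image error, whereas the paper computes MSE, bias, and Std on the fitted DKI parameters, and all filter parameters below are assumptions:

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def evaluate_denoiser(noisy_dwi, phantom_dwi):
    """Score one NLM configuration against the noise-free phantom:
    denoise the noisy diffusion-weighted volume, then summarize the
    per-voxel error as MSE, bias, and standard deviation."""
    sigma = np.mean(estimate_sigma(noisy_dwi))
    denoised = denoise_nl_means(noisy_dwi, h=1.15 * sigma, sigma=sigma,
                                patch_size=3, patch_distance=5,
                                fast_mode=True)
    err = denoised - phantom_dwi
    return {'MSE': np.mean(err ** 2), 'Bias': np.mean(err),
            'Std': np.std(err)}
```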

19.
Low-density quantitative real-time PCR (qPCR) arrays are often used to profile expression patterns of microRNAs in various biological milieus. To achieve accurate analysis of miRNA expression, non-biological sources of variation in the data should be removed through precise normalization. We have systematically compared the performance of 19 normalization methods on different subsets of a real miRNA qPCR dataset that covers 40 human tissues. After robustly modeling the mean squared error (MSE) in the normalized data, we demonstrate that lower variability between replicates is achieved using several methods not previously applied to high-throughput miRNA qPCR data. Normalization methods that use spline or wavelet smoothing to estimate and remove Cq-dependent non-linearity between pairs of samples best reduced the MSE of differences in Cq values of replicate samples. These methods also retained between-group variability in different subsets of the dataset.
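
A pairwise spline-based correction of Cq-dependent non-linearity — the family of methods found to perform best — can be sketched as follows; the smoothing factor is left at scipy's default, and exact ties are jittered so the spline fit is well-defined:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_normalize_cq(cq_sample, cq_reference, s=None):
    """Fit a smoothing spline to the pairwise Cq differences as a
    function of the reference Cq, then subtract the fitted trend."""
    order = np.argsort(cq_reference)
    x = cq_reference[order]
    y = (cq_sample - cq_reference)[order]
    x = x + np.arange(len(x)) * 1e-9     # break exact ties (x must increase)
    trend = UnivariateSpline(x, y, k=3, s=s)
    return cq_sample - trend(cq_reference)
```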

20.
This paper analyzes why the normalization methods in common use cause misclassification on tumor gene-expression microarrays, and proposes a normalization method based on class means. The method normalizes gene expression profiles in both directions (genes and samples) and intertwines the normalization process with clustering, using the clustering results to revise the reference expression levels. Five tumor microarray datasets were selected, and hierarchical clustering and K-means clustering were applied, at different variance levels, to expression data processed with the common normalization methods and with the class-mean-based normalization, and the clustering results were compared. Experimental results show that class-mean-based normalization effectively improves the quality of the clustering results for tumor gene expression profiles.
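
A hedged sketch of intertwining two-way normalization with clustering is given below, using K-means over samples; the update rule for the reference expression levels is an assumption standing in for the paper's class-mean correction, and genes are assumed non-constant:

```python
import numpy as np
from sklearn.cluster import KMeans

def class_mean_normalize(X, n_clusters=3, n_iter=5):
    """X: genes x samples. Scale genes (rows), center arrays (columns),
    cluster the samples, then pull each cluster's mean profile toward the
    global mean so the reference expression level reflects the class
    structure; repeat with refreshed cluster labels."""
    X = np.asarray(X, dtype=float).copy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X.T)
    for _ in range(n_iter):
        X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
        X -= X.mean(axis=0, keepdims=True)                 # column centering
        global_ref = X.mean(axis=1, keepdims=True)
        for k in range(n_clusters):
            cols = labels == k
            class_ref = X[:, cols].mean(axis=1, keepdims=True)
            X[:, cols] -= class_ref - global_ref           # class-mean correction
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X.T)
    return X, labels
```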

