Similar Documents
20 similar documents found (search time: 15 ms)
1.
In this paper, morphological transformations are used to detect the unevenly illuminated background of text images captured under poor lighting and to produce an illumination-normalized result. An uneven-illumination normalization algorithm based on the morphological Top-Hat transform is developed and verified through three procedures. The first procedure employs the classical opening-based Top-Hat operator. To optimize and refine the classical Top-Hat transform, the second procedure introduces the notion of multi-direction illumination and applies opening by reconstruction and closing by reconstruction with multi-direction structuring elements. Finally, the multi-direction images are merged into the final evenly illuminated image. The performance of the proposed algorithm is illustrated and verified on ideal synthetic images and camera-captured images with backgrounds characterized by poor lighting conditions.
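As a rough illustration of the first, classical procedure only (not the authors' multi-direction reconstruction pipeline), a white top-hat background subtraction might look like the following Python sketch; the structuring-element radius is an assumed parameter:

```python
import numpy as np
from skimage import io
from skimage.morphology import disk, opening

def tophat_normalize(image, radius=15):
    """White top-hat: subtract an opening-based estimate of the uneven background."""
    background = opening(image, disk(radius))    # smooth illumination estimate
    return np.clip(image - background, 0.0, 1.0)

img = io.imread("text_image.png", as_gray=True)  # grayscale float in [0, 1]
normalized = tophat_normalize(img, radius=15)
# Dark text on a bright background would instead use the dual (black) top-hat.
```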

2.
Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on the analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images and used, together with other relevant data such as the position of the sun, date, time, geographic information and weather conditions, to predict the PM2.5 index. The results demonstrate that the image analysis method predicts PM2.5 indexes well, and that different features carry different levels of significance in the prediction.
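A minimal sketch of this kind of pipeline, assuming hypothetical image features and a random-forest regressor (the paper's six features and its exact model are not specified here, and all data below are placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def image_features(img):
    """Three toy features: mean brightness, contrast, and a blue-channel haze cue."""
    gray = img.mean(axis=2)
    return np.array([gray.mean(), gray.std(), img[..., 2].mean()])

rng = np.random.default_rng(0)
photos = rng.random((200, 64, 64, 3))          # placeholder outdoor images
aux = rng.random((200, 3))                     # e.g. sun elevation, hour, humidity
X = np.hstack([np.array([image_features(p) for p in photos]), aux])
y = rng.random(200) * 300                      # placeholder PM2.5 index values
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.feature_importances_)              # per-feature significance in the prediction
```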

3.
《IRBM》2022,43(6):628-639
Objectives: Although the segmentation of retinal vessels in fundus images is of great significance for screening and diagnosing retinal vascular diseases, it remains difficult to detect low-contrast vessels and the information they provide around lesions, and to locate and segment micro-vessels in fine-grained areas. To overcome this problem, we propose an improved U-Net segmentation method, NoL-UNet.
Material and methods: First, the ordinary convolution blocks of the U-Net are replaced with random dropout convolution blocks, which better extract the relevant features of the image and effectively alleviate overfitting. Next, a NoL-Block attention mechanism added at the bottom of the encoding-decoding structure expands the receptive field and enhances the correlation of pixel information without increasing the number of parameters.
Results: The proposed method is verified on the fundus image datasets DRIVE, CHASE_DB1 and HRF. The AUC is 0.9861, 0.9891 and 0.9893, the Se is 0.8489, 0.8809 and 0.8476, and the Acc is 0.9697, 0.9826 and 0.9732 for DRIVE, CHASE_DB1 and HRF, respectively. The total number of parameters is 1.70M, and segmenting one DRIVE image takes 0.050 s.
Conclusion: Our method is statistically significantly different from U-Net and shows superior performance, with better accuracy and robustness, making it well suited to computer-aided diagnosis.
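A minimal PyTorch sketch of what such a random dropout convolution block could look like; the exact layer layout and dropout rate in NoL-UNet are assumptions here:

```python
import torch.nn as nn

class DropoutConvBlock(nn.Module):
    """Conv-BN-ReLU pair with channel-wise dropout, replacing a plain U-Net block."""
    def __init__(self, in_ch, out_ch, p=0.2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p),  # randomly zeroes whole feature maps during training
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```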

4.
Vegetation is an important part of the ecosystem, and estimating fractional vegetation cover is of great significance for monitoring vegetation growth in a region. Using Landsat TM images and HJ-1B images as data sources, an improved selective endmember linear spectral mixture model (SELSMM) was developed in this research to estimate fractional vegetation cover in the Huangfuchuan watershed, China. We compared the result with the vegetation coverage estimated by the linear spectral mixture model (LSMM) and tested the accuracy of both against field survey data to study the effectiveness of the different models. Results indicated that: (1) the RMSE of SELSMM based on TM images is the lowest, at 0.044; the RMSEs of LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are 0.052, 0.077 and 0.082, respectively, all higher than that of SELSMM based on TM images; (2) the R2 values of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are 0.668, 0.531, 0.342 and 0.336, respectively. Among these models, SELSMM based on TM images has the highest estimation accuracy and the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimating vegetation coverage and is also better at unmixing mixed pixels of TM images than those of HJ-1B images. The SELSMM based on TM images is therefore comparatively accurate and reliable for regional fractional vegetation cover estimation.
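The core of any linear spectral mixture model is per-pixel unmixing under non-negativity and sum-to-one constraints; below is a minimal sketch with placeholder endmember spectra (not the study's field-derived endmembers or its selective-endmember step):

```python
import numpy as np
from scipy.optimize import nnls

E = np.array([[0.05, 0.30],    # band 1 reflectance: [vegetation, soil]
              [0.45, 0.35],    # band 2
              [0.30, 0.40]])   # band 3 -> 3 bands x 2 endmembers
pixel = np.array([0.25, 0.40, 0.35])   # observed mixed-pixel spectrum

fractions, _ = nnls(E, pixel)  # non-negative least squares unmixing
fractions /= fractions.sum()   # renormalize to enforce the sum-to-one constraint
print("fractional vegetation cover:", fractions[0])
```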

5.
This paper proposes a noninvasive temperature-measurement method for high-intensity focused ultrasound (HIFU) therapy based on the gray-level co-occurrence matrix of Hadamard-transform spectrum images (Hadamard-GLCM). Fresh ex vivo porcine tissue was irradiated with HIFU, subtraction images were computed from B-mode ultrasound images acquired before and after irradiation, and the subtraction images were processed with the Hadamard transform to obtain spectrum images; the inertia moment of the gray-level co-occurrence matrix of the spectrum image was then used as the information parameter reflecting temperature change. Experiments show that the Hadamard-GLCM inertia moment (HGMI) fits temperature well linearly, not only within a single data set but also, approximately, across multiple data sets, with very similar slopes, goodness of fit closer to 1, small error, high temperature resolution, and strong fault tolerance. Compared with traditional temperature-measurement methods it has clear advantages and can provide an effective real-time basis for noninvasive thermometry during HIFU treatment.
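A hedged Python sketch of the Hadamard-GLCM computation chain; the ROI size, quantization, and GLCM offsets below are assumptions, and scikit-image's 'contrast' property is the GLCM inertia moment:

```python
import numpy as np
from scipy.linalg import hadamard
from skimage.feature import graycomatrix, graycoprops

def hadamard_glcm_inertia(diff_image):
    """Inertia moment of the GLCM of the Hadamard spectrum of a subtraction image."""
    n = 256                                      # assume a 256x256 ROI (power of two)
    H = hadamard(n)
    spectrum = H @ diff_image[:n, :n] @ H / n    # 2-D Hadamard transform
    q = (255 * (spectrum - spectrum.min()) / np.ptp(spectrum)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]   # GLCM inertia moment
```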

6.
This paper proposes a new method combining maximum entropy with an improved Pulse Coupled Neural Network (PCNN), in which maximum entropy is used to determine the number of PCNN iterations. The proposed method requires no manual selection of PCNN parameters, segments various medical images automatically and effectively, and uses maximum entropy to obtain the optimal segmentation result. The method is of significant value for applying PCNN theory to medical image segmentation.
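A much-simplified sketch of the idea: run PCNN iterations and keep the binary output whose Shannon entropy is maximal. The simplified PCNN dynamics and constants below are assumptions, not the paper's improved model:

```python
import numpy as np

def pcnn_max_entropy(img, n_iter=30, beta=0.2, decay=0.7, vt=20.0):
    """Segment img (assumed scaled to [0, 1]); pick the max-entropy firing map."""
    F = img.astype(float)
    Y = np.zeros_like(F)               # firing map
    T = np.ones_like(F)                # dynamic threshold
    best_Y, best_H = Y, -1.0
    for _ in range(n_iter):
        L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
             np.roll(Y, 1, 1) + np.roll(Y, -1, 1))   # 4-neighbor linking
        U = F * (1 + beta * L)                       # internal activity
        Y = (U > T).astype(float)                    # neurons fire
        T = decay * T + vt * Y                       # threshold decays, spikes refire
        p = Y.mean()
        if 0 < p < 1:                                # entropy of the binary map
            H = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
            if H > best_H:
                best_H, best_Y = H, Y
    return best_Y
```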

7.
In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set contains only one image per person; this situation is referred to as the one-sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot handle the one-sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well on the one-sample problem. It then identifies four reasons that make the one-sample problem inherently difficult: the small-sample-size problem, the lack of representative samples, underestimated intra-class variation, and overestimated inter-class variation. Based on this analysis, the paper proposes enlarging the training set using inter-class relationships and extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method.

8.
This article considers the asymptotic estimation theory for the log relative potency in a symmetric parallel bioassay when uncertain prior information about the true log relative potency is treated as a known quantity. Three classes of point estimators are proposed: the unrestricted estimator, the shrinkage restricted estimator, and the shrinkage preliminary test estimator. Their asymptotic mean squared errors are derived and compared, and the relative dominance picture of the estimators is presented. Interestingly, the proposed shrinkage preliminary test estimator dominates the unrestricted estimator over a wider range than the usual preliminary test estimator does. Moreover, the size of the preliminary test is more appropriate than that of the usual preliminary test estimator.
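A generic numeric sketch of a shrinkage preliminary-test estimator: test H0: theta = theta0 and shrink toward theta0 when H0 is not rejected. The shrinkage weight and the normal test below are illustrative assumptions, not the paper's asymptotic construction:

```python
from scipy import stats

def shrinkage_pretest(theta_hat, se, theta0, alpha=0.05, k=0.5):
    """Shrink the unrestricted estimate toward the prior value if H0 survives."""
    z = (theta_hat - theta0) / se
    if abs(z) < stats.norm.ppf(1 - alpha / 2):   # fail to reject H0: theta = theta0
        return k * theta0 + (1 - k) * theta_hat  # shrinkage toward prior information
    return theta_hat                             # otherwise keep the unrestricted estimate
```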

9.
Two minimum mean square error estimators of heritability are proposed and compared with the conventional regression estimator using live data.

10.
Alignment of a diamond or glass knife with the face of an epoxy block prior to sectioning can be facilitated by high-intensity illumination. Such light produces a brilliant reflection of the knife edge on the block face in the form of a bright band that diminishes in height as the knife approaches the block face. Excellent visibility of the block face and knife edge is afforded at magnifications up to 40×. Allowing the block to cool for 1 min counteracts the thermal effects of the light before sectioning commences. This technique provides a convenient alternative to the use of reflecting devices for aligning the knife during its approach to the block.

11.
Video panoramic image stitching is extremely time-consuming, among other challenges. We present a new algorithm with two components. (i) Improved, self-adaptive selection of Harris corners. Successful stitching relies heavily on the accuracy of corner selection. We divide each image into numerous regions and select corners within each region according to the normalized variance of the region's grayscales. Such selection is self-adaptive and guarantees that corners are distributed in proportion to region texture information; clustering of corners is also avoided. (ii) Multiple-constraint corner matching. The traditional Random Sample Consensus (RANSAC) algorithm is inefficient, especially when handling a large number of images with similar features. We filter out many inappropriate corners according to their position information, then generate candidate matching pairs based on the grayscales of regions adjacent to each corner, and finally apply multiple constraints to every two pairs to remove incorrectly matched ones. By significantly reducing the number of iterations needed in RANSAC, the stitching can be performed much more efficiently. Experiments demonstrate that (i) our corner matching is four times faster than the normalized cross-correlation (NCC) rough match in RANSAC and (ii) the generated panoramas feature a smooth transition in overlapping image areas and satisfy real-time human visual requirements.
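A sketch of the region-adaptive corner selection in Python with OpenCV; the grid size, total corner budget, and Harris parameters are assumptions:

```python
import cv2
import numpy as np

def adaptive_harris(gray, grid=4, total=400):
    """Select Harris corners per region, in proportion to normalized region variance."""
    h, w = gray.shape
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    cells = [(r, c) for r in range(grid) for c in range(grid)]
    var = np.array([gray[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid].var()
                    for r, c in cells])
    quota = np.maximum(1, (total * var / var.sum()).astype(int))  # per-region budget
    corners = []
    for (r, c), n in zip(cells, quota):
        sub = resp[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
        ys, xs = np.unravel_index(np.argsort(sub, axis=None)[-n:], sub.shape)
        corners += [(x + c*w//grid, y + r*h//grid) for x, y in zip(xs, ys)]
    return corners
```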

12.
Regression-based estimation of the leaf area of Schisandra sphenanthera using image processing
The leaf area of Schisandra sphenanthera was measured by digital image processing, and regression equations relating leaf area to leaf-shape characteristics (leaf length, leaf width, and the length-width product) were constructed separately for leaves on old branches and on new shoots. Validation showed that the length-width product correlates most strongly with leaf area, and that the regression equation built on it estimates leaf area most accurately, providing a simple, scientific, and nondestructive method for measuring the leaf area of Schisandra sphenanthera.
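A minimal sketch of the reported best-performing regression (area against the length-width product); the numbers below are placeholder measurements, not the study's data:

```python
import numpy as np

length = np.array([8.2, 9.1, 10.4, 11.3, 12.0])    # cm, hypothetical leaves
width  = np.array([3.1, 3.4, 3.9, 4.2, 4.5])       # cm
area   = np.array([17.8, 21.5, 28.1, 32.9, 37.4])  # cm^2, e.g. by pixel counting

lw = length * width
b, a = np.polyfit(lw, area, 1)          # fit: area = a + b * (L * W)
print(f"area = {a:.2f} + {b:.3f} * L*W")
```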

13.
Normalization is an important step in the analysis of quantitative proteomics data. If this step is ignored, systematic biases can lead to incorrect conclusions about regulation. Most statistical procedures for normalizing proteomics data have been borrowed from genomics, where their development focused on the removal of so-called 'batch effects.' In general, a typical normalization step in proteomics works under the assumption that most peptides/proteins do not change; scaling is then used to give a median log-ratio of 0. The focus of this work was to identify other factors, derived from knowledge of the variables in proteomics, that might be used to improve normalization. Here we have examined the multi-laboratory data sets from Phase I of the NCI's CPTAC program. Surprisingly, the most important bias variables affecting peptide intensities within labs were retention time and charge state. The magnitude of these observations was exaggerated in samples of unequal concentrations or "spike-in" levels, presumably because the average precursor charge for peptides with higher charge state potentials is lower at higher relative sample concentrations. These effects are consistent with reduced protonation during electrospray and demonstrate that the physical properties of the peptides themselves can serve as good reporters of systematic biases. Between labs, retention time, precursor m/z, and peptide length were most commonly the top-ranked bias variables, ahead of the commonly used average intensity (A). A larger set of variables was then used to develop a stepwise normalization procedure. This statistical model was found to perform as well as or better than other commonly used methods on the CPTAC mock biomarker data. Furthermore, the method described here does not require a priori knowledge of the systematic biases in a given data set. These improvements can be attributed to the inclusion of variables other than average intensity during normalization.

The number of laboratories using MS as a quantitative tool for protein profiling continues to grow, propelling the field past simple qualitative measurements (i.e. cataloging), with the aim of establishing itself as a robust method for detecting proteomic differences. By analogy, semiquantitative proteomic profiling by MS can be compared with measurement of relative gene expression by genomics technologies such as microarrays or, newer, RNA-seq measurements. While proteomics is disadvantaged by the lack of a molecular amplification system for proteins, successful reports from discovery experiments are numerous in the literature and are increasing with advances in instrument resolution and sensitivity.

In general, methods for performing relative quantitation can be broadly divided into two categories: those employing labels (e.g. iTRAQ, TMT, and SILAC (1)) and so-called "label-free" techniques. Labeling methods involve adding some form of isobaric or isotopic label(s) to the proteins or peptides prior to liquid chromatography-tandem MS (LC-MS/MS) analysis. Chemical labels are typically applied during sample processing, and isotopic labels are commonly added during cell culture (i.e. metabolic labeling). One advantage of label-based methods is that the two (or more) differently labeled samples can be mixed and run in single LC-MS analyses. This is in contrast to label-free methods, which require the samples to be run independently and the data to be aligned post-acquisition.

Many labs employ label-free methods because they are applicable to a wider range of samples and require fewer sample processing steps. Moreover, data from qualitative experiments can sometimes be re-analyzed using label-free software tools to provide semiquantitative data. Advances in these software tools have been extensively reviewed (2). While analysis of label-based data primarily uses full MS scan (MS1) or tandem MS scan (MS2) ion current measurements, analysis of label-free data can employ simple counts of confidently identified tandem mass spectra (3). So-called spectral counting makes the assumption that the number of times a peptide is identified is proportional to its concentration. These values are sometimes summed across all peptides for a given protein and scaled by protein length. Relative abundance can then be calculated for any peptide or protein of interest. While this approach is easy to perform, its usefulness is limited in smaller data sets and/or when counts are low.

This report focuses only on the use of ion current measurements in label-free data sets, specifically those calculated from extracted MS1 ion chromatograms (XICs). In general terms, raw intensity values (i.e. ion counts in arbitrary units) cannot be used for quantitation in the absence of cognate internal standards, because individual ion intensities depend on a response factor related to the chemical properties of the molecule. Intensities are instead almost always reserved for relative determinations. Furthermore, retention times are sometimes used to align the chromatograms between runs to ensure higher confidence prior to calculating relative intensities. This step is crucial for methods without corresponding identity information, particularly for experiments performed on low-resolution instruments. To support a label-free workflow, peptide identifications are commonly made from tandem mass spectra (MS/MS) acquired along with the direct electrospray signal (MS1). Alternatively, in workflows seeking deeper coverage, interesting MS1 components can be targeted for identification by MS/MS in follow-up runs (4).

"Rolling up" the peptide ion information to the peptide and protein level is also done in different ways in different labs. In most cases, "peptide intensity" or "peptide abundance" is the summed or averaged value of the identified peptide ions. How the peptide information is transferred to the protein level differs between methods but typically involves summing one or more peptide intensities following parsimony analysis. One such solution is the "Top 3" method developed by Silva and co-workers (5).

Because peptides in label-free methods lack labeled analogs and require separate runs, they are more susceptible to analytical noise and systematic variations. These obscuring variations can come from many sources, including sample preparation, operator error, chromatography, electrospray, and even the data analysis itself. While analytical noise (e.g. chemical interference) is difficult to selectively reject, systematic biases can often be removed by statistical preprocessing. The goal of these procedures is to normalize the data prior to calculating relative abundance. Failure to resolve these issues is the common origin of batch effects, previously described for genomics data, which can severely limit meaningful interpretation of experimental data (6, 7). These effects have also recently been explored in proteomics data (8).

Methods used to normalize proteomics data have been largely borrowed from the microarray community or are based on a simple mean/median intensity-ratio correction. Methods developed for microarrays and/or gene chips and applied to proteomics data include scaling, linear regression, nonlinear regression, and quantile normalization (9). Work has also been done to improve normalization by subselecting a peptide basis (10). Other work suggests that linear regression followed by run-order analysis works better than the other methods tested (11). Key to this last method is the incorporation of a variable other than intensity during normalization. It is also important to note that little work has been done toward identifying the underlying sources of these variations in proteomics data. Although cause and effect are often difficult to determine, understanding these relationships will undoubtedly help remove and avoid the major underlying sources of systematic variation.

In this report, we have attempted to combine our efforts focused on understanding variability with the work initiated by others for normalizing ion-current-based label-free proteomics data. We have identified several major variables commonly affecting peptide ion intensities both within and between labs. As test data, we used a subset of raw data acquired during Phase I of the National Cancer Institute's (NCI) Clinical Proteomics Technology Assessment for Cancer (CPTAC) program. With these data, we were able to develop a statistical model to rank bias variables and normalize the intensities using stepwise, semiparametric regression. The data analysis methods have been implemented within the National Institute of Standards and Technology (NIST) MS quality control (MSQC) pipeline. Finally, we have developed R code for removing systematic biases and have tested it using a reference standard spiked into a complex biological matrix (i.e. yeast cell lysate).
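Two of the ideas above in a minimal Python sketch: global scaling to a median log-ratio of zero, and removal of an intensity trend against a single bias variable such as retention time. A LOWESS fit stands in here for the paper's stepwise semiparametric regression:

```python
import numpy as np
import statsmodels.api as sm

def median_scale(log_int, log_ref):
    """Global scaling: shift so the median log-ratio to a reference run is zero."""
    return log_int - np.median(log_int - log_ref)

def detrend_by_variable(log_int, bias_var):
    """Remove the intensity trend against one bias variable (e.g. retention time)."""
    trend = sm.nonparametric.lowess(log_int, bias_var, frac=0.3, return_sorted=False)
    return log_int - trend + log_int.mean()   # detrend while keeping the overall level
```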

14.
Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing structure on the estimate. The amount of improvement depends on how closely the assumed structure represents the dependencies in the data. Therefore, the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator inform about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150–350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix is best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions that reflected the physical distances and orientation tuning properties of the cells: the density of positive ‘excitatory’ interactions decreased rapidly with geometric distance and with differences in orientation preference, whereas negative ‘inhibitory’ interactions were less selective. Because of its superior performance, this ‘sparse+latent’ estimator likely provides a more physiologically relevant representation of functional connectivity in densely sampled recordings than the sample correlation matrix.
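A partial Python sketch: a cross-validated graphical lasso recovers a sparse partial-correlation component of the kind described. The low-rank latent component of the paper's sparse+latent estimator is not modeled here, and the data shape is a placeholder:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

X = np.random.randn(500, 150)             # placeholder: time bins x neurons
model = GraphicalLassoCV(cv=5).fit(X)     # cross-validated sparsity level

# Convert the sparse precision matrix into a graph of pairwise partial correlations.
P = model.precision_
partial_corr = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))
np.fill_diagonal(partial_corr, 1.0)       # nonzero off-diagonals define interactions
```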

15.
People usually see things with frontal viewing and avoid lateral viewing (or eccentric gaze), in which the directions of the head and eyes differ greatly. Lateral viewing interferes with attentive visual search performance, probably because the head is directed away from the target and/or because the head and eyes are misaligned. In this study, we examined which of these factors is the primary source of interference by conducting a visual identification experiment in which a target was presented in the peripheral visual field. The critical manipulation was the participants' head direction and fixation position: the head was directed at the fixation location, the target position, or the side opposite the fixation. Performance was highest when the head was directed at the target position even when the head and eyes were misaligned, suggesting that visual perception is influenced by both head direction and fixation position.

16.
This paper analyzes why commonly used normalization methods cause misclassification on tumor gene-expression microarrays and proposes a class-mean-based normalization method. The method standardizes the gene-expression profile in both directions (genes and samples), interleaves the normalization and clustering processes, and uses the clustering results to correct the reference expression levels. Five tumor microarray data sets were selected, and hierarchical and K-means clustering were applied at different variance levels to gene-expression data normalized by the common methods and by the class-mean method. The experimental results show that class-mean-based normalization effectively improves the quality of clustering results for tumor gene-expression profiles.
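A heavily simplified sketch of the interleaving idea (cluster, use class means as the reference level, re-standardize, repeat); the loop structure and constants are assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.cluster import KMeans

def class_mean_normalize(X, k=3, n_rounds=4):
    """Two-way standardization whose per-sample reference is the cluster mean,
    re-estimated each round from the current clustering."""
    Xn = X - X.mean(axis=1, keepdims=True)              # initial row (sample) centering
    for _ in range(n_rounds):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xn)
        class_ref = np.array([Xn[labels == c].mean() for c in labels])
        Xn = Xn - class_ref[:, None]                    # correct reference level per sample
        Xn = (Xn - Xn.mean(axis=0)) / (Xn.std(axis=0) + 1e-9)  # column (gene) standardization
    return Xn, labels
```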

17.
Structured illumination microscopy (SIM) with axial optical sectioning capability has found widespread application in three-dimensional live-cell imaging in recent years, since it combines high sensitivity, short image acquisition times, and high spatial resolution. To obtain one sectioned slice, three raw images with a fixed phase shift, normally 2π/3, are generally required. In this paper, we report a data processing algorithm based on the one-dimensional Hilbert transform that needs only two raw images with an arbitrary phase shift for each slice. The proposed algorithm differs in theory from the previous two-dimensional Hilbert spiral transform algorithm and has the advantages of a simpler data processing procedure, faster computation, and better reconstructed image quality. The validity of the scheme is verified by imaging biological samples in our DMD-based, LED-illuminated SIM system.
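A sketch of two-frame optical sectioning via the 1-D Hilbert transform: the difference of two phase-shifted raw images is a fringe pattern whose envelope (the sectioned slice) is the modulus of its analytic signal. Applying hilbert() row-wise assumes the fringes run along image rows; this is an illustration of the general principle, not the paper's full algorithm:

```python
import numpy as np
from scipy.signal import hilbert

def two_frame_section(I1, I2):
    """Demodulate the sectioned slice from two arbitrarily phase-shifted raw images."""
    diff = I1.astype(float) - I2.astype(float)  # subtraction removes the wide-field term
    analytic = hilbert(diff, axis=1)            # 1-D Hilbert transform along the fringe axis
    return np.abs(analytic)                     # fringe envelope = optically sectioned image
```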

18.
A 3-D beam-scanning antenna array design is proposed that provides full 3-D spherical coverage and is suitable for various radar and body-worn devices in Body Area Network applications. The Array Factor (AF) of the proposed antenna is derived, and parameters such as directivity, Half Power Beam Width (HPBW) and Side Lobe Level (SLL) are calculated for varying array sizes. Simulations were carried out in MATLAB 2012b. The radiators are assumed isotropic, so mutual coupling effects are ignored. The proposed array shows considerable improvement over existing cylindrical and coaxial cylindrical arrays in terms of 3-D scanning, size, directivity, HPBW and SLL.
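A generic sketch of evaluating an array factor over the sphere for isotropic elements (mutual coupling ignored, as in the abstract); the element positions and uniform weights are placeholders, not the proposed 3-D geometry:

```python
import numpy as np

def array_factor(positions, weights, theta, phi, wavelength=1.0):
    """|AF| for element positions (N x 3) at observation angles (theta, phi)."""
    k = 2 * np.pi / wavelength
    u = np.array([np.sin(theta) * np.cos(phi),   # unit vector toward the observer
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    phase = k * positions @ u                    # per-element path-length difference
    return np.abs(np.sum(weights * np.exp(1j * phase)))

pos = np.random.rand(16, 3)                      # hypothetical 16-element array
af = array_factor(pos, np.ones(16), theta=np.pi / 4, phi=0.0)
```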

19.
20.
PACS-based medical image compression
Starting from the definitions of PACS and DICOM, this paper describes the requirements for and the algorithms of PACS-based medical image compression, and introduces the advantages of JPEG2000 for medical image compression.
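As a small illustration of JPEG2000's flexibility (lossless and rate-controlled lossy modes in one codec), assuming a Pillow build with OpenJPEG support; the file names and compression ratio are placeholders:

```python
from PIL import Image

img = Image.open("slice.png").convert("L")          # a grayscale medical image
img.save("slice_lossless.jp2", irreversible=False)  # reversible 5/3 wavelet (lossless)
img.save("slice_lossy.jp2", irreversible=True,
         quality_mode="rates", quality_layers=[20]) # ~20:1 rate-controlled lossy mode
```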
