Similar Documents
 Found 20 similar documents (search time: 31 ms)
1.
We have investigated the restoration of electron micrographs exhibiting blurring due to drift and rotation. Blurring due to drift arises in micrographs of a specimen that is moving relative to the image plane. A related problem is rotational blurring, which arises in micrographs of thin sections of helical particles viewed in cross section. The twist of the particle within the finite thickness of the section causes the image to appear rotationally blurred about the helical axis. Restoration algorithms were evaluated by applying them to blurred model images degraded by additive Gaussian noise. Model images were also used to investigate how an incorrect estimate of the point spread function describing the blur would affect the restoration. Where necessary, images were geometrically transformed to a space in which the point spread function of the blur can be treated as linear and space-invariant, since under these conditions the restoration algorithms are greatly simplified. In the case of the rotationally blurred images this was accomplished by transforming the image to polar coordinates. The restoration techniques were successfully applied to blurred micrographs of bacteriophage T4 and crystals of catalase. The quality of the restoration was judged by comparing the restored images to undegraded images. Application to micrographs of rotationally blurred cross sections of helical macrofibers of sickle hemoglobin resulted in a reduction in the amount of rotational blurring.
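The core restoration step described in this abstract, treating drift blur as convolution with a known, space-invariant point spread function and inverting it in Fourier space, can be sketched roughly as follows. This is only a minimal illustration under assumed values (a 9-pixel horizontal drift, a small regularization constant, a synthetic test image), not the authors' actual algorithm or data.

```python
import numpy as np

def drift_psf(shape, length):
    """Horizontal drift-blur PSF: a normalized line of `length` pixels."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def regularized_inverse_filter(blurred, psf, k=1e-2):
    """Restore a blurred image assuming a known, space-invariant PSF.

    conj(H) / (|H|^2 + k) is a simple regularized (Wiener-like) inverse that
    avoids dividing by the near-zeros of the blur transfer function.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))              # stand-in for a micrograph
    psf = drift_psf(image.shape, length=9)      # assumed 9-pixel drift
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
    blurred += rng.normal(0, 0.01, image.shape)   # additive Gaussian noise
    restored = regularized_inverse_filter(blurred, psf, k=1e-2)
    err = np.mean((restored - image) ** 2)
    print(f"mean squared error after restoration: {err:.4f}")
```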

2.
When palmprints are captured using non-contact devices, image blur is inevitably introduced by defocus, which degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. By analyzing the blur theoretically, stable features are shown to exist in the image across different degrees of blurring. A VO decomposition model is then used to obtain the structure and texture layers of the blurred palmprint images; the structure layer is stable under different degrees of blurring (a theoretical conclusion that is further verified by experiment). Next, an algorithm based on the weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity between palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results confirm the theoretical conclusion that the structure layer is stable across blurring scales, and show WRHOG to be a robust way of distinguishing blurred palmprints. The recognition results obtained with the proposed method on two palmprint databases (PolyU and Blurred–PolyU) are stable and superior to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, fast enough to meet real-time demands. The proposed method is therefore a feasible way of implementing blurred palmprint recognition.
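A minimal sketch of two of the building blocks named above, the Gaussian defocus degradation model and the normalized correlation coefficient used for matching, is given below. The VO decomposition and WRHOG feature extraction are omitted, and the blur levels and test patch are assumed for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_defocus(image, sigma):
    """Gaussian defocus degradation model: blur with a Gaussian PSF."""
    return gaussian_filter(image, sigma)

def normalized_correlation(a, b):
    """Normalized correlation coefficient between two feature maps/images."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    palm = rng.random((64, 64))               # stand-in palmprint patch
    for sigma in (0.5, 1.5, 3.0):             # assumed increasing defocus levels
        blurred = gaussian_defocus(palm, sigma)
        print(sigma, round(normalized_correlation(palm, blurred), 3))
```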

3.
We have examined the structure of hemoglobin S fibers, which are associated into large bundles, or fascicles. Electron micrographs of embedded and cross-sectioned fascicles provide an end-on view of the component fibers. The cross-sectional images are rotationally blurred as a result of the twist of the fiber within the finite thickness of the section. We have applied restoration techniques to recover a deblurred image of the fiber. The first step in this procedure involved correlation-averaging images of cross-sections of individual fibers to improve the signal-to-noise ratio. The rotationally blurred image was then geometrically transformed to polar coordinates. In this space, the rotational blur becomes a linear blur. The linearly blurred image is the convolution of the unblurred image with a point spread function that can be closely approximated by a square pulse. Deconvolution in Fourier space, followed by remapping to Cartesian coordinates, produced a deblurred image of the original micrograph. The deblurred images indicate that the fiber comprises 14 strands of hemoglobin S. This result confirms the fiber structure determined using helical reconstruction techniques and indicates that the association of fibers into ordered arrays does not alter their molecular structure.
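The geometric part of the pipeline, remapping the averaged cross-section to polar coordinates so that the rotational blur becomes a one-dimensional square-pulse convolution along the angular axis, and then deconvolving it in Fourier space, can be sketched as follows. The twist angle, grid sizes, and regularization constant are assumptions, and a random array stands in for the correlation-averaged image.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, n_r=128, n_theta=360):
    """Resample an image onto an (r, theta) grid centred on the image centre."""
    cy, cx = (np.array(image.shape) - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(image, coords, order=1, mode="nearest")

def deblur_angular(polar, blur_angle_deg, k=1e-2):
    """Remove rotational blur: in polar space it is a square-pulse
    convolution along theta, so divide it out row by row in Fourier space."""
    n_theta = polar.shape[1]
    width = max(1, int(round(blur_angle_deg / 360.0 * n_theta)))
    pulse = np.zeros(n_theta)
    pulse[:width] = 1.0 / width                  # square-pulse PSF
    H = np.fft.fft(pulse)
    G = np.fft.fft(polar, axis=1)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)    # regularized deconvolution
    return np.real(np.fft.ifft(F, axis=1))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cross_section = rng.random((129, 129))       # stand-in averaged cross-section
    polar = to_polar(cross_section)
    deblurred = deblur_angular(polar, blur_angle_deg=12.0)   # assumed twist
    print(polar.shape, deblurred.shape)
```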

4.
Xiang Li, Yi Sun. Cluster Computing, 2017, 20(4): 3003-3014
On an industrial production line, motion of the target is the main cause of blurred images in camera monitoring. Coded-exposure devices and circuits are designed here to restore images degraded by this motion blur. A given binary code sequence, representing the open or closed state of the shutter in the FPGA-driven CCD circuit, is used to control the exposure time. The sampled images are processed by a deconvolution algorithm; because the coded-exposure sequence preserves their high-frequency information, the blurred images can be restored. The deblurring problem is thereby converted from an ill-posed one into a well-posed one. Experiments demonstrate that, using coded exposure, the proposed device improves the quality of blurred images.
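The reason coded exposure turns deblurring into a well-posed problem is that the shutter code's frequency response avoids the deep nulls of a conventional open-shutter (box) exposure. The sketch below only compares the two transfer functions with an assumed pseudorandom 31-chop code; it is not the sequence used by the authors' FPGA-driven device.

```python
import numpy as np

# Assumed 31-chop binary shutter code (pseudorandom); the paper's actual
# FPGA-driven sequence is not reproduced here.
rng = np.random.default_rng(3)
coded = rng.integers(0, 2, size=31).astype(float)
coded[0] = 1.0                                   # ensure the shutter opens
box = np.ones(31)                                # conventional full exposure

n_fft = 512
H_coded = np.abs(np.fft.rfft(coded / coded.sum(), n_fft))
H_box = np.abs(np.fft.rfft(box / box.sum(), n_fft))

print(f"minimum |H| over frequency, box exposure:   {H_box.min():.4f}")
print(f"minimum |H| over frequency, coded exposure: {H_coded.min():.4f}")
# The box exposure has near-zero frequencies (information destroyed, so the
# inversion is ill-posed); a well-chosen code keeps |H| bounded away from
# zero, making the deconvolution that recovers the sharp image better posed.
```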

5.
The increasing number of demanding consumer image applications has led to increased interest in no-reference objective image quality assessment (IQA) algorithms. In this paper, we propose a new blind blur index for still images based on singular value similarity. The algorithm consists of three steps. First, a re-blurred image is produced by applying a Gaussian blur to the test image. Second, a singular value decomposition is performed on the test image and the re-blurred image. Finally, an image blur index is constructed from the similarity of their singular values. The experimental results obtained on four simulated databases demonstrate that the proposed algorithm correlates highly with human judgment when assessing blur or noise distortion of images.
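The three steps of the index can be sketched directly in a few lines: re-blur the test image with a Gaussian, take the singular values of both images, and measure their similarity. The cosine similarity and the blur width used here are assumptions for illustration and are not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_index(image, sigma=3.0):
    """No-reference blur index from singular value similarity.

    A sharp image changes a lot when re-blurred, so its singular values
    differ from those of the re-blurred copy; an already-blurred image
    changes little. Values near 1 therefore indicate a blurred input.
    """
    reblurred = gaussian_filter(image, sigma)
    s1 = np.linalg.svd(image, compute_uv=False)
    s2 = np.linalg.svd(reblurred, compute_uv=False)
    return float(np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2)))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    sharp = rng.random((128, 128))
    blurred = gaussian_filter(sharp, 2.0)
    print("sharp  :", round(blur_index(sharp), 4))
    print("blurred:", round(blur_index(blurred), 4))
```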

6.
We extend a neural network model, developed to examine neural correlates for the dynamic synthesis of edges from luminance gradients (Oğmen, 1993), to account for the effects of exposure duration, base blur and contrast on the perceived sharpness of edges. This model of REtino-COrtical Dynamics (RECOD) predicts that (i) a decrease in exposure duration causes an increase in the perceived blur and the blur discrimination threshold for edges, (ii) this increase in perceived blur is more pronounced for sharper edges than for blurred edges, (iii) perceived blur is independent of contrast while the blur discrimination threshold decreases with contrast, (iv) perceived blur increases with increasing base blur while the blur discrimination threshold has a nonmonotonic U-shaped dependence on base blur, (v) the perceived location of an edge shifts progressively towards the low-luminance side of the edge with increasing contrast, and (vi) perceived contrast of suprathreshold stimuli is essentially independent of spatial frequency over a wide range of contrast values. These predictions are shown to be in quantitative agreement with existing psychophysical data from the literature and with data collected on three observers to quantify the effect of exposure duration on perceived blur.

7.
We investigate an artificial neural network model with a modified Hebb rule. It is an auto-associative neural network similar to the Hopfield model and to the Willshaw model, and it has properties of both. A further property is that the patterns are sparsely coded and are stored in cycles of synchronous neural activity. For some ranges of parameters, these cycles of activity increase the capacity of the model. We discuss the basic properties of the model and some implementation issues, namely optimization of the algorithms. We describe the modification of the Hebb learning rule, the learning algorithm, the generation of patterns, the decomposition of patterns into cycles, and pattern recall.
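A minimal sketch of a sparse Hebbian auto-associator in the Hopfield/Willshaw spirit is given below, using a covariance-style Hebb modification and a k-winners-take-all threshold as assumed stand-ins; it illustrates storage and pattern recall but not the cycle decomposition or the authors' specific modified rule.

```python
import numpy as np

rng = np.random.default_rng(5)
N, P, K = 200, 10, 10          # neurons, stored patterns, active units per pattern

# Sparsely coded binary patterns: exactly K of N units active.
patterns = np.zeros((P, N))
for p in range(P):
    patterns[p, rng.choice(N, K, replace=False)] = 1.0

# Covariance (sparse-coding) Hebb rule, a common modification of the basic rule.
a = K / N
W = (patterns - a).T @ (patterns - a)
np.fill_diagonal(W, 0.0)

def recall(cue, steps=5):
    """Synchronous recall: keep the K most strongly driven units active."""
    state = cue.copy()
    for _ in range(steps):
        h = W @ state
        new = np.zeros(N)
        new[np.argsort(h)[-K:]] = 1.0   # K-winners-take-all thresholding
        state = new
    return state

# Cue pattern 0 with half of its active units removed.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: K // 2]] = 0.0
out = recall(cue)
overlap = out @ patterns[0] / K
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```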

8.
The perception of blur in images can be strongly affected by prior adaptation to blurry images or by spatial induction from blurred surrounds. These contextual effects may play a role in calibrating visual responses for the spatial structure of luminance variations in images. We asked whether similar adjustments might also calibrate the visual system for spatial variations in color. Observers adjusted the amplitude spectra of luminance or chromatic images until they appeared correctly focused, and repeated these measurements either before or after adaptation to blurred or sharpened images or in the presence of blurred or sharpened surrounds. Prior adaptation induced large and distinct changes in perceived focus for both luminance and chromatic patterns, suggesting that luminance and chromatic mechanisms are both able to adjust to changes in the level of blur. However, judgments of focus were more variable for color, and unlike luminance there was little effect of surrounding spatial context on perceived blur. In additional measurements we explored the effects of adaptation on threshold contrast sensitivity for luminance and color. Adaptation to filtered noise with a 1/f spectrum characteristic of natural images strongly and selectively elevated thresholds at low spatial frequencies for both luminance and color, thus transforming the chromatic contrast sensitivity function from lowpass to nearly bandpass. These threshold changes were found to reflect interactions between different spatial scales that bias sensitivity against the lowest spatial grain in the image, and may reflect adaptation to different stimulus attributes than the attributes underlying judgments of image focus. Our results suggest that spatial sensitivity for variations in color can be strongly shaped by adaptation to the spatial structure of the stimulus, but point to dissociations in these visual adjustments both between luminance and color and between different measures of spatial sensitivity.

9.

Background

The image formed by the eye's optics is inherently blurred by aberrations specific to an individual's eyes. We examined how visual coding is adapted to the optical quality of the eye.

Methods and Findings

We assessed the relationship between perceived blur and the retinal image blur resulting from high order aberrations in an individual's optics. Observers judged perceptual blur in a psychophysical two-alternative forced choice paradigm, on stimuli viewed through perfectly corrected optics (using a deformable mirror to compensate for the individual's aberrations). Realistic blur of different amounts and forms was computer simulated using real aberrations from a population. The blur levels perceived as best focused were close to the levels predicted by an individual's high order aberrations over a wide range of blur magnitudes, and were systematically biased when observers were instead adapted to the blur reproduced from a different observer's eye.

Conclusions

Our results provide strong evidence that spatial vision is calibrated for the specific blur levels present in each individual's retinal image and that this adaptation at least partly reflects how spatial sensitivity is normalized in the neural coding of blur.
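The computer simulation of realistic blur mentioned in the Methods, deriving a point spread function from a measured wavefront and convolving stimuli with it, can be sketched as follows. The wavefront here is a toy combination of higher-order terms, not a measurement from the population used in the study.

```python
import numpy as np

def psf_from_wavefront(wavefront_um, pupil_mask, wavelength_um=0.55):
    """PSF of an eye with the given wavefront error (in micrometres):
    squared modulus of the Fourier transform of the pupil function."""
    phase = 2 * np.pi * wavefront_um / wavelength_um
    pupil = pupil_mask * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

if __name__ == "__main__":
    n = 256
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    pupil_mask = (r2 <= 1.0).astype(float)
    # Toy higher-order aberration: coma-like and spherical-like terms (microns).
    wavefront = (0.15 * (3 * r2 - 2) * x + 0.10 * (6 * r2**2 - 6 * r2 + 1)) * pupil_mask
    psf = psf_from_wavefront(wavefront, pupil_mask)
    # Convolving a stimulus with this PSF (e.g. via FFT multiplication) yields
    # the kind of retinal-image blur that observers judged and adapted to.
    print("peak of normalized PSF:", float(psf.max()))
```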

10.
Over the last decades, a standard model of the function of the hippocampus in memory formation has been established and tested computationally. It has been argued that the CA3 region works as an auto-associative memory and that its recurrent fibers are the actual storage site of the memories. Furthermore, to work properly CA3 requires memory patterns that are mutually uncorrelated. It has been suggested that the dentate gyrus orthogonalizes the patterns before storage, a process known as pattern separation. In this study we review the model when random input patterns are presented for storage and investigate whether it is capable of storing patterns of more realistic entorhinal grid cell input. Surprisingly, we find that an auto-associative CA3 network is redundant for random inputs up to moderate noise levels and is only beneficial at high noise levels. When grid cell input is presented, auto-association is even harmful for memory performance at all noise levels. Furthermore, we find that Hebbian learning in the dentate gyrus does not support its function as a pattern separator. These findings challenge the standard framework and support an alternative view in which the simpler EC-CA1-EC network is sufficient for memory storage.

11.

Background

The high-resolution X-ray imaging system employing a synchrotron radiation source, a thin scintillator, an optical lens and an advanced CCD camera can achieve a resolution in the range of tens of nanometers to sub-micrometer. With this advantage, it can effectively image tissues, cells and many other small samples, especially calcification in the vasculature or in the glomerulus. In general, the scintillator should be only a few micrometers thick, or even thinner, because its thickness strongly affects the resolution. However, it is difficult to make the scintillator so thin, and a thin scintillator also greatly reduces the efficiency of photon collection.

Methods

In this paper, we propose an approach that extends the depth of focus (DOF) to solve these problems. We first develop a set of equations by deriving the relationship between the high-resolution image generated by the scintillator and the blurred image degraded by the defect of focus, and then adopt projection onto convex sets (POCS) and a total-variation algorithm to solve these equations and recover the blurred image.

Results

By using a 20 μm thick unmatched scintillator in place of the 1 μm thick matched one, we simulated a high-resolution X-ray imaging system and obtained a degraded, blurred image. Using the proposed algorithm, we recovered the blurred image, and the experimental results showed that the algorithm performs well in recovering image blur caused by an unmatched scintillator thickness.

Conclusions

The proposed method is shown to be able to efficiently recover images degraded by the defect of focus. However, the quality of the recovered image, especially for low-contrast images, depends on the noise level of the degraded blurred image, so there is room for improvement, and a corresponding denoising algorithm is worth further study and discussion.
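A rough sketch of the restoration loop described in the Methods, alternating a data-consistency (POCS-style) correction, assuming the blur kernel introduced by the thick scintillator is known, with a total-variation regularization step, is given below. The Gaussian blur model, step sizes, and weights are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(x, sigma):
    return gaussian_filter(x, sigma)

def tv_gradient(x, eps=1e-8):
    """Gradient of a smoothed total-variation penalty."""
    gx = np.roll(x, -1, 1) - x
    gy = np.roll(x, -1, 0) - x
    mag = np.sqrt(gx**2 + gy**2 + eps)
    div = (gx / mag - np.roll(gx / mag, 1, 1)) + (gy / mag - np.roll(gy / mag, 1, 0))
    return -div

def pocs_tv_restore(blurred, sigma, n_iter=100, step=0.5, tv_weight=0.02):
    """Alternate a data-consistency correction with a TV regularization step."""
    x = blurred.copy()
    for _ in range(n_iter):
        residual = blurred - blur(x, sigma)
        x = x + step * blur(residual, sigma)       # move toward data consistency
        x = x - step * tv_weight * tv_gradient(x)  # total-variation smoothing
        x = np.clip(x, 0.0, 1.0)                   # intensity constraint set
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    sharp = np.clip(rng.random((96, 96)), 0, 1)
    degraded = blur(sharp, 2.0) + rng.normal(0, 0.005, sharp.shape)
    restored = pocs_tv_restore(degraded, sigma=2.0)
    print("MSE degraded:", round(float(np.mean((degraded - sharp) ** 2)), 5))
    print("MSE restored:", round(float(np.mean((restored - sharp) ** 2)), 5))
```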

12.
An in situ microscope (ISM) device is utilised in this study to monitor the hybridoma cell concentration in a stirred bioreactor. It generates images using pulsed illumination of the liquid broth synchronised with the camera frame acquisition to avoid blur from cell motion. Appropriate image processing isolates the sharp objects from the blurred ones that are far from the focal plane. As the image processing involves several parameters, this paper focuses on the robustness of the cell-counting results. This stage determines the applicability of the measuring device and has seldom been addressed in presentations of ISM devices. Calibration is then performed to estimate the cell concentration from the automated cell counts provided by the ISM. Flow cytometry and a hemacytometer chamber were used as reference analytical methods. These measurements and the output of the image processing allow a single calibration parameter to be estimated: the reference volume per image, equal to 1.08 × 10⁻⁶ mL. Under these conditions, the correlation coefficient between the reference and ISM data sets reaches 0.99. A saturation of the system during an ultrasonic-wave perfusion phase that deeply changes the culture conditions is observed and discussed. Principal component analysis (PCA) is used to carry out the robustness study and the ISM calibration step.
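Once the reference volume per image has been calibrated, the concentration estimate is a single division. The sketch below uses assumed example counts, not data from the study.

```python
# Concentration estimate from ISM counts: cells/mL = mean count per image
# divided by the calibrated reference volume per image (1.08e-6 mL).
REFERENCE_VOLUME_ML = 1.08e-6

def concentration_cells_per_ml(counts_per_image):
    mean_count = sum(counts_per_image) / len(counts_per_image)
    return mean_count / REFERENCE_VOLUME_ML

# Assumed example counts from a set of processed images (not data from the paper).
counts = [11, 9, 13, 10, 12]
print(f"estimated concentration: {concentration_cells_per_ml(counts):.2e} cells/mL")
```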

13.
Jensen et al. (Learn Memory 3(2–3):243–256, 1996b) proposed an auto-associative memory model using an integrated short-term memory (STM) and long-term memory (LTM) spiking neural network. Their model requires that distinct pyramidal cells encoding different STM patterns fire in different high-frequency gamma subcycles within each low-frequency theta oscillation. Auto-associative LTM is formed by modifying the recurrent synaptic efficacy between pyramidal cells. In order to store auto-associative LTM correctly, the recurrent synaptic efficacy must be bounded. It must be bounded above to prevent re-firing of pyramidal cells in subsequent gamma subcycles: if cells encoding one memory item were to re-fire synchronously with cells encoding another item in a subsequent gamma subcycle, the LTM stored via the modifiable recurrent synapses would be corrupted. It must also be bounded below so that memory pattern completion can be performed correctly. This paper uses the original model of Jensen et al. as the basis to illustrate three points: first, the importance of coordinated long-term memory (LTM) synaptic modification; second, the use of a generic mathematical formulation (the spiking response model) that can theoretically extend the results to other spiking networks using threshold-fire spiking neuron models; and third, the interaction of the long-term and short-term memory networks, which possibly explains the asymmetric distribution of spike density within the theta cycle through the merging of STM patterns under the interaction of the LTM network.

14.
Coral reefs are rich in fisheries and aquatic resources, and the study and monitoring of coral reef ecosystems are of great economic value and practical significance. Due to complex backgrounds and low-quality videos, it is challenging to identify coral reef fish. This study proposes an image enhancement approach for fish detection in complex underwater environments. The method first uses a Siamese network to obtain a saliency map and then multiplies this saliency map by the input image to construct an image enhancement module. Applying this module to the existing mainstream one-stage and two-stage target detection frameworks can significantly improve their detection accuracy. Good detection performance was achieved in a variety of scenarios, such as those with luminosity variations, aquatic plant movements, blurred images, large targets and multiple targets, demonstrating the robustness of the algorithm. The best performance was achieved on the LCF-15 dataset when combining the proposed method with the cascade region-based convolutional neural network (Cascade-RCNN). The average precision at an intersection-over-union (IoU) threshold of 0.5 (AP50) was 0.843, and the F1 score was 0.817, exceeding the best reported results on this dataset. This study provides an automated video analysis tool for marine-related researchers and technical support for downstream applications.
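The enhancement module reduces to an elementwise multiplication of the input frame by a saliency map before the frame is passed to the detector. In the sketch below the saliency map is a random stand-in (the paper derives it from a Siamese network), and the floor parameter is an assumption added to keep background context visible.

```python
import numpy as np

def enhance_with_saliency(frame, saliency, floor=0.3):
    """Elementwise-multiply the frame by a saliency map.

    `saliency` is assumed to be in [0, 1]; the floor keeps background context
    from being suppressed to zero before the detector sees the frame.
    """
    weights = floor + (1.0 - floor) * saliency
    return frame * weights[..., None]          # broadcast over RGB channels

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    frame = rng.random((240, 320, 3))          # stand-in underwater video frame
    saliency = rng.random((240, 320))          # stand-in for the network output
    enhanced = enhance_with_saliency(frame, saliency)
    # `enhanced` would then be fed to a one-stage or two-stage detector
    # (e.g. Cascade-RCNN in the study) in place of the raw frame.
    print(enhanced.shape, float(enhanced.max()) <= 1.0)
```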

15.
Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules, but degradation of traffic-sign images captured by computer vision is unavoidable while the vehicle is moving. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and the width of the border is then measured in all directions. From the measured widths and the corresponding directions, both the motion direction and the blur scale of the image can be determined, and this information is used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared with traditional restoration approaches based on blind deconvolution and the Lucy-Richardson method, our method restores motion-blurred images far more effectively and improves the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently.
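Once the blur length and direction have been estimated from the border widths, restoration amounts to deconvolving with the corresponding linear motion point spread function. The sketch below builds such a PSF and applies a simple regularized inverse filter; the estimated length and angle, and the use of this particular filter rather than the paper's spatial-domain procedure, are assumptions.

```python
import numpy as np

def motion_psf(shape, length, angle_deg):
    """Linear motion-blur PSF of a given length (pixels) and direction."""
    psf = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, length):
        y = int(round(cy + t * np.sin(theta)))
        x = int(round(cx + t * np.cos(theta)))
        psf[y % shape[0], x % shape[1]] = 1.0
    return np.fft.ifftshift(psf / psf.sum())

def restore(blurred, psf, k=1e-2):
    """Regularized inverse filtering with the estimated motion PSF."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    sign_image = rng.random((128, 128))                          # stand-in sign image
    psf = motion_psf(sign_image.shape, length=15, angle_deg=30)  # assumed estimates
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sign_image) * np.fft.fft2(psf)))
    restored = restore(blurred, psf)
    print("MSE:", round(float(np.mean((restored - sign_image) ** 2)), 5))
```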

16.
Image processing research based on visual coding: I. Principles, algorithms and implementation
We propose the concept of Gabor wavelet representation, together with a mathematical model and an algorithmic framework for implementing it in image processing. The main problems solved are the non-orthogonality and convergence issues in the computation, so that the algorithm and model can perform image analysis and reconstruction and can also be applied to the interpretation of visual coding theory.
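A rough sketch of a Gabor representation with the non-orthogonality handled by a frame-style inverse (dividing by the summed squared filter responses) is given below. The filter parameters, the low-pass residual channel, and the reconstruction formula are assumptions for illustration and are not the specific model proposed in the paper.

```python
import numpy as np

def gabor_bank_fft(shape, n_orient=6, n_scale=4):
    """Even-symmetric (cosine-phase) Gabor filters, built in the Fourier domain."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    filters = []
    for s in range(n_scale):
        f0 = 0.25 / (2 ** s)                      # centre frequency for this scale
        sigma = f0 / 2.0
        for o in range(n_orient):
            th = np.pi * o / n_orient
            u = fx * np.cos(th) + fy * np.sin(th)
            v = -fx * np.sin(th) + fy * np.cos(th)
            g = (np.exp(-((u - f0) ** 2 + v ** 2) / (2 * sigma ** 2))
                 + np.exp(-((u + f0) ** 2 + v ** 2) / (2 * sigma ** 2)))
            filters.append(g)
    # Low-pass residual so the bank also covers the lowest frequencies.
    filters.append(np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.02 ** 2)))
    return np.array(filters)

def analyze(image, bank):
    """Decompose the image into Gabor channel responses."""
    F = np.fft.fft2(image)
    return np.array([np.real(np.fft.ifft2(F * g)) for g in bank])

def reconstruct(coeffs, bank, eps=1e-6):
    """Frame-style inverse: dividing by the summed squared filter responses
    compensates for the overlap (non-orthogonality) of the Gabor channels."""
    num = np.sum([np.fft.fft2(c) * g for c, g in zip(coeffs, bank)], axis=0)
    den = np.sum(bank ** 2, axis=0) + eps
    return np.real(np.fft.ifft2(num / den))

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64] / 64.0
    img = np.sin(2 * np.pi * 4 * x) * np.cos(2 * np.pi * 3 * y)   # smooth test image
    bank = gabor_bank_fft(img.shape)
    rec = reconstruct(analyze(img, bank), bank)
    err = np.linalg.norm(rec - img) / np.linalg.norm(img)
    print("relative reconstruction error:", round(float(err), 4))
```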

17.
We propose the concept of Gabor wavelet representation, together with a mathematical model and an algorithmic framework for implementing it in image processing. The main problems solved are the non-orthogonality and convergence issues in the computation, so that the algorithm and model can perform image decomposition and reconstruction and can also be applied to the interpretation of visual coding theory.

18.
19.
The function of lateral inhibitory synapses between striatal projection neurons is currently poorly understood. This paper puts forward a model suggesting that inhibitory collaterals can be used to enhance the incoming cortical signals. In particular, we propose that lateral inhibition between projection neurons performs a signal-enhancing process that resembles the image processing technique of "unsharp masking", where a blurred copy is used to enhance and sharpen an input image. The paper also presents the results of computer simulations demonstrating that the proposed mechanism is compatible with known properties of striatal projection neurons and outperforms alternative models of lateral inhibition. Finally, this paper illustrates the advantages of the proposed model and discusses the relevance of these conclusions for existing computational models of the basal ganglia and their role in cognition.
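The image processing operation the model is compared to, unsharp masking, is easy to state in code: subtract a blurred copy of the input and add the scaled difference back. The sketch below shows that operation on a one-dimensional profile; the blur width and gain are assumed values, and this is not the striatal network model itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(signal, sigma=3.0, amount=1.0):
    """Classic unsharp masking: enhance the input using its own blurred copy.

    The blurred copy plays the role of the pooled lateral-inhibition signal;
    subtracting it and adding the scaled difference sharpens the input.
    """
    blurred = gaussian_filter(signal, sigma)
    return signal + amount * (signal - blurred)

if __name__ == "__main__":
    x = np.linspace(0, 1, 200)
    cortical_input = 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.05))   # a blurred "edge"
    enhanced = unsharp_mask(cortical_input, sigma=5.0, amount=1.5)
    # The enhanced profile has a steeper transition (with mild over/undershoot),
    # i.e. the edge in the input signal is sharpened.
    slope_in = float(np.max(np.abs(np.diff(cortical_input))))
    slope_out = float(np.max(np.abs(np.diff(enhanced))))
    print(f"max slope before: {slope_in:.4f}, after: {slope_out:.4f}")
```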

20.
The responses of “complex” simple cells to sharp and blurred ramp edges were studied. These responses are quite similar to those in the case of lines, which implies that phase information cannot be used to discriminate between ramp edges and lines. Furthermore, if the maximum of the modulus is used as a position estimate, a systematic bias toward the ramp side results, and this bias increases with edge blur. In contrast, a local extremum in the real part of the cell responses provides a precise position estimate, even for strongly blurred edges. Possible multiscale detection strategies are discussed in the context of a syntactical visual reconstruction. This is illustrated by an explanation of Mach bands as perceived at trapezoidal edges, including Ratliff’s Mach-band cancellation stimulus, and criteria for local probability summation in the prediction of Mach-band detection thresholds are presented. Received: 10 December 1992/Accepted in revised form: 6 August 1993
