Similar documents
20 similar documents retrieved (search time: 31 ms)
1.
Shape from texture (cited 4 times: 0 self-citations, 4 by others)
A central goal for visual perception is the recovery of the three-dimensional structure of the surfaces depicted in an image. Crucial information about three-dimensional structure is provided by the spatial distribution of surface markings, particularly for static monocular views: projection distorts texture geometry in a manner that depends systematically on surface shape and orientation. To isolate and measure this projective distortion in an image is to recover the three-dimensional structure of the textured surface. For natural textures, we show that the uniform-density assumption (texels are uniformly distributed) is enough to recover the orientation of a single textured plane in view, under perspective projection. Furthermore, when the texels cannot be found, the edges of the image are enough to determine shape, under a more general assumption: that the sum of the lengths of the contours on the world plane is about the same everywhere. Finally, several experimental results for synthetic and natural images are presented.
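A toy sketch of the uniform-density idea (not the authors' estimator): if texels are uniformly distributed on the world plane, projection makes their image density vary systematically, and the direction of the density gradient indicates the tilt direction of the plane. The density model and the synthetic texel generator below are illustrative assumptions.

```python
# Toy sketch: estimate the tilt direction of a textured plane from the
# gradient of texel density, under the uniform-density assumption.
import numpy as np

def tilt_direction_from_texels(points, bins=8):
    """points: (N, 2) texel image coordinates (x, y).
    Returns the unit image direction along which texel density increases."""
    counts, xe, ye = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    xc = 0.5 * (xe[:-1] + xe[1:])
    yc = 0.5 * (ye[:-1] + ye[1:])
    X, Y = np.meshgrid(xc, yc, indexing="ij")
    logd = np.log(counts + 1.0)                    # +1 avoids log(0) in empty bins
    # Least-squares plane fit: log-density ~ a*x + b*y + c.
    A = np.column_stack([X.ravel(), Y.ravel(), np.ones(X.size)])
    (a, b, _), *_ = np.linalg.lstsq(A, logd.ravel(), rcond=None)
    g = np.array([a, b])
    return g / (np.linalg.norm(g) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic texels whose density increases towards +y, as if the plane
    # recedes upwards in the image.
    y = rng.power(3.0, 4000)
    x = rng.uniform(0, 1, 4000)
    print(tilt_direction_from_texels(np.column_stack([x, y])))   # ~ [0, 1]
```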

2.
Color-to-Grayscale: Does the Method Matter in Image Recognition? (cited 2 times: 0 self-citations, 2 by others)
Kanan C, Cottrell GW. PLoS ONE. 2012;7(1):e29740.

3.
MOTIVATION: Physical mapping of chromosomes using the maximum likelihood (ML) model is a problem of high computational complexity, entailing both discrete optimization to recover the optimal probe order and continuous optimization to recover the optimal inter-probe spacings. In this paper, two versions of the genetic algorithm (GA) are proposed for the physical mapping problem under the maximum likelihood model: one with heuristic crossover and deterministic replacement, and the other with heuristic crossover and stochastic replacement. The genetic algorithms are compared with two other discrete optimization approaches, namely simulated annealing (SA) and large-step Markov chains (LSMC), in terms of solution quality and runtime efficiency. RESULTS: The physical mapping algorithms based on the GA, SA and LSMC have been tested using synthetic datasets and real datasets derived from cosmid libraries of the fungus Neurospora crassa. The GA, especially the version with heuristic crossover and stochastic replacement, is shown to consistently outperform the SA-based and LSMC-based physical mapping algorithms in terms of runtime and final solution quality. Experimental results on real and simulated datasets are presented. Further improvements to the GA in the context of physical mapping under the maximum likelihood model are proposed. AVAILABILITY: The software is available upon request from the first author.
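A minimal GA skeleton for ordering problems of this kind, as a sketch only: the ML likelihood is replaced by a placeholder tour-like objective, the paper's heuristic crossover by standard order crossover (OX), and "stochastic replacement" by a simple fitness-biased survivor scheme. None of these choices are the authors' specific operators.

```python
# Minimal genetic-algorithm sketch for permutation (probe-order) optimization.
import numpy as np

rng = np.random.default_rng(1)

def fitness(order, dist):
    """Placeholder objective: negative path cost over the probe order."""
    return -dist[order[:-1], order[1:]].sum()

def order_crossover(p1, p2):
    """Classic OX: copy a slice from p1, fill remaining positions in p2's order."""
    n = len(p1)
    i, j = sorted(rng.choice(n, 2, replace=False))
    child = -np.ones(n, dtype=int)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in set(p1[i:j])]
    child[[k for k in range(n) if child[k] < 0]] = fill
    return child

def ga(dist, pop_size=60, gens=200):
    n = dist.shape[0]
    pop = [rng.permutation(n) for _ in range(pop_size)]
    for _ in range(gens):
        fit = np.array([fitness(p, dist) for p in pop])
        probs = np.exp(fit - fit.max()); probs /= probs.sum()
        parents = [pop[i] for i in rng.choice(pop_size, 2 * pop_size, p=probs)]
        children = [order_crossover(parents[2 * k], parents[2 * k + 1])
                    for k in range(pop_size)]
        # Crude stochastic/elitist replacement: keep the fitter half, refill with children.
        keep = [pop[i] for i in np.argsort(fit)[-pop_size // 2:]]
        pop = keep + children[:pop_size - len(keep)]
    fit = np.array([fitness(p, dist) for p in pop])
    return pop[int(np.argmax(fit))]

if __name__ == "__main__":
    pts = rng.uniform(0, 1, (15, 2))
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(ga(dist))
```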

4.
Within the range of images that we might categorize as a “beach”, for example, some will be more representative of that category than others. Here we first confirmed that humans could categorize “good” exemplars better than “bad” exemplars of six scene categories and then explored whether brain regions previously implicated in natural scene categorization showed a similar sensitivity to how well an image exemplifies a category. In a behavioral experiment participants were more accurate and faster at categorizing good than bad exemplars of natural scenes. In an fMRI experiment participants passively viewed blocks of good or bad exemplars from the same six categories. A multi-voxel pattern classifier trained to discriminate among category blocks showed higher decoding accuracy for good than bad exemplars in the PPA, RSC and V1. This difference in decoding accuracy cannot be explained by differences in overall BOLD signal, as average BOLD activity was either equivalent or higher for bad than good scenes in these areas. These results provide further evidence that V1, RSC and the PPA not only contain information relevant for natural scene categorization, but their activity patterns mirror the fundamentally graded nature of human categories. Analysis of the image statistics of our good and bad exemplars shows that variability in low-level features and image structure is higher among bad than good exemplars. A simulation of our neuroimaging experiment suggests that such a difference in variance could account for the observed differences in decoding accuracy. These results are consistent with both low-level models of scene categorization and models that build categories around a prototype.
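A sketch of the decoding logic on synthetic data (not the study's pipeline or dataset): a linear classifier is trained on voxel patterns from six categories, and cross-validated decoding accuracy is compared between a low-variance ("good") and a high-variance ("bad") condition with matched means.

```python
# Synthetic illustration: higher within-category variance lowers decoding
# accuracy even when the category prototypes and mean activity are unchanged.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels, n_blocks, n_categories = 200, 20, 6
prototypes = rng.normal(0, 1, (n_categories, n_voxels))

def simulate(noise_sd):
    X = np.vstack([p + rng.normal(0, noise_sd, (n_blocks, n_voxels))
                   for p in prototypes])
    y = np.repeat(np.arange(n_categories), n_blocks)
    return X, y

for label, noise in [("good (low variance)", 1.0), ("bad (high variance)", 2.5)]:
    X, y = simulate(noise)
    acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
    print(f"{label}: decoding accuracy = {acc:.2f}")
```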

5.
Texture-discrimination algorithms have often been tested on images containing either mosaics of synthetic textures or artificially created mosaics of real textures – in any case, images in which most of the changes in intensity can be ascribed to the textures themselves. However, real images are not formed like this and may contain steep gradations in intensity which have nothing to do with local texture, such as those caused by incident shadows. A texture discrimination algorithm based on linear filters can fail in the presence of these strong gradations, as they may easily contain an order of magnitude more energy than the gradations in intensity due to texture in the image per se. In these cases, the mechanism may become responsive only to strong luminance effects, and not to texture. I have found that good performance on natural images containing texture can only be obtained from a filter-based texture detection scheme if it includes a stage which attempts to bring large intensity gradients within bounds. The exact nature of the best precompensator appears to depend somewhat on the way the filter outputs are processed. The fit to psychophysical data and the implications for more detailed models of human texture processing will be discussed.
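One plausible precompensation stage (an assumption for illustration, not necessarily the author's scheme) is local divisive normalization: subtract a local mean and divide by a local contrast estimate so that shadow-like luminance ramps no longer dominate the energy seen by the texture filters.

```python
# Sketch of a precompensator: local mean subtraction + divisive normalization.
import numpy as np
from scipy.ndimage import gaussian_filter

def precompensate(image, sigma=8.0, eps=1e-3):
    """Subtract the local mean and divide by the local standard deviation."""
    local_mean = gaussian_filter(image, sigma)
    centred = image - local_mean
    local_sd = np.sqrt(gaussian_filter(centred ** 2, sigma)) + eps
    return centred / local_sd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fine texture on the right half plus a strong shadow-like luminance ramp.
    img = rng.normal(0, 0.05, (128, 128))
    img[:, 64:] += rng.normal(0, 0.3, (128, 64))
    img += np.linspace(0, 5, 128)[None, :]
    out = precompensate(img)
    print(f"column means before: {img[:, 10].mean():.2f} vs {img[:, 120].mean():.2f}")
    print(f"column means after:  {out[:, 10].mean():.2f} vs {out[:, 120].mean():.2f}")
```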

6.
The ultimate goal of machine vision is image understanding: the ability not only to recover image structure but also to know what it represents. By definition, this involves the use of models which describe and label the expected structure of the world. Over the past decade, model-based vision has been applied successfully to images of man-made objects. It has proved much more difficult to develop model-based approaches to the interpretation of images of complex and variable structures such as faces or the internal organs of the human body (as visualized in medical images). In such cases it has been problematic even to recover image structure reliably, without a model to organize the often noisy and incomplete image evidence. The key problem is that of variability. To be useful, a model needs to be specific, that is, capable of representing only "legal" examples of the modelled object(s). It has proved difficult to achieve this whilst allowing for natural variability. Recent developments have overcome this problem; it has been shown that specific patterns of variability in shape and grey-level appearance can be captured by statistical models that can be used directly in image interpretation. The details of the approach are outlined and practical examples from medical image interpretation and face recognition are used to illustrate how previously intractable problems can now be tackled successfully. It is also interesting to ask whether these results provide any possible insights into natural vision; for example, we show that the apparent changes in shape which result from viewing three-dimensional objects from different viewpoints can be modelled quite well in two dimensions; this may lend some support to the "characteristic views" model of natural vision.
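A minimal sketch of the statistical-shape-model idea behind such approaches (a point distribution model; the training shapes below are synthetic and the 95% variance cutoff is an illustrative choice): PCA on aligned landmark shapes captures the main modes of variation, and new "legal" shapes are generated as the mean plus a bounded combination of those modes.

```python
# Point distribution model sketch: PCA on aligned landmark shapes.
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """shapes: (N, 2K) array of N aligned shapes, each with K (x, y) landmarks."""
    mean = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1)
    return mean, Vt[:n_modes], np.sqrt(var[:n_modes])

def synthesize(mean, modes, sd, b):
    """Generate a shape; b is clipped to +/- 3 sd so the result stays 'legal'."""
    b = np.clip(b, -3 * sd, 3 * sd)
    return mean + b @ modes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
    # Training set: ellipses whose elongation varies (one true mode of variation).
    shapes = []
    for a in rng.normal(size=50):
        pts = np.column_stack([(1 + 0.2 * a) * np.cos(t), (1 - 0.2 * a) * np.sin(t)])
        shapes.append(pts.ravel())
    mean, modes, sd = build_shape_model(np.array(shapes))
    print("modes kept:", len(modes))
    b = np.zeros(len(sd)); b[0] = 2 * sd[0]
    print("synthesized shape has", synthesize(mean, modes, sd, b).size // 2, "landmarks")
```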

7.
With the development of medical imaging modalities and image processing algorithms, there arises a need for methods of their comprehensive quantitative evaluation. In particular, this concerns the algorithms for vessel tracking and segmentation in magnetic resonance angiography images. The problem can be approached by using synthetic images, where the true geometry of vessels is known. This paper presents a framework for computer modeling of MRA imaging and the results of its validation. The new model incorporates blood flow simulation within the MR signal computation kernel. The proposed solution is unique, especially with respect to the interface between flow and image formation processes. Furthermore, it utilizes the concept of particle tracing. The particles reflect the flow of the fluid they are immersed in, and they are assigned magnetization vectors with temporal evolution controlled by MR physics. Such an approach ensures flexibility, as the designed simulator is able to reconstruct flow profiles of any type. The proposed model is validated in a series of experiments with physical and digital flow phantoms. The synthesized 3D images contain various features (including artifacts) characteristic of the time-of-flight protocol and exhibit remarkable correlation with data acquired in a real MR scanner. The obtained results support the primary goal of the conducted research, i.e. establishing a reference technique for a quantified validation of MR angiography image processing algorithms.
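A deliberately simplified 1-D illustration of the particle-tracing idea (not the simulator's MR kernel): particles are advected through an imaging slab while carrying a longitudinal magnetization that is flipped and relaxes once per TR, so fresh inflowing spins appear bright at the slab entrance, which is the time-of-flight enhancement mentioned above. The plug-flow geometry and parameter values are assumptions for the sketch.

```python
# 1-D particle-tracing sketch of time-of-flight inflow enhancement.
import numpy as np

M0, T1, TR, alpha = 1.0, 1.2, 0.03, np.deg2rad(25)   # seconds, radians
slab = (0.0, 0.05)                                    # imaging slab, metres
v, n, t_end = 0.2, 2000, 1.0                          # plug-flow velocity, particles, run time

rng = np.random.default_rng(0)
x = rng.uniform(-0.3, slab[1], n)                     # particle positions
Mz = np.full(n, M0)
bins = np.linspace(*slab, 61)
signal_sum = np.zeros(60)

for _ in range(int(t_end / TR)):
    x = x + v * TR                                    # advect particles downstream
    x[x > 0.3] -= 0.6                                 # recycle far-downstream spins
    Mz[x < slab[0]] = M0                              # inflowing spins are fully relaxed
    inside = (x >= slab[0]) & (x <= slab[1])
    # Spins inside the slab are excited every TR: signal ~ Mz*sin(alpha),
    # then Mz is reduced by cos(alpha) and recovers towards M0 during TR.
    sig = np.where(inside, Mz * np.sin(alpha), 0.0)
    Mz = np.where(inside, M0 + (Mz * np.cos(alpha) - M0) * np.exp(-TR / T1), Mz)
    signal_sum += np.histogram(x[inside], bins=bins, weights=sig[inside])[0]

print("entry-side / exit-side mean signal:",
      signal_sum[:10].mean() / signal_sum[-10:].mean())   # > 1: inflow enhancement
```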

8.
Fractal geometry in mosaic organs: a new interpretation of mosaic pattern (cited 2 times: 0 self-citations, 2 by others)
Iannaccone PM. FASEB J. 1990;4(5):1508-1512.
Fractal geometries have been widely observed in nature. The formulation of mathematical treatments of non-Euclidean geometry has generated models of highly complex natural phenomena. In the field of developmental biology, branching morphogenesis has been explained in terms of self-similar iterating branching rules that have done much toward explaining branch patterns observed in a range of real tissue. In solid viscera the problem is more complicated because there is no readily available marker of geometry in parenchymal tissue. Mosaic pattern provides such a marker. The patches observed in mosaic liver are shown to be fractal, indicating that the pattern may have arisen from a self-similar process (i.e., a process that creates an object in which small areas are representative of, although not necessarily identical to, the whole object). This observation offers a new analytical approach to the study of biologic structure in organogenesis.
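A standard way to test whether a binary patch pattern is fractal is box counting: the fractal dimension is the slope of log(occupied boxes) against log(1/box size). The sketch below uses synthetic masks (a filled disc and a Sierpinski carpet) rather than mosaic-liver data.

```python
# Box-counting estimate of fractal dimension for a binary mask.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())    # boxes containing any patch
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    y, x = np.mgrid[:512, :512]
    disc = (x - 256) ** 2 + (y - 256) ** 2 < 200 ** 2
    print("filled disc:", round(box_counting_dimension(disc), 2))          # ~2.0
    n = 3 ** 5
    i, j = np.mgrid[:n, :n]
    carpet = np.ones((n, n), dtype=bool)
    for _ in range(5):
        carpet &= ~((i % 3 == 1) & (j % 3 == 1))
        i, j = i // 3, j // 3
    print("Sierpinski carpet:", round(box_counting_dimension(carpet, sizes=(3, 9, 27, 81)), 2),
          "(theory: %.2f)" % (np.log(8) / np.log(3)))
```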

9.
Crystal unbending, the process that aims to recover a perfect crystal from experimental data, is one of the more important steps in electron crystallography image processing. The unbending process involves three steps: estimation of the unit cell displacements from their ideal positions, extension of the deformation field to the whole image, and transformation of the image in order to recover an ideal crystal. In this work, we present a systematic analysis of the second step, oriented to address two issues. First, whether the unit cells remain undistorted and only the distances between them change (rigid case), or whether the cells themselves undergo the same deformation as the whole crystal (elastic case). Second, the performance of different extension algorithms (interpolation versus approximation) is explored. Our experiments show that there is no difference between the elastic and rigid cases or among the extension algorithms. This implies that the deformation fields are constant over large areas. Furthermore, our results indicate that the main source of error is the transformation of the crystal image.
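A sketch of the "extension" step being compared (illustration only, not the paper's implementation): sparse unit-cell displacement vectors are extended to a dense deformation field either by exact interpolation or by a smoothing approximation that averages out measurement noise. The displacement model and smoothing value below are assumptions.

```python
# Extending sparse displacement measurements to a dense deformation field:
# exact interpolation vs. smoothed (approximating) fit.
import numpy as np
from scipy.interpolate import griddata, RBFInterpolator

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 256, (80, 2))                          # lattice-node positions
true_disp = np.column_stack([3 * np.sin(nodes[:, 0] / 40),
                             2 * np.cos(nodes[:, 1] / 40)])
measured = true_disp + rng.normal(0, 0.3, true_disp.shape)    # noisy measurements

gy, gx = np.mgrid[0:256:4, 0:256:4]
grid = np.column_stack([gx.ravel(), gy.ravel()])

# (a) Interpolation: the field passes exactly through the measured displacements.
ux = griddata(nodes, measured[:, 0], grid, method="cubic")
uy = griddata(nodes, measured[:, 1], grid, method="cubic")
interp = np.column_stack([ux, uy])

# (b) Approximation: a smoothed thin-plate-spline fit that tolerates noise.
approx = RBFInterpolator(nodes, measured, kernel="thin_plate_spline", smoothing=10.0)(grid)

valid = ~np.isnan(interp).any(axis=1)                         # cubic griddata is NaN outside the hull
rms = np.sqrt(((interp[valid] - approx[valid]) ** 2).mean())
print(f"RMS difference between interpolated and approximated fields: {rms:.3f}")
```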

10.
A fundamental problem in molecular biology is the determination of the conformation of macromolecules from NMR data. Several successful distance geometry programs have been developed for this purpose, for example DISGEO. A particularly difficult facet of these programs is the embedding problem, that is the problem of determining those conformations whose distances between atoms are nearest those measured by the NMR techniques. The embedding problem is the distance geometry equivalent of the multiple minima problem, which arises in energy minimization approaches to conformation determination. We show that the distance geometry approach has some nice geometry not associated with other methods that allows one to prove detailed results with regard to the location of local minima. We exploit this geometry to develop some algorithms which are faster and find more minima than the algorithms presently used.
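As a textbook point of reference for the embedding step (classical multidimensional scaling, not the paper's algorithms): when a complete, consistent matrix of pairwise distances is available, coordinates can be recovered from the top eigenvectors of the double-centred Gram matrix.

```python
# Classical MDS: embed points from a matrix of pairwise distances.
import numpy as np

def classical_mds(D, dim=3):
    """D: (N, N) pairwise distances; returns (N, dim) coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                 # double-centred Gram matrix
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]             # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 3))                # a "true" conformation
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Y = classical_mds(D)
    D_rec = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    print("max distance error:", np.abs(D - D_rec).max())   # ~0 for exact, noise-free data
```

With noisy or incomplete NMR distances there is no exact embedding, which is where the multiple-minima behaviour discussed in the abstract arises.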

11.
Organisms inhabiting river systems contend with downstream-biased flow in a complex tree-like network. Differential equation models are often used to study population persistence, thus suggesting resolutions of the ‘drift paradox’, by considering the dependence of persistence on such variables as advection rate, dispersal characteristics, and domain size. Most previous models that explicitly considered network geometry artificially discretized river habitat into distinct patches. With the recent exception of Ramirez (J Math Biol 65:919–942, 2012), partial differential equation models have largely ignored the global geometry of river systems and the effects of tributary junctions by using intervals to describe the spatial domain. Taking advantage of recent developments in the analysis of eigenvalue problems on quantum graphs, we use a reaction–diffusion–advection equation on a metric tree graph to analyze persistence of a single population in terms of dispersal parameters and network geometry. The metric graph represents a continuous network where edges represent actual domain rather than connections among patches. Here, network geometry usually has a significant impact on persistence, and occasionally leads to dramatically altered predictions. This work ranges over such themes as model definition, reduction to a diffusion equation with the associated model features, numerical and analytical studies in radially symmetric geometries, and theoretical results for general domains. Notable in the model assumptions is that the zero-flux interior junction conditions are not restricted to conservation of hydrological discharge.
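A single-edge caricature of the persistence question (an interval rather than a metric tree, with simplified boundary conditions, so this is not the paper's model): discretize the linearized reaction–diffusion–advection operator and check the sign of its principal eigenvalue; a positive value indicates persistence, a negative one washout.

```python
# Principal eigenvalue of u_t = d u_xx - a u_x + r u on (0, L),
# with a reflecting upstream boundary and a hostile (u = 0) downstream boundary.
import numpy as np

def principal_eigenvalue(d=1.0, a=0.5, r=1.0, L=5.0, n=400):
    h = L / (n + 1)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2 * d / h**2 + r
        if i > 0:
            A[i, i - 1] = d / h**2 + a / (2 * h)
        if i < n - 1:
            A[i, i + 1] = d / h**2 - a / (2 * h)
    # Reflecting upstream boundary (u_x = 0): fold the ghost node into row 0.
    A[0, 0] += d / h**2 + a / (2 * h)
    # Hostile downstream boundary (u = 0): nothing to add in the last row.
    return np.linalg.eigvals(A).real.max()

if __name__ == "__main__":
    for a in (0.5, 3.0):
        lam = principal_eigenvalue(a=a)
        print(f"advection a={a}: principal eigenvalue {lam:+.3f} ->",
              "persists" if lam > 0 else "washes out")
```

On a metric tree the same eigenvalue problem is posed edge by edge with continuity and flux conditions at the junctions, which is where the network geometry enters.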

12.
Mass spectrometry has become one of the most popular analysis techniques in Proteomics and Systems Biology. With the creation of larger datasets, the automated recalibration of mass spectra becomes important to ensure that every peak in the sample spectrum is correctly assigned to some peptide and protein. Algorithms for recalibrating mass spectra have to be robust with respect to wrongly assigned peaks, as well as efficient due to the amount of mass spectrometry data. The recalibration of mass spectra leads us to the problem of finding an optimal matching between mass spectra under measurement errors. We have developed two deterministic methods that allow robust computation of such a matching: the first approach uses a computational geometry interpretation of the problem, and tries to find two parallel lines with constant distance that stab a maximal number of points in the plane. The second approach is based on finding a maximal common approximate subsequence, and improves existing algorithms by one order of magnitude by exploiting the sequential nature of the matching problem. We compare our results to a computational geometry algorithm using a topological line-sweep.
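The sequence-based idea can be sketched with the plain O(nm) dynamic program (not the accelerated algorithm described above): find the longest common subsequence of two sorted peak lists where peaks are allowed to match if their masses agree within a tolerance.

```python
# Longest common approximate subsequence between two peak (mass) lists.
def approx_common_subsequence(a, b, tol=0.5):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(a[i - 1] - b[j - 1]) <= tol:       # peaks match within tolerance
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

if __name__ == "__main__":
    reference = [100.0, 250.5, 499.8, 750.2, 1200.1]
    measured  = [100.2, 251.0, 330.0, 750.0, 1199.9]  # one spurious peak, small mass shifts
    print("matched peaks:", approx_common_subsequence(reference, measured))   # 4
```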

13.
Chemical reaction–diffusion is a basic component of morphogenesis, and can be used to obtain interesting and unconventional self-organizing algorithms for swarms of autonomous agents, using natural or artificial chemistries. However, the performance of these algorithms in the face of disruptions has not been sufficiently studied. In this paper we evaluate the use of reaction–diffusion for the morphogenetic engineering of distributed coordination algorithms, in particular, cluster head election in a distributed computer system. We consider variants of reaction–diffusion systems around the activator–inhibitor model, able to produce spot patterns. We evaluate the robustness of these models against the deletion of activator peaks that signal the location of cluster heads, and the destruction of large patches of chemicals. Three models are selected for evaluation: the Gierer–Meinhardt Activator–Inhibitor model, the Activator–Substrate Depleted model, and the Gray–Scott model. Our results reveal a trade-off between these models. The Gierer–Meinhardt model is more stable, with rare failures, but is slower to recover from disruptions. The Gray–Scott model is able to recover more quickly, at the expense of more frequent failures. The Activator–Substrate model lies somewhere in the middle.
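A minimal Gierer–Meinhardt activator–inhibitor sketch (explicit Euler on a periodic 2-D grid): the parameters below are chosen purely so that a Turing (spot) instability develops and are not the values used in the paper; activator peaks play the role of cluster heads. The run takes a few seconds.

```python
# Gierer-Meinhardt activator-inhibitor spots via explicit Euler time stepping.
import numpy as np

def laplacian(f, dx):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

def gierer_meinhardt(n=64, dx=0.5, dt=0.005, steps=40000,
                     Da=0.2, Dh=10.0, mu_a=1.0, mu_h=1.2, seed=0):
    rng = np.random.default_rng(seed)
    a_star, h_star = mu_h / mu_a, mu_h / mu_a**2          # homogeneous steady state
    a = a_star + 0.01 * rng.normal(size=(n, n))            # small random perturbation
    h = h_star + 0.01 * rng.normal(size=(n, n))
    for _ in range(steps):
        a = a + dt * (Da * laplacian(a, dx) + a**2 / h - mu_a * a)
        h = h + dt * (Dh * laplacian(h, dx) + a**2 - mu_h * h)
    return a, h

if __name__ == "__main__":
    a, _ = gierer_meinhardt()
    peaks = a > 2 * a.mean()          # crude "cluster head" detection: activator spots
    print("fraction of grid covered by activator peaks: %.2f" % peaks.mean())
```

Deleting a peak (zeroing the activator in a patch) and continuing the integration is the kind of disruption whose recovery time the paper compares across the three models.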

14.
Image registration, the process of optimally aligning homologous structures in multiple images, has recently been demonstrated to support automated pixel-level analysis of pedobarographic images and, subsequently, to extract unique and biomechanically relevant information from plantar pressure data. Recent registration methods have focused on robustness, with slow but globally powerful algorithms. In this paper, we present an alternative registration approach that affords both speed and accuracy, with the goal of making pedobarographic image registration more practical for near-real-time laboratory and clinical applications. The current algorithm first extracts centroid-based curvature trajectories from pressure image contours, and then optimally matches these curvature profiles using optimization based on dynamic programming. Special cases of disconnected images (that occur in high-arched subjects, for example) are dealt with by introducing an artificial spatially linear bridge between adjacent image clusters. Two registration algorithms were developed: a ‘geometric’ algorithm, which exclusively matched geometry, and a ‘hybrid’ algorithm, which performed subsequent pseudo-optimization. After testing the two algorithms on 30 control image pairs considered in a previous study, we found that, when compared with previously published results, the hybrid algorithm improved overlap ratio (p=0.010), but both current algorithms had slightly higher mean-squared error, presumably because they did not consider pixel intensity. Nonetheless, both algorithms greatly improved the computational efficiency (25±8 and 53±9 ms per image pair for geometric and hybrid registrations, respectively). These results imply that registration-based pixel-level pressure image analyses can, eventually, be implemented for practical clinical purposes.
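The matching step can be illustrated with a plain dynamic-programming alignment (DTW) of two curvature profiles; this sketch ignores the cyclic nature of closed contours and the subsequent pseudo-optimization, so it is only a stand-in for the published algorithm, and the sinusoidal "curvature profiles" are synthetic.

```python
# Dynamic-programming (DTW) alignment of two 1-D curvature profiles.
import numpy as np

def dtw(p, q):
    """Return the DTW cost and warping path between profiles p and q."""
    n, m = len(p), len(q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - q[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the correspondence used to drive the registration.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[n, m], path[::-1]

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 80)
    p = np.sin(t) + 0.3 * np.sin(3 * t)                 # curvature profile of contour 1
    q = np.sin(t + 0.2) + 0.3 * np.sin(3 * t + 0.6)     # slightly deformed contour 2
    cost, path = dtw(p, q)
    print("alignment cost %.2f over %d correspondences" % (cost, len(path)))
```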

15.
16.
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF), and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data.
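A toy sketch of the core idea, not the Harmony implementation: restrict the source estimate to a low-dimensional smooth spatial basis B and solve a regularized least-squares problem for its coefficients, instead of estimating every dipole amplitude independently. The lead field, the cosine basis (standing in for spherical harmonics on a cortical surface), and the regularization values are all assumptions.

```python
# Reduced smooth-basis inverse vs. per-dipole ridge (minimum-norm-style) inverse.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_basis = 64, 1000, 25

G = rng.normal(size=(n_sensors, n_sources))            # stand-in lead field
x = np.linspace(0, 1, n_sources)
B = np.column_stack([np.cos(np.pi * k * x) for k in range(n_basis)])   # smooth basis

s_true = np.exp(-((x - 0.3) ** 2) / 0.002)             # one smooth source patch
y = G @ s_true + 0.5 * rng.normal(size=n_sensors)      # noisy measurements

def ridge(A, y, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

s_dipole = ridge(G, y, lam=1e3)                        # per-dipole regularized estimate
s_smooth = B @ ridge(G @ B, y, lam=1e1)                # reduced smooth-basis estimate

for name, s in [("per-dipole", s_dipole), ("smooth basis", s_smooth)]:
    err = np.linalg.norm(s - s_true) / np.linalg.norm(s_true)
    print(f"{name}: relative reconstruction error {err:.2f}")
```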

17.
The problem of the kinetic depth effect is revisited. We study how many points in how many views are necessary and sufficient to recover structure. The constraints in the cases where the velocities of the image points are known, and the positions of the image points are known with the correspondence between them established, are different and they have to be studied separately. In the case of two projections of any number of points there are infinitely many solutions, but if we regularize the problem we get a unique solution under some assumptions. Finally, an algorithm is discussed for learning this particular kind of regularization.
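For comparison, a related classical sketch (Tomasi–Kanade factorization, explicitly not the regularization scheme discussed above): with several orthographic views of enough corresponded points, the centred measurement matrix has rank 3, and an SVD recovers the structure up to an affine ambiguity.

```python
# Rank-3 factorization of a multi-view measurement matrix (orthographic case).
import numpy as np

rng = np.random.default_rng(0)
P, F = 30, 6                                   # points, frames
X = rng.normal(size=(3, P))                    # 3-D structure

rows = []
for _ in range(F):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    rows.append(Q[:2] @ X)                     # two orthonormal camera rows per frame
W = np.vstack(rows)                            # (2F) x P measurement matrix
W = W - W.mean(axis=1, keepdims=True)          # centre each image row

U, s, Vt = np.linalg.svd(W, full_matrices=False)
print("singular values:", np.round(s[:5], 3))  # only three are (numerically) non-zero
structure_affine = np.diag(s[:3]) @ Vt[:3]     # 3 x P structure, up to an affine transform
print("recovered structure shape:", structure_affine.shape)
```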

18.
Cellular force generation and force transmission are of fundamental importance for numerous biological processes and can be studied with the methods of Traction Force Microscopy (TFM) and Monolayer Stress Microscopy. Traction Force Microscopy and Monolayer Stress Microscopy solve the inverse problem of reconstructing cell-matrix tractions and inter- and intra-cellular stresses from the measured cell force-induced deformations of an adhesive substrate with known elasticity. Although several laboratories have developed software for Traction Force Microscopy and Monolayer Stress Microscopy computations, there is currently no software package available that allows non-expert users to perform a full evaluation of such experiments. Here we present pyTFM, a tool to perform Traction Force Microscopy and Monolayer Stress Microscopy on cell patches and cell layers grown in a 2-dimensional environment. pyTFM was optimized for ease-of-use; it is open-source and well documented (hosted at https://pytfm.readthedocs.io/) including usage examples and explanations of the theoretical background. pyTFM can be used as a standalone Python package or as an add-on to the image annotation tool ClickPoints. In combination with the ClickPoints environment, pyTFM allows the user to set all necessary analysis parameters, select regions of interest, examine the input data and intermediary results, and calculate a wide range of parameters describing forces, stresses, and their distribution. In this work, we also thoroughly analyze the accuracy and performance of the Traction Force Microscopy and Monolayer Stress Microscopy algorithms of pyTFM using synthetic and experimental data from epithelial cell patches.

19.
Tissue depolarization and linear retardance are the main polarization characteristics of interest for bulk tissue characterization, and are normally interpreted from Mueller polarimetry. Stokes polarimetry can be conducted using simpler instrumentation and in a shorter time. Here, we use Stokes polarimetric imaging with circularly polarized illumination to assess the circular-depolarization and linear-retardance properties of tissue. The results obtained were compared with Mueller polarimetry in transmission and reflection geometry, respectively. It is found that circular depolarization obtained from these two methods is very similar in both geometries, and that linear retardance is highly quantitatively similar for transmission geometry and qualitatively similar for reflection geometry. The majority of tissue circular-depolarization and linear-retardance image information (represented by local image contrast features) obtained from Mueller polarimetry is well preserved by Stokes polarimetry in both geometries. These findings can inform the interpretation of tissue Stokes polarimetric data, and support the use of Stokes polarimetry where short acquisition time or low optical system complexity is a priority, such as in polarimetric endoscopy and microscopy.
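An idealized sketch of how these quantities follow from a single Stokes measurement (an assumption for illustration, not the paper's exact processing): if the sample is modeled as a depolarizer followed by a pure linear retarder, then for circularly polarized input the measured Stokes vector (I, Q, U, V) directly yields the residual degree of polarization (hence circular depolarization) and the retardance.

```python
# Circular depolarization and linear retardance from a single Stokes vector,
# under a depolarizer + pure linear retarder model with circular illumination.
import numpy as np

def stokes_to_depolarization_and_retardance(I, Q, U, V):
    dop = np.sqrt(Q**2 + U**2 + V**2) / I                 # degree of polarization
    circ_depol = 1.0 - dop                                # depolarization of the circular input
    retardance = np.arctan2(np.sqrt(Q**2 + U**2), V)      # radians, pure-retarder model
    return circ_depol, retardance

if __name__ == "__main__":
    # Simulate: circular input, 20% depolarization, 0.6 rad retardance, fast axis at 30 deg.
    delta, theta, p = 0.6, np.deg2rad(30), 0.8
    Q = -p * np.sin(2 * theta) * np.sin(delta)
    U =  p * np.cos(2 * theta) * np.sin(delta)
    V =  p * np.cos(delta)
    d, r = stokes_to_depolarization_and_retardance(1.0, Q, U, V)
    print(f"circular depolarization = {d:.2f}, retardance = {r:.2f} rad")   # 0.20, 0.60
```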

20.
Sorting by reciprocal translocations via reversals theory (cited once: 0 self-citations, 1 by others)
The understanding of genome rearrangements is an important endeavor in comparative genomics. A major computational problem in this field is finding a shortest sequence of genome rearrangements that transforms, or sorts, one genome into another. In this paper we focus on sorting a multi-chromosomal genome by translocations. We reveal new relationships between this problem and the well-studied problem of sorting by reversals. Based on these relationships, we develop two new algorithms for sorting by reciprocal translocations, which mimic known algorithms for sorting by reversals: a score-based method building on Bergeron's algorithm, and a recursive procedure similar to the Berman-Hannenhalli method. Though their proofs are more involved, our procedures for reciprocal translocations match the complexities of the original ones for reversals.
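To make the notion of "sorting one genome into another by rearrangements" concrete, here is a naive selection-sort-style procedure that sorts an unsigned permutation by reversals; it generally does not produce a shortest sequence and is not one of the paper's translocation algorithms.

```python
# Naive (non-optimal) sorting of an unsigned permutation by reversals.
def naive_sort_by_reversals(perm):
    perm = list(perm)
    reversals = []
    for i in range(len(perm)):
        j = perm.index(i + 1)                 # locate the gene that belongs at position i
        if j != i:
            perm[i:j + 1] = reversed(perm[i:j + 1])
            reversals.append((i, j))          # each (i, j) reverses the segment i..j
    return perm, reversals

if __name__ == "__main__":
    genome = [3, 1, 5, 2, 4]
    sorted_genome, steps = naive_sort_by_reversals(genome)
    print("reversals applied:", steps)
    print("result:", sorted_genome)           # [1, 2, 3, 4, 5]
```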
