20 similar documents found.
1.
Hartley HO. Biometrika. 1948;35(Pts 1-2):32-45.
2.
Susko E. Systematic Biology. 2011;60(5):668-675.
Generalized least squares (GLS) methods provide a relatively fast means of constructing a confidence set of topologies. Because they utilize information about the covariances between distances, it is reasonable to expect additional efficiency in estimation and confidence set construction relative to other least squares (LS) methods. Difficulties have been found to arise in a number of practical settings due to estimates of covariance matrices being ill conditioned or even noninvertible. We present here new ways of estimating the covariance matrices for distances that are much more likely to be positive definite, as the actual covariance matrices are. A thorough investigation of performance is also conducted. An alternative to GLS that has been proposed for constructing confidence sets of topologies is weighted least squares (WLS). As currently implemented, this approach is equivalent to the use of GLS but with covariances set to zero rather than being estimated. In effect, this approach assumes normality of the estimated distances and zero covariances. As the results here illustrate, this assumption leads to poor performance. A 95% confidence set is almost certain to contain the true topology but will contain many more topologies than are needed. On the other hand, the results here also indicate that, among LS methods, WLS performs quite well at estimating the correct topology. It turns out to be possible to improve the performance of WLS for confidence set construction through a relatively inexpensive normal parametric bootstrap that utilizes the same variances and covariances of GLS. The resulting procedure is shown to perform at least as well as GLS and thus provides a reasonable alternative in cases where covariance matrices are ill conditioned.
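The two fit statistics contrasted above differ only in whether the off-diagonal covariances of the distance estimates are used. A minimal numpy sketch of that difference, using made-up toy distances, tree path lengths, and a positive-definite covariance matrix (none of these values come from the paper):

```python
import numpy as np

def gls_statistic(d_obs, d_tree, V):
    """Generalized least-squares fit: (d - p)' V^{-1} (d - p),
    where V is the estimated covariance matrix of the distances."""
    r = d_obs - d_tree
    return float(r @ np.linalg.solve(V, r))

def wls_statistic(d_obs, d_tree, V):
    """Weighted least squares: the same quantity with the off-diagonal
    covariances set to zero, i.e. sum_i (d_i - p_i)^2 / V_ii."""
    r = d_obs - d_tree
    return float(np.sum(r**2 / np.diag(V)))

# Toy example: four pairwise distances, hypothetical values.
d_obs  = np.array([0.31, 0.42, 0.55, 0.47])   # estimated distances
d_tree = np.array([0.30, 0.40, 0.52, 0.50])   # path lengths implied by a candidate topology
V = 0.001 * (np.eye(4) + 0.3 * np.ones((4, 4)))  # positive-definite covariance estimate

print(gls_statistic(d_obs, d_tree, V), wls_statistic(d_obs, d_tree, V))
```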
3.
Albright TD. Current Biology. 1991;1(6):391-393.
4.
Background
Vision provides the most salient information about stimulus motion. However, it has recently been demonstrated that static visual stimuli are perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can directly contribute to visual motion perception.
Methodology/Principal Findings
Static visual flashes were presented at retinal locations outside the fovea together with lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move with the auditory motion when their spatiotemporal position lay in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display in which localized motion signals from multiple visual stimuli were combined to produce a coherent motion percept.
Conclusions/Significance
These findings suggest that there are direct interactions between auditory and visual motion signals, and that there may be common neural substrates for auditory and visual motion processing.
5.
6.
Scott-Samuel NE, Georgeson MA. Proceedings of the Royal Society of London, Series B: Biological Sciences. 1999;266(1435):2289-2294.
We examined the role of feature matching in motion perception. The stimulus sequence was constructed from a vertical, 1 cycle deg⁻¹ sinusoidal grating divided into horizontal strips of equal height, where alternate strips moved leftward and rightward. The initial relative phase of adjacent strips was either 0° (aligned) or 90° (non-aligned) and the motion was sampled in 90° phase steps. A blank interstimulus interval (ISI) of 0-117 ms was introduced between each 33 ms presentation of the stimulus frames. The observers had to identify the direction of motion of the central strip. Motion was perceived correctly at short ISIs, but at longer ISIs performance was much better for the non-aligned sequence than the aligned sequence. This difference in performance may reflect a role for feature correspondence and grouping of features in motion perception at longer ISIs. In the aligned sequence half the frames consisted of a single coherent vertical grating, while the interleaved frames contained short strips. We argue that to achieve feature matching over time, the long edge and bar features must be broken up perceptually (segmented) into shorter elements before these short segments can appear to move in opposite directions. This idea correctly predicted that overlaying narrow, stationary, black horizontal lines at the junctions of the grating strips would improve performance in the aligned condition. The results support the view that, in addition to motion energy, feature analysis and feature tracking play an important role in motion perception.
7.
Feature-tracking explanations of 2D motion perception are fundamentally distinct from motion-energy, correlation, and gradient explanations, all of which can be implemented by applying spatiotemporal filters to raw image data. Filter-based explanations usually suffer from the aperture problem, but 2D motion predictions for moving plaids have been derived from the intersection of constraints (IOC) imposed by the outputs of such filters, and from the vector sum of signals generated by such filters. In most previous experiments, feature-tracking and IOC predictions are indistinguishable. By constructing plaids in apparent motion from missing-fundamental gratings, we set feature-tracking predictions in opposition to both IOC and vector-sum predictions. The perceived directions that result are inconsistent with feature tracking. Furthermore, we show that increasing size and spatial frequency in Type 2 missing-fundamental plaids drives perceived direction from vector-sum toward IOC directions. This reproduces results that have been used to support feature-tracking, but under experimental conditions that rule it out. We discuss our data in the context of a Bayesian model with a gradient-based likelihood and a prior favoring slow speeds. We conclude that filter-based explanations alone can explain both veridical and non-veridical 2D motion perception in such stimuli.
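For readers unfamiliar with the two filter-based predictions contrasted above, here is a small sketch computing the intersection-of-constraints and vector-sum directions for a hypothetical Type 2 plaid; the component angles and speeds are invented for illustration and are not taken from the experiments:

```python
import numpy as np

def ioc_velocity(normals, speeds):
    """Intersection of constraints: the 2D velocity V with V . n_i = s_i
    for every component grating (n_i = unit normal, s_i = normal speed)."""
    N = np.asarray(normals, dtype=float)          # one unit normal per row
    s = np.asarray(speeds, dtype=float)
    return np.linalg.solve(N, s)

def vector_sum_velocity(normals, speeds):
    """Vector-sum prediction: add the component normal-velocity vectors."""
    N = np.asarray(normals, dtype=float)
    s = np.asarray(speeds, dtype=float)
    return (N * s[:, None]).sum(axis=0)

# Hypothetical Type 2 plaid: both component normals lie on the same side
# of the pattern (IOC) direction, so IOC and vector sum disagree.
normals = [[np.cos(np.deg2rad(10)), np.sin(np.deg2rad(10))],
           [np.cos(np.deg2rad(40)), np.sin(np.deg2rad(40))]]
speeds  = [1.0, 1.5]

for name, v in [("IOC", ioc_velocity(normals, speeds)),
                ("vector sum", vector_sum_velocity(normals, speeds))]:
    print(name, np.rad2deg(np.arctan2(v[1], v[0])))   # ~62 deg vs ~28 deg
```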
8.
A computational approach to motion perception
In this paper it is shown that the computation of the optical flow from a sequence of time-varying images is not, in general, an underconstrained problem. A local algorithm for the computation of the optical flow which uses second-order derivatives of the image brightness pattern, and that avoids the aperture problem, is presented. The obtained optical flow is very similar to the true motion field — which is the vector field associated with moving features on the image plane — and can be used to recover 3D motion information. Experimental results on sequences of real images, together with estimates of relevant motion parameters, like time-to-crash for translation and angular velocity for rotation, are presented and discussed. Due to the remarkable accuracy which can be achieved in estimating motion parameters, the proposed method is likely to be very useful in a number of computer vision applications.
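A rough numpy sketch of the general idea described above: at each pixel, solve the pair of equations obtained by differentiating the brightness-constancy constraint, which uses second-order image derivatives and sidesteps the aperture problem wherever the system is well conditioned. This is an illustrative reconstruction under those assumptions, not the paper's exact algorithm:

```python
import numpy as np

def second_order_flow(frames, eps=1e-6):
    """Estimate optical flow (u, v) per pixel by solving the 2x2 system
    obtained from differentiating the brightness-constancy equation:
        I_xx u + I_xy v = -I_xt
        I_xy u + I_yy v = -I_yt
    `frames` is a (T, H, W) array of grey-level images."""
    I = np.asarray(frames, dtype=float)
    It, Iy, Ix = np.gradient(I)                  # derivatives along t, y, x
    Ixx = np.gradient(Ix, axis=2)
    Ixy = np.gradient(Ix, axis=1)
    Iyy = np.gradient(Iy, axis=1)
    Ixt = np.gradient(Ix, axis=0)
    Iyt = np.gradient(Iy, axis=0)

    det = Ixx * Iyy - Ixy ** 2
    det = np.where(np.abs(det) < eps, np.nan, det)   # skip ill-conditioned points
    u = (-Ixt * Iyy + Iyt * Ixy) / det
    v = (-Iyt * Ixx + Ixt * Ixy) / det
    return u, v
```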
9.
Snowden RJ. Current Opinion in Neurobiology. 1992;2(2):175-179.
Recent developments have led to a greater insight into the complex processes of perception of visual motion. A better understanding of the neuronal circuitry involved and advances in electrophysiological techniques have allowed researchers to alter the perception of an animal with a stimulating electrode. In addition, studies have further elucidated the processes by which signals are combined and compared, allowing a greater understanding of the effects of selective brain damage.
10.
11.
Doerschner K, Fleming RW, Yilmaz O, Schrater PR, Hartung B, Kersten D. Current Biology. 2011;21(23):2010-2016.
Many critical perceptual judgments, from telling whether fruit is ripe to determining whether the ground is slippery, involve estimating the material properties of surfaces. Very little is known about how the brain recognizes materials, even though the problem is likely as important for survival as navigating or recognizing objects. Though previous research has focused nearly exclusively on the properties of static images, recent evidence suggests that motion may affect the appearance of surface material. However, what kind of information motion conveys and how this information may be used by the brain is still unknown. Here, we identify three motion cues that the brain could rely on to distinguish between matte and shiny surfaces. We show that these motion measurements can override static cues, leading to dramatic changes in perceived material depending on the image motion characteristics. A classifier algorithm based on these cues correctly predicts both successes and some striking failures of human material perception. Together these results reveal a previously unknown use for optic flow in the perception of surface material properties.
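The abstract does not specify the three cues or the classifier, so the following is only a hedged illustration of the general recipe it describes (per-movie motion features plus an off-the-shelf linear classifier); the feature values, their meanings, and the synthetic labels are all hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-movie motion features standing in for the paper's three
# cues (the real cue definitions are not given here), e.g. statistics of
# the optic-flow field computed for each movie.
rng = np.random.default_rng(0)
n = 40
shiny_feats = rng.normal([0.8, 0.6, 0.7], 0.1, size=(n, 3))
matte_feats = rng.normal([0.3, 0.4, 0.5], 0.1, size=(n, 3))

X = np.vstack([shiny_feats, matte_feats])
y = np.array([1] * n + [0] * n)               # 1 = shiny, 0 = matte

clf = LogisticRegression().fit(X, y)
print(clf.predict(rng.normal([0.75, 0.55, 0.65], 0.1, size=(1, 3))))  # likely predicts 1 (shiny)
```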
12.
Fleishman LJ, Pallus AC. Proceedings of the Royal Society B: Biological Sciences. 2010;277(1700):3547-3554.
Anolis lizards communicate with displays consisting of motion of the head and body. Early portions of long-distance displays require movements that are effective at eliciting the attention of potential receivers. We studied signal-motion efficacy using a two-dimensional visual-motion detection (2DMD) model consisting of a grid of correlation-type elementary motion detectors. This 2DMD model has been shown to accurately predict Anolis lizard behavioural response. We tested different patterns of artificially generated motion and found that an abrupt 0.3° shift of position in less than 100 ms is optimal. We quantified motion in displays of 25 individuals from five species. Four species employ near-optimal movement patterns. We tested displays of these species using the 2DMD model on scenes with and without moderate wind. Display movements can easily be detected, even in the presence of windblown vegetation. The fifth species does not typically use the most effective display movements and display movements cannot be discerned by the 2DMD model in the presence of windblown vegetation. A number of Anolis species use abrupt up-and-down head movements approximately 10 mm in amplitude in displays, and these movements appear to be extremely effective for stimulating the receiver visual system.
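As a rough illustration of what a grid of correlation-type elementary motion detectors computes, here is a generic Reichardt-style array along the horizontal axis; it is not the calibrated 2DMD model used in the study:

```python
import numpy as np

def reichardt_grid(frames, delay=1, span=1):
    """Correlation-type elementary motion detectors on a pixel grid:
    each detector multiplies a delayed sample from one location with the
    current sample from a neighbouring location and subtracts the mirror
    term. Positive output = rightward, negative = leftward.
    `frames` is a (T, H, W) array; returns a (T-delay, H, W-span) map."""
    I = np.asarray(frames, dtype=float)
    a_now, a_past = I[delay:, :, :-span], I[:-delay, :, :-span]   # left input
    b_now, b_past = I[delay:, :, span:],  I[:-delay, :, span:]    # right input
    return a_past * b_now - b_past * a_now

# A bar stepping rightward by one pixel per frame gives a net positive response.
T, H, W = 6, 1, 12
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, 0, 3 + t] = 1.0
print(reichardt_grid(frames).sum())   # > 0, i.e. net rightward signal
```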
13.
Experimental evidence suggests a link between perception and the execution of actions. In particular, it has been proposed that motor programs might directly influence visual action perception. According to this hypothesis, the acquisition of novel motor behaviors should improve their visual recognition, even in the absence of visual learning. We tested this prediction by using a new experimental paradigm that dissociates visual and motor learning during the acquisition of novel motor patterns. The visual recognition of gait patterns from point-light stimuli was assessed before and after nonvisual motor training. During this training, subjects were blindfolded and learned a novel coordinated upper-body movement based only on verbal and haptic feedback. The learned movement matched one of the visual test patterns. Despite the absence of visual stimulation during training, we observed a selective improvement of the visual recognition performance for the learned movement. Furthermore, visual recognition performance after training correlated strongly with the accuracy of the execution of the learned motor pattern. These results prove, for the first time, that motor learning has a direct and highly selective influence on visual action recognition that is not mediated by visual learning.
14.
Neural correlates of chromatic motion perception.
A variety of psychophysical and neurophysiological studies suggest that chromatic motion perception in the primate brain may be performed outside the classical motion processing pathway. We addressed this provocative proposal directly by assessing the sensitivity of neurons in motion area MT to moving colored stimuli while simultaneously determining perceptual sensitivity in nonhuman primate observers. The results of these studies demonstrate a strong correspondence between neuronal and perceptual measures. Our findings indicate that area MT is indeed a principal component of the neuronal substrate for color-based motion processing.
15.
Rose D, Blake R. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences. 1998;353(1371):967-980.
When human observers view dynamic random noise, such as television 'snow', through a curved or annular aperture, they experience a compelling illusion that the noise is moving smoothly and coherently around the curve (the 'omega effect'). In several series of experiments, we have investigated the conditions under which this effect occurs and the possible mechanisms that might cause it. We contrast the omega effect with 'phi motion', seen when an object suddenly changes position. Our conclusions are that the visual scene is first segmented into objects before a coherent velocity is assigned to the texture on each object's surface. The omega effect arises because there are motion mechanisms that deal specifically with object rotation and these interact with pattern mechanisms sensitive to curvature.
16.
Tobimatsu S, Goto Y, Yamasaki T, Tsurusawa R, Taniwaki T. Journal of Physiological Anthropology and Applied Human Science. 2004;23(6):273-276.
The neural mechanisms for the perception of face and motion were studied using psychophysical threshold measurements, event-related potentials (ERPs), and functional magnetic resonance imaging (fMRI). A face-specific ERP component, N170, was recorded over the posterior temporal cortex. Removal of the high-spatial-frequency components of the face altered the perception of familiar faces significantly, and familiarity can facilitate the cortico-cortical processing of face perception. Similarly, the high-spatial-frequency components of the face seemed to be crucial for the recognition of facial expressions. Aging and visuospatial impairments affected motion perception significantly. Two distinct components of motion ERPs, N170 and P200, were recorded over the parietal region. The former was related to horizontal motion perception while the latter reflected the perception of radial optic flow motion. The results of fMRI showed that horizontal movements of objects and radial optic flow motion were perceived differently in V5/MT and the superior parietal lobe. We conclude that an integrated approach can provide useful information on spatial and temporal processing of face and motion non-invasively.
17.
Laminar cortical dynamics of visual form and motion interactions during coherent object motion perception
How do visual form and motion processes cooperate to compute object motion when each process separately is insufficient? Consider, for example, a deer moving behind a bush. Here the partially occluded fragments of motion signals available to an observer must be coherently grouped into the motion of a single object. A 3D FORMOTION model comprises five important functional interactions involving the brain's form and motion systems that address such situations. Because the model's stages are analogous to areas of the primate visual system, we refer to the stages by corresponding anatomical names. In one of these functional interactions, 3D boundary representations, in which figures are separated from their backgrounds, are formed in cortical area V2. These depth-selective V2 boundaries select motion signals at the appropriate depths in MT via V2-to-MT signals. In another, motion signals in MT disambiguate locally incomplete or ambiguous boundary signals in V2 via MT-to-V1-to-V2 feedback. The third functional property concerns resolution of the aperture problem along straight moving contours by propagating the influence of unambiguous motion signals generated at contour terminators or corners. Here, sparse 'feature tracking signals' from, for example, line ends are amplified to overwhelm numerically superior ambiguous motion signals along line segment interiors. In the fourth, a spatially anisotropic motion grouping process takes place across perceptual space via MT-MST feedback to integrate veridical feature-tracking and ambiguous motion signals to determine a global object motion percept. The fifth property uses the MT-MST feedback loop to convey an attentional priming signal from higher brain areas back to V1 and V2. The model's use of mechanisms such as divisive normalization, endstopping, cross-orientation inhibition, and long-range cooperation is described. Simulated data include: the degree of motion coherence of rotating shapes observed through apertures, the coherent vs. element motion percepts separated in depth during the chopsticks illusion, and the rigid vs. nonrigid appearance of rotating ellipses.
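Of the mechanisms listed at the end of the abstract, divisive normalization is easy to show in isolation; a generic textbook form of it (not the FORMOTION circuit itself) looks like this:

```python
import numpy as np

def divisive_normalization(drive, sigma=0.1, n=2.0):
    """Generic divisive normalization: each unit's response is its
    excitatory drive raised to a power, divided by a semi-saturation
    constant plus the pooled drive of the whole population:
        R_i = E_i^n / (sigma^n + sum_j E_j^n)."""
    E = np.asarray(drive, dtype=float) ** n
    return E / (sigma ** n + E.sum())

# A strong input dominates a weak population, but adding more active
# units suppresses every response (contrast/cross-orientation style).
print(divisive_normalization([1.0, 0.2, 0.1]))
print(divisive_normalization([1.0, 0.9, 0.8, 0.7]))
```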
18.
Tracking facilitates 3-D motion estimation
The recently emerging paradigm of Active Vision advocates studying visual problems in the form of modules that are directly related to a visual task for observers that are active. Along these lines, we argue that in many cases when an object is moving in an unrestricted manner (translation and rotation) in the 3D world, we are interested only in the motion's translational components. For a monocular observer, using only the normal flow — the spatio-temporal derivatives of the image intensity function — we solve the problem of computing the direction of translation and the time to collision. We do not use optical flow since its computation is an ill-posed problem, and in the general case it is not the same as the motion field — the projection of 3D motion on the image plane. The basic idea of our motion parameter estimation strategy lies in the employment of fixation and tracking. Fixation simplifies much of the computation by placing the object at the center of the visual field, and the main advantage of tracking is the accumulation of information over time. We show how tracking is accomplished using normal flow measurements and use it for two different tasks in the solution process. First, it serves as a tool to compensate for the absence of an optical flow field and thus to estimate the translation parallel to the image plane; and second, it gathers information about the motion component perpendicular to the image plane.
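A toy sketch of the two quantities the abstract builds on: normal flow computed directly from spatio-temporal derivatives, and a time-to-collision estimate obtained from it under a strong simplifying assumption stated in the comments (pure translation along the optical axis with fixation at the image centre). The paper's actual fixation-and-tracking strategy is more general than this:

```python
import numpy as np

def normal_flow(prev, curr, eps=1e-6):
    """Normal flow: the component of image motion along the brightness
    gradient, u_n = -I_t / |grad I|, computable directly from
    spatio-temporal derivatives."""
    I0, I1 = np.asarray(prev, float), np.asarray(curr, float)
    It = I1 - I0
    Iy, Ix = np.gradient((I0 + I1) / 2.0)
    mag = np.sqrt(Ix**2 + Iy**2) + eps
    return -It / mag, Ix / mag, Iy / mag          # speed along unit gradient (gx, gy)

def time_to_collision(prev, curr):
    """Toy estimate under a simplifying assumption: the camera translates
    along its optical axis toward a frontal surface while fixating the
    image centre, so the true flow is radial, w = (x, y) / tau.  Each
    normal-flow measurement then constrains 1/tau, fitted by least squares."""
    un, gx, gy = normal_flow(prev, curr)
    H, W = un.shape
    y, x = np.mgrid[0:H, 0:W]
    x, y = x - W / 2.0, y - H / 2.0               # image-centred coordinates
    r = x * gx + y * gy                           # radial component along the gradient
    inv_tau = np.sum(un * r) / (np.sum(r**2) + 1e-12)
    return 1.0 / inv_tau                          # frames until collision
```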
19.
Stationary objects appear to move in the opposite direction to a pursuit eye movement (Filehne illusion) and moving objects appear slower when pursued (Aubert-Fleischl phenomenon). Both illusions imply that extra-retinal, eye-velocity signals lead to lower estimates of speed than corresponding retinal motion signals. Intriguingly, the velocity (i.e. speed and direction) of the Filehne illusion depends on the age of the observer, especially for brief display durations (Wertheim and Bekkering, 1992). This suggests relative signal size changes as the visual system matures. To test the signal-size hypothesis, we compared the Filehne illusion and Aubert-Fleischl phenomenon in young and old observers using short and long display durations. The trends in the Filehne data were similar to those reported by Wertheim and Bekkering. However, we found no evidence for an effect of age or duration in the Aubert-Fleischl phenomenon. The differences between the two illusions could not be reconciled on the basis of actual eye movements made. The findings suggest a more complicated explanation of the combined influence of age and duration on head-centred motion perception than that described by the signal-size hypothesis.
20.
We usually perceive a stationary, stable world and we are able to correctly estimate the direction of heading from optic flow despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals predicting the visual consequences of an eye movement. Here we demonstrate that the reference signal predicting the consequences of smooth-pursuit eye movements is continuously calibrated on the basis of direction-selective interactions between the pursuit motor command and the rotational flow induced by the eye movement, thereby minimizing imperfections of the reference signal and guaranteeing an ecologically optimal interpretation of visual motion.