Similar Articles
 Found 20 similar articles (search time: 31 ms)
1.
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back-estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
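
As a concrete reference point, the classic aggregate scale-up estimator (not the paper's new estimator or its trimming procedure) can be written in a few lines. A minimal sketch in Python, with all population figures hypothetical and survey weights folded in as the abstract describes:

```python
import numpy as np

def nsum_estimate(y_hidden, known_counts, known_sizes, N, weights=None):
    """Basic aggregate network scale-up estimate of a hidden population.

    y_hidden     : (n,) ties each respondent reports to the hidden group
    known_counts : (n, K) ties reported to K subpopulations of known size
    known_sizes  : (K,) true sizes of those known subpopulations
    N            : total population size
    weights      : optional (n,) survey weights (uniform if omitted)
    """
    y_hidden = np.asarray(y_hidden, dtype=float)
    known_counts = np.asarray(known_counts, dtype=float)
    known_sizes = np.asarray(known_sizes, dtype=float)
    w = np.ones_like(y_hidden) if weights is None else np.asarray(weights, float)

    # Estimate each respondent's personal network size ("degree") by scaling
    # their total visibility across the known subpopulations up to N.
    degrees = N * known_counts.sum(axis=1) / known_sizes.sum()

    # Classic aggregate-ratio estimator, with survey weights applied to both
    # the reported ties and the estimated degrees.
    return N * np.sum(w * y_hidden) / np.sum(w * degrees)

# Toy example: 3 respondents, 2 known subpopulations, invented figures.
est = nsum_estimate(
    y_hidden=[2, 0, 1],
    known_counts=[[3, 5], [1, 2], [4, 4]],
    known_sizes=[50_000, 80_000],
    N=1_900_000,
)
print(f"estimated hidden population size: {est:,.0f}")
```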

2.
Light plays a fundamental role in the ecology of organisms in nearly all habitats on Earth and is central for processes such as vision and the entrainment of the circadian clock. The poles represent extreme light regimes with an annual light cycle including periods of Midnight Sun and Polar Night. The Arctic Ocean extends to the North Pole, and marine light extremes reach their maximum extent in this habitat. During the Polar Night, traditional definitions of day and night and seasonal photoperiod become irrelevant since there are only “twilight” periods defined by the sun’s elevation below the horizon at midday; we term this “midday twilight.” Here, we characterize light across a latitudinal gradient (76.5° N to 81° N) during Polar Night in January. Our light measurements demonstrate that the classical solar diel light cycle dominant at lower latitudes is modulated during Arctic Polar Night by lunar and auroral components. We therefore question whether this particular ambient light environment is relevant to behavioral and visual processes. We reveal from acoustic field observations that the zooplankton community is undergoing diel vertical migration (DVM) behavior. Furthermore, using electroretinogram (ERG) recording under constant darkness, we show that the main migratory species, Arctic krill (Thysanoessa inermis), shows endogenous increases in visual sensitivity during the subjective night. This change in sensitivity is comparable to that under exogenous dim light acclimations, although differences in speed of vision suggest separate mechanisms. We conclude that the extremely weak midday twilight experienced by krill at high latitudes during the darkest parts of the year has physiological and ecological relevance.

This study shows that ambient light cycles set an internal rhythm that controls the visual sensitivity of Arctic krill during the Polar Night, the darkest part of the year, when the sun remains below the horizon all day. This demonstrates that biologically relevant photoperiods can be achieved during this time of “midday twilight.”

3.
Why is Real-World Visual Object Recognition Hard?
Progress in understanding the brain mechanisms underlying vision requires the construction of computational models that not only emulate the brain's anatomy and physiology, but ultimately match its performance on visual tasks. In recent years, “natural” images have become popular in the study of vision and have been used to show apparently impressive progress in building such models. Here, we challenge the use of uncontrolled “natural” images in guiding that progress. In particular, we show that a simple V1-like model—a neuroscientist's “null” model, which should perform poorly at real-world visual object recognition tasks—outperforms state-of-the-art object recognition systems (biologically inspired and otherwise) on a standard, ostensibly natural image recognition test. As a counterpoint, we designed a “simpler” recognition test to better span the real-world variation in object pose, position, and scale, and we show that this test correctly exposes the inadequacy of the V1-like model. Taken together, these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction. Instead, we reexamine what it means for images to be natural and argue for a renewed focus on the core problem of object recognition—real-world image variation.
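
For readers unfamiliar with what a "V1-like null model" involves, a minimal sketch of the general idea follows: oriented Gabor filtering, rectification, and spatial pooling, with the resulting feature vector handed to a simple classifier. This is only the flavour of model the paper names, not its actual implementation or benchmark:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor(size=16, theta=0.0, freq=0.25, sigma=4.0):
    """One V1-like Gabor filter: a cosine carrier under a Gaussian envelope."""
    ax = np.arange(size) - size / 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def v1_like_features(img, n_orientations=8):
    """Rectified, spatially pooled responses of an oriented filter bank."""
    feats = []
    for k in range(n_orientations):
        g = gabor(theta=np.pi * k / n_orientations)
        resp = np.abs(convolve2d(img, g, mode="valid"))  # rectification
        feats.append(resp.mean())                        # spatial pooling
    return np.array(feats)

img = np.random.default_rng(0).random((64, 64))  # stand-in "image"
print(v1_like_features(img))
```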

4.
The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors (“martial arts” vs. “Indian dance”). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species.

5.
Participants tasted two cups of coffee, decided which they preferred, and then rated each coffee. They were falsely told that one of the cups contained “eco-friendly” coffee while the other did not; in reality, the two cups contained identical coffee. In Experiments 1 and 3, but not in Experiment 2, the participants were also told which cup contained which type of coffee before they tasted. The participants preferred the taste of, and were willing to pay more for, the “eco-friendly” coffee, at least those who scored high on a questionnaire on attitudes toward sustainable consumer behavior (Experiment 1). High-sustainability consumers were also willing to pay more for “eco-friendly” coffee, even when they were told, after their decision, that they preferred the non-labeled alternative (Experiment 2). Moreover, the eco-label effect does not appear to be a consequence of social desirability, as participants were just as biased when reporting the taste estimates and willingness to pay anonymously (Experiment 3). Eco labels not only promote a willingness to pay more for the product but also lead to a more favorable perceptual experience of it.

6.
Abundance estimation of carnivore populations is difficult and has prompted the use of non-invasive detection methods, such as remotely-triggered cameras, to collect data. To analyze photo data, studies focusing on carnivores with unique pelage patterns have utilized a mark-recapture framework and studies of carnivores without unique pelage patterns have used a mark-resight framework. We compared mark-resight and mark-recapture estimation methods to estimate bobcat (Lynx rufus) population sizes, which motivated the development of a new "hybrid" mark-resight model as an alternative to traditional methods. We deployed a sampling grid of 30 cameras throughout the urban southern California study area. Additionally, we physically captured and marked a subset of the bobcat population with GPS telemetry collars. Since we could identify individual bobcats with photos of unique pelage patterns and a subset of the population was physically marked, we were able to use traditional mark-recapture and mark-resight methods, as well as the new “hybrid” mark-resight model we developed to estimate bobcat abundance. We recorded 109 bobcat photos during 4,669 camera nights and physically marked 27 bobcats with GPS telemetry collars. Abundance estimates produced by the traditional mark-recapture, traditional mark-resight, and “hybrid” mark-resight methods were similar; however, precision differed depending on the model used. Traditional mark-recapture and mark-resight estimates were relatively imprecise with percent confidence interval lengths exceeding 100% of point estimates. Hybrid mark-resight models produced better precision with percent confidence intervals not exceeding 57%. The increased precision of the hybrid mark-resight method stems from utilizing the complete encounter histories of physically marked individuals (including those never detected by a camera trap) and the encounter histories of naturally marked individuals detected at camera traps. This new estimator may be particularly useful for estimating abundance of uniquely identifiable species that are difficult to sample using camera traps alone.
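
For orientation, the simplest closed-form estimator in this family is the bias-corrected Lincoln-Petersen (Chapman) estimator. The sketch below uses hypothetical detection counts and is emphatically not the study's likelihood-based hybrid model, which works from full encounter histories:

```python
def chapman_estimate(n_marked, n_detected, n_marked_detected):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.

    n_marked          : animals physically marked (e.g., GPS-collared)
    n_detected        : animals individually identified in the resight survey
    n_marked_detected : marked animals among those detected
    """
    return (n_marked + 1) * (n_detected + 1) / (n_marked_detected + 1) - 1

# Hypothetical counts for illustration only; the study's hybrid model uses
# complete encounter histories, not this single closed-form estimate.
print(chapman_estimate(n_marked=27, n_detected=20, n_marked_detected=9))
```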

7.
Lightness illusions are fundamental to human perception, and yet why we see them is still the focus of much research. Here we address the question by modelling not human physiology or perception directly as is typically the case but our natural visual world and the need for robust behaviour. Artificial neural networks were trained to predict the reflectance of surfaces in a synthetic ecology consisting of 3-D “dead-leaves” scenes under non-uniform illumination. The networks learned to solve this task accurately and robustly given only ambiguous sense data. In addition—and as a direct consequence of their experience—the networks also made systematic “errors” in their behaviour commensurate with human illusions, including brightness contrast and assimilation—although assimilation (specifically White's illusion) only emerged when the virtual ecology included 3-D, as opposed to 2-D, scenes. Subtle variations in these illusions, also found in human perception, were observed, such as the asymmetry of brightness contrast. These data suggest that “illusions” arise in humans because (i) natural stimuli are ambiguous, and (ii) this ambiguity is resolved empirically by encoding the statistical relationship between images and scenes in past visual experience. Since resolving stimulus ambiguity is a challenge faced by all visual systems, a corollary of these findings is that human illusions must be experienced by all visual animals regardless of their particular neural machinery. The data also provide a more formal definition of illusion: the condition in which the true source of a stimulus differs from its most likely (and thus perceived) source. As such, illusions are not fundamentally different from non-illusory percepts, all being direct manifestations of the statistical relationship between images and scenes.
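
The core logic, that an estimator trained on an ambiguous ecology makes contrast-like "errors", can be illustrated even with a linear stand-in for the paper's neural networks. A toy sketch with an invented luminance-equals-reflectance-times-illumination ecology (all distributions hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "dead-leaves"-style ecology: luminance = reflectance * illumination,
# so a patch's luminance alone is ambiguous; the surround luminance carries an
# imperfect cue to the shared illuminant.
n = 50_000
refl = rng.uniform(0.05, 0.95, n)
illum = rng.uniform(0.2, 1.0, n)
lum = refl * illum
surround = illum * rng.uniform(0.3, 0.7, n)

# "Train" a linear estimator of reflectance on the ecology's statistics.
X = np.column_stack([lum, surround, np.ones(n)])
w, *_ = np.linalg.lstsq(X, refl, rcond=None)

def predicted_reflectance(lum, surround):
    return np.array([lum, surround, 1.0]) @ w

# Identical target luminance on a dark vs. a light surround: the estimator
# reports a higher reflectance on the dark surround -- a brightness-contrast-
# like "error" that is nevertheless optimal given the training statistics.
print(predicted_reflectance(0.3, 0.1), predicted_reflectance(0.3, 0.5))
```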

8.
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable “feeling of knowing” or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics; it seems instead to be a core property of the learning process.
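
The stationary core of such an inference, a posterior over a transition probability whose precision doubles as a confidence readout, can be sketched as follows. The paper's normative model additionally handles unexpected changes in a volatile environment, which this sketch omits:

```python
import numpy as np
from scipy import stats

def transition_posterior(outcomes, a=1.0, b=1.0):
    """Posterior over one transition probability, with a confidence readout.

    outcomes : 0/1 sequence recording which stimulus followed a given stimulus
    a, b     : Beta prior pseudo-counts (uniform prior by default)
    """
    k, n = sum(outcomes), len(outcomes)
    post = stats.beta(a + k, b + n - k)
    # A simple stand-in for subjective confidence: the log-precision of the
    # posterior. It rises with the number of observations in a stable period,
    # mirroring one property the paper reports for human confidence.
    return post.mean(), -np.log(post.var())

print(transition_posterior([1, 0, 1, 1, 1, 0, 1]))
```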

9.
Rapidly growing public gene expression databases contain a wealth of data for building an unprecedentedly detailed picture of human biology and disease. These data come from many diverse measurement platforms, which makes integrating them difficult. Although RNA-sequencing (RNA-seq) is attracting the most attention, at present the rate of new microarray studies submitted to public databases far exceeds the rate of new RNA-seq studies. There is clearly a need for methods that make it easier to combine data from different technologies. In this paper, we propose a new method for processing RNA-seq data that yields gene expression estimates much more similar to corresponding estimates from microarray data, hence greatly improving cross-platform comparability. The method, which we call PREBS, is based on estimating expression from RNA-seq reads overlapping the microarray probe regions, and processing these estimates with standard microarray summarisation algorithms. Using paired microarray and RNA-seq samples from the TCGA LAML data set, we show that PREBS expression estimates derived from RNA-seq are more similar to microarray-based expression estimates than those from other RNA-seq processing methods. In an experiment to retrieve paired microarray samples from a database using an RNA-seq query sample, gene signatures defined based on PREBS expression estimates were found to be much more accurate than those from other methods. PREBS also allows new ways of using RNA-seq data, such as expression estimation for microarray probe sets. An implementation of the proposed method is available in the Bioconductor package “prebs”.
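
The first PREBS step, counting RNA-seq reads that overlap microarray probe regions, reduces to an interval-overlap count. A naive sketch of that step only; the Bioconductor package uses efficient interval machinery, and the downstream microarray-style summarisation (e.g., RMA) is omitted here:

```python
from collections import defaultdict

def probe_region_counts(reads, probes):
    """Count RNA-seq reads overlapping microarray probe regions.

    reads  : iterable of (chrom, start, end) read alignments
    probes : dict probe_id -> (chrom, start, end), half-open coordinates
    """
    counts = defaultdict(int)
    for pid, (pc, ps, pe) in probes.items():
        for rc, rs, re in reads:
            if rc == pc and rs < pe and re > ps:  # half-open overlap test
                counts[pid] += 1
    return dict(counts)

reads = [("chr1", 100, 150), ("chr1", 140, 190), ("chr2", 5, 60)]
probes = {"p1": ("chr1", 120, 145), "p2": ("chr2", 0, 50)}
print(probe_region_counts(reads, probes))  # {'p1': 2, 'p2': 1}
```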

10.
To increase our basic understanding of the ecology and evolution of conjugative plasmids, we need reliable estimates of their rate of transfer between bacterial cells. Current assays to measure transfer rate are based on deterministic modeling frameworks. However, some cell numbers in these assays can be very small, making estimates that rely on these numbers prone to noise. Here, we take a different approach to estimate plasmid transfer rate, which explicitly embraces this noise. Inspired by the classic fluctuation analysis of Luria and Delbrück, our method is grounded in a stochastic modeling framework. In addition to capturing the random nature of plasmid conjugation, our new methodology, the Luria–Delbrück method (“LDM”), can be used on a diverse set of bacterial systems, including cases for which current approaches are inaccurate. A notable example involves plasmid transfer between different strains or species where the rate that one type of cell donates the plasmid is not equal to the rate at which the other cell type donates. Asymmetry in these rates has the potential to bias or constrain current transfer estimates, thereby limiting our capabilities for estimating transfer in microbial communities. In contrast, the LDM overcomes obstacles of traditional methods by avoiding restrictive assumptions about growth and transfer rates for each population within the assay. Using stochastic simulations and experiments, we show that the LDM has high accuracy and precision for estimation of transfer rates compared to the most widely used methods, which can produce estimates that differ from the LDM estimate by orders of magnitude.

Plasmid transfer can often spread resistance between important clinical pathogens. This study shows that widely used methods can lead to estimates of plasmid transfer rate that are biased by several orders of magnitude, and presents a new approach, inspired by the classic Luria–Delbrück fluctuation analysis, for accurately assessing this fundamental rate parameter.
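
The Luria-Delbrück-style logic can be sketched with a "p0" estimator: if conjugation events are approximately Poisson, the fraction of parallel cultures with zero transconjugants identifies the transfer rate. A minimal sketch under the simplifying assumption of constant exponential growth of donors and recipients; the published LDM adds design criteria and corrections this sketch omits, and all numbers are invented:

```python
import numpy as np

def ldm_transfer_rate(p0, D0, R0, t, psi_d, psi_r):
    """p0-style estimate of the donor-to-recipient plasmid transfer rate.

    p0          : fraction of parallel cultures with zero transconjugants at t
    D0, R0      : initial donor and recipient densities
    psi_d, psi_r: donor and recipient exponential growth rates

    Model: transfer events form a Poisson process with intensity
    gamma * D(t) * R(t); under exponential growth the expected event count is
    gamma * D0 * R0 * (exp((psi_d + psi_r) * t) - 1) / (psi_d + psi_r),
    and P(zero transconjugants) = exp(-expected count), inverted for gamma.
    """
    psi = psi_d + psi_r
    cumulative_pairs = D0 * R0 * (np.exp(psi * t) - 1.0) / psi
    return -np.log(p0) / cumulative_pairs

# Invented numbers: 30 of 48 parallel cultures had no transconjugants.
print(ldm_transfer_rate(p0=30 / 48, D0=1e3, R0=1e3, t=3.0, psi_d=1.0, psi_r=1.0))
```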

11.
We investigate the effect of spatial categories on visual perception. In three experiments, participants made same/different judgments on pairs of simultaneously presented dot-cross configurations. For different trials, the position of the dot within each cross could differ with respect to either categorical spatial relations (the dots occupied different quadrants) or coordinate spatial relations (the dots occupied different positions within the same quadrant). The dot-cross configurations also varied in how readily the dot position could be lexicalized. In harder-to-name trials, crosses formed a “+” shape such that each quadrant was associated with two discrete lexicalized spatial categories (e.g., “above” and “left”). In easier-to-name trials, both crosses were rotated 45° to form an “×” shape such that quadrants were unambiguously associated with a single lexicalized spatial category (e.g., “above” or “left”). In Experiment 1, participants were more accurate when discriminating categorical information between easier-to-name categories and more accurate at discriminating coordinate spatial information within harder-to-name categories. Subsequent experiments attempted to down-regulate or up-regulate the involvement of language in task performance. Results from Experiment 2 (verbal interference) and Experiment 3 (verbal training) suggest that the observed spatial relation type-by-nameability interaction is resistant to online language manipulations previously shown to affect color and object-based perceptual processing. The results across all three experiments suggest that robust biases in the visual perception of spatial relations correlate with patterns of lexicalization, but do not appear to be modulated by language online.

12.
Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators, the F3x4-style estimators, is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a “corrected” empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the F3x4 approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the F3x4-style estimators.
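
The uncorrected convention and the source of its bias are easy to exhibit: multiplying position-specific nucleotide frequencies assigns probability mass to stop codons, and plain renormalisation does not undo the resulting distortion. A sketch of the F3x4-style computation only; the paper's corrected estimator instead solves numerically for position frequencies whose conditional sense-codon distribution matches the data:

```python
import itertools
import numpy as np

STOPS = {"TAA", "TAG", "TGA"}  # universal genetic code
NUCS = "ACGT"

def f3x4_frequencies(pos_freqs):
    """F3x4-style codon frequencies from position-specific nucleotide
    frequencies, renormalised over the 61 sense codons.

    pos_freqs : (3, 4) array; rows = codon position, columns = A, C, G, T.

    The product-of-positions construction assigns mass to stop codons;
    plain renormalisation, as done here, is the uncorrected convention.
    """
    pos_freqs = np.asarray(pos_freqs, dtype=float)
    freqs = {}
    for codon in itertools.product(NUCS, repeat=3):
        c = "".join(codon)
        if c in STOPS:
            continue
        freqs[c] = np.prod([pos_freqs[i, NUCS.index(b)] for i, b in enumerate(c)])
    total = sum(freqs.values())
    return {c: f / total for c, f in freqs.items()}

uniform = np.full((3, 4), 0.25)
print(sum(f3x4_frequencies(uniform).values()))  # 1.0 across 61 sense codons
```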

13.
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
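
The "compulsory averaging" prediction of such a population-coding account can be reproduced in a few lines: when target and flanker orientations both drive the same pool of tuned units, a population-vector readout lands between them. A toy sketch with invented tuning parameters, not the paper's model:

```python
import numpy as np

PREFS = np.linspace(0, 180, 36, endpoint=False)  # preferred orientations (deg)

def population_response(orientations, kappa=4.0):
    """Summed response of orientation-tuned units to all stimuli that fall
    inside one integration field. Tuning is von Mises with 180-deg period."""
    resp = np.zeros_like(PREFS)
    for ori in orientations:
        resp += np.exp(kappa * np.cos(np.deg2rad(2 * (PREFS - ori))))
    return resp

def decode(resp):
    """Population-vector readout of the encoded orientation (degrees)."""
    angle = np.angle(np.sum(resp * np.exp(1j * np.deg2rad(2 * PREFS))))
    return (np.rad2deg(angle) / 2) % 180

resp = population_response([10, 40])  # target at 10 deg, flanker at 40 deg
print(decode(resp))  # ~25 deg: the pair is "compulsorily averaged"
```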

14.
We respond more quickly to our own face than to other faces, but there is debate over whether this is connected to attention-grabbing properties of the self-face. In two experiments, we investigate whether the self-face selectively captures attention, and the attentional conditions under which this might occur. In both experiments, we examined whether different types of face (self, friend, stranger) provide differential levels of distraction when processing self, friend, and stranger names. In Experiment 1, an image of a distractor face appeared centrally – inside the focus of attention – behind a target name, with the faces either upright or inverted. In Experiment 2, distractor faces appeared peripherally – outside the focus of attention – in the left or right visual field, or bilaterally. In both experiments, self-name recognition was faster than other-name recognition, suggesting a self-referential processing advantage. The presence of the self-face did not cause more distraction in the naming task compared to other types of face, either when presented inside (Experiment 1) or outside (Experiment 2) the focus of attention. Distractor faces had different effects across the two experiments: when presented inside the focus of attention (Experiment 1), self and friend images facilitated self and friend naming, respectively. This was not true for stranger stimuli, suggesting that faces must be robustly represented to facilitate name recognition. When presented outside the focus of attention (Experiment 2), no facilitation occurred. Instead, we report an interesting distraction effect caused by friend faces when processing strangers’ names. We interpret this as a “social importance” effect, whereby we may be tuned to pick out and pay attention to familiar friend faces in a crowd. We conclude that any speed of processing advantages observed in the self-face processing literature are not driven by automatic attention capture.

15.
An increasing number of neuroscience papers capitalize on the assumption, published in this journal, that visual speech is typically 150 ms ahead of auditory speech. However, the estimation of audiovisual asynchrony in the reference paper is valid only in very specific cases: for isolated consonant-vowel syllables or at the beginning of a speech utterance, in what we call “preparatory gestures”. When syllables are chained in sequences, as they typically are in most parts of a natural speech utterance, asynchrony should be defined differently. This is what we call “comodulatory gestures”, which provide auditory and visual events more or less in synchrony. We provide audiovisual data on sequences of plosive-vowel syllables (pa, ta, ka, ba, da, ga, ma, na) showing that audiovisual synchrony is actually rather precise, varying between 20 ms audio lead and 70 ms audio lag. We show how more complex speech material should result in a range typically varying between 40 ms audio lead and 200 ms audio lag, and we discuss how this natural coordination is reflected in the so-called temporal integration window for audiovisual speech perception. Finally, we present a toy model of auditory and audiovisual predictive coding, showing that visual lead is actually not necessary for visual prediction.

16.
Human-caused climate change is happening; nearly all climate scientists are convinced of this basic fact, according to surveys of experts and reviews of the peer-reviewed literature. Yet, among the American public, there is widespread misunderstanding of this scientific consensus. In this paper, we report results from two experiments, conducted with national samples of American adults, that tested messages designed to convey the high level of agreement in the climate science community about human-caused climate change. The first experiment tested hypotheses about providing numeric versus non-numeric assertions concerning the level of scientific agreement. We found that numeric statements resulted in higher estimates of scientific agreement. The second experiment tested the effect of eliciting respondents’ estimates of scientific agreement prior to presenting them with a statement about the level of scientific agreement. Participants who estimated the level of agreement prior to being shown the corrective statement gave higher estimates of the scientific consensus than respondents who were not asked to estimate in advance, indicating that incorporating an “estimation and reveal” technique into public communication about scientific consensus may be effective. The interaction of messages with political ideology was also tested; messages were approximately equally effective among liberals and conservatives. Implications for theory and practice are discussed.

17.
Karin Meyer & Mark Kirkpatrick, Genetics, 2010, 185(3): 1097-1110
Obtaining accurate estimates of the genetic covariance matrix G for multivariate data is a fundamental task in quantitative genetics and important for both evolutionary biologists and plant or animal breeders. Classical methods for estimating G are well known to suffer from substantial sampling errors; importantly, its leading eigenvalues are systematically overestimated. This article proposes a framework that exploits information in the phenotypic covariance matrix P in a new way to obtain more accurate estimates of G. The approach focuses on the “canonical heritabilities” (the eigenvalues of P⁻¹G), which may be estimated with more precision than those of G because P is estimated more accurately. Our method uses penalized maximum likelihood and shrinkage to reduce bias in estimates of the canonical heritabilities. This in turn can be exploited to obtain substantial reductions in bias for estimates of the eigenvalues of G and a reduction in sampling errors for estimates of G. Simulations show that improvements are greatest when sample sizes are small and the canonical heritabilities are closely spaced. An application to data from beef cattle demonstrates the efficacy of this approach and the effect on estimates of heritabilities and correlations. Penalized estimation is recommended for multivariate analyses involving more than a few traits or problems with limited data.

Quantitative geneticists, including evolutionary biologists and plant and animal breeders, are increasingly dependent on multivariate analyses of genetic variation, for example, to understand evolutionary constraints and design efficient selection programs. New challenges arise when one moves from estimating the genetic variance of a single phenotype to the multivariate setting. An important but unresolved issue is how best to deal with sampling variation and the corresponding bias in the eigenvalues of estimates for the genetic covariance matrix, G. It is well known that estimates for the largest eigenvalues of a covariance matrix are biased upward and those for the smallest eigenvalues are biased downward (Lawley 1956; Hayes and Hill 1981). For genetic problems, where we need to estimate at least two covariance matrices simultaneously, this tends to be exacerbated, especially for G. In turn, this can result in invalid estimates of G, i.e., estimates with negative eigenvalues, and can produce systematic errors in predictions for the response to selection.

There has been longstanding interest in “regularization” of covariance matrices, in particular for cases where the ratio between the number of observations and the number of variables is small. Various studies recently employed such techniques for the analysis of high-dimensional, genomic data. In general, this involves a compromise between additional bias and reduced sampling variation of “improved” estimators that have less statistical risk than standard methods (Bickel and Li 2006). For instance, various types of shrinkage estimators of covariance matrices have been suggested that counteract bias in estimates of eigenvalues by shrinking all sample eigenvalues toward their mean. Often this is equivalent to a weighted combination of the sample covariance matrix and a target matrix assumed to have a simple structure. A common choice for the latter is an identity matrix, which yields a ridge-regression-type formulation (Hoerl and Kennard 1970). Numerous simulation studies in a variety of settings demonstrate that regularization can yield closer agreement between estimated and population covariance matrices, less variable estimates of model terms, or improved performance of statistical tests.

In quantitative genetic analyses, we attempt to partition observed, overall (phenotypic) covariances into their genetic and environmental components. Typically, this results in strong sampling correlations between them. Hence, while the partitioning into sources of variation and estimates of individual covariance matrices may be subject to substantial sampling variances, their sum, i.e., the phenotypic covariance matrix, can generally be estimated much more accurately. This has led to suggestions to “borrow strength” from estimates of phenotypic components to estimate the genetic covariances. In particular, Hayes and Hill (1981) proposed a method termed “bending” that involved regressing the eigenvalues of the product of the genetic and the inverse of the phenotypic covariance matrix toward their mean. One objective of this procedure was to ensure that estimates of the genetic covariance matrix from an analysis of variance were positive definite. In addition, the authors showed by simulation that shrinking eigenvalues even further than needed to make all values nonnegative could improve the achieved response to selection when using the resulting estimates to derive weights for a selection index, especially for estimation based on small samples. Subsequent work demonstrated that bending could also be advantageous in more general scenarios, such as indexes that included information from relatives (Meyer and Hill 1983).

Modern, mixed-model (“animal model”) analyses to estimate genetic parameters using maximum likelihood or Bayesian methods generally constrain estimates to the parameter space, so that—at the expense of introducing some bias—estimates of covariance matrices are positive semidefinite. However, the problems arising from substantial sampling variation in multivariate analyses remain. In spite of increasing applications of such analyses in scenarios where data sets are invariably small, e.g., the analysis of data from natural populations (e.g., Kruuk et al. 2008), there has been little interest in regularization and shrinkage techniques in genetic parameter estimation, other than through the use of informative priors in a Bayesian context. Instead, suggestions for improved estimation have focused on parsimonious modeling of covariance matrices, e.g., through reduced-rank estimation or by imposing a known structure, such as a factor-analytic structure (Kirkpatrick and Meyer 2004; Meyer 2009), or by fitting covariance functions for longitudinal data (Kirkpatrick et al. 1990). While such methods can be highly advantageous when the underlying assumptions are at least approximately correct, data-driven methods of regularization may be preferable in other scenarios.

This article explores the scope for improved estimation of genetic covariance matrices by implementing the equivalent of bending within animal-model-type analyses. We begin with a review of the underlying statistical principles (which the impatient reader might skip), examining the concept of improved estimation, its implementation via shrinkage estimators or penalized estimation, and selected applications. We then describe a penalized restricted maximum-likelihood (REML) procedure for the estimation of genetic covariance matrices that utilizes information from their phenotypic counterparts, and present a simulation study demonstrating the effect of penalties on parameter estimates and their sampling properties. The article concludes with an application to a problem relevant in genetic improvement of beef cattle and a discussion.
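
The "bending" idea that the penalty formalizes can be sketched directly: shrink the canonical heritabilities (the eigenvalues of P⁻¹G) toward their mean and reassemble G. A minimal sketch with made-up matrices, not the paper's penalized REML, which applies the shrinkage inside the likelihood:

```python
import numpy as np

def bend(G, P, rho=0.4):
    """Hayes & Hill (1981) "bending": shrink the canonical heritabilities,
    the eigenvalues of inv(P) @ G, toward their mean by a factor rho.

    G, P : estimated genetic and phenotypic covariance matrices
    rho  : shrinkage in [0, 1]; rho = 0 returns G unchanged
    """
    # Work on the symmetric form P^(-1/2) G P^(-1/2), which shares its
    # eigenvalues with inv(P) @ G.
    evalP, evecP = np.linalg.eigh(P)
    P_isqrt = evecP @ np.diag(evalP ** -0.5) @ evecP.T
    lam, U = np.linalg.eigh(P_isqrt @ G @ P_isqrt)
    lam_bent = (1 - rho) * lam + rho * lam.mean()  # shrink toward the mean
    P_sqrt = evecP @ np.diag(evalP ** 0.5) @ evecP.T
    return P_sqrt @ (U @ np.diag(lam_bent) @ U.T) @ P_sqrt

G = np.array([[1.0, 0.9], [0.9, 0.5]])  # indefinite genetic estimate
P = np.array([[2.0, 0.5], [0.5, 2.0]])
print(np.linalg.eigvalsh(bend(G, P)))   # both eigenvalues now positive
```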

18.
The lateral geniculate nucleus (LGN) is increasingly regarded as a “smart-gating” operator for processing visual information. Therefore, characterizing the response properties of LGN neurons will enable us to better understand how neurons encode and transfer visual signals. Efforts have been devoted to studying its anatomical and functional features, and recent advances have highlighted the existence in rodents of complex features such as direction/orientation selectivity. However, unlike in well-researched higher-order mammals such as primates, the full array of response characteristics vis-à-vis morphological features has remained relatively unexplored in the mouse LGN. To address this issue, we recorded from mouse LGN neurons using multisite electrode arrays (MEAs) and analysed their discharge patterns in relation to their location under a series of visual stimulation paradigms. Several response properties paralleled results from earlier studies in the field, including centre-surround organization, size of receptive field, spontaneous firing rate, and linearity of spatial summation. However, our results also revealed “high-pass” and “low-pass” features in the temporal frequency tuning of some cells, and greater average contrast gain than reported by earlier studies. In addition, a small proportion of cells had direction/orientation selectivity. Both “high-pass” and “low-pass” cells, as well as direction- and orientation-selective cells, were found only in small numbers, supporting the notion that these properties emerge in the cortex. ON- and OFF-cells showed distinct contrast sensitivity and temporal frequency tuning properties, suggesting parallel projections from the retina. Incorporating a novel histological technique, we created a 3-D LGN volume model explicitly capturing the morphological features of the mouse LGN and localising individual cells to the anterior, middle, or posterior LGN. Based on this categorization, we show that the ON/OFF, DS/OS, and linear response properties are not regionally restricted. Our study confirms earlier findings of spatial pattern selectivity in the LGN and builds on them to demonstrate that relatively elaborate features are computed early in the visual pathway.

19.
Changing the visual body appearance by use of a virtual reality system, a funny mirror, or binocular glasses has been reported to be helpful in the rehabilitation of pain. However, there are interindividual differences in the analgesic effect of changing the visual body image. We hypothesized that a negative body image associated with changing the visual body appearance causes these interindividual differences, although the relationship between visual body appearance and the analgesic effect has not been clarified. We investigated whether a negative body image associated with changes in the visual body appearance increases pain. Twenty-five healthy individuals participated in this study. To evoke a negative body image, we applied the method of the rubber hand illusion. We created an “injured rubber hand” to evoke unpleasantness associated with pain, a “hairy rubber hand” to evoke unpleasantness associated with embarrassment, and a “twisted rubber hand” to evoke unpleasantness associated with deviation from the concept of normality. We also created a “normal rubber hand” as a control. The pain threshold was measured while the participant observed the rubber hand, using a device that delivered thermal pain stimuli. Body ownership experiences were elicited by observation of the injured rubber hand and hairy rubber hand as well as the normal rubber hand. Participants felt more unpleasantness when observing the injured rubber hand and hairy rubber hand than the normal rubber hand and twisted rubber hand (p<0.001). The pain threshold was lower under the injured rubber hand condition than under the other conditions (p<0.001). We conclude that a negative body image associated with pain can increase pain sensitivity.

20.
Although fluency theory predominates in psychological research on human aesthetics, its most severe limitation may be its inability to explain why art that challenges or even violates easy processing can nevertheless be aesthetically rewarding. We discuss long-standing notions on art’s potential to offer mental growth opportunities and to tap into a basic epistemic predisposition, which hint at an aesthetic pleasure mechanism that counteracts fluency. Based on divergent strands of literature on empirical, evolutionary, and philosophical aesthetics, as well as research on disfluency, we presumed that challenging art requires deliberate reflexive processing at the level of “aboutness” in order to be experientially pleasing. Here, we probed such a cognitive mastering mechanism, achieved by iterative cycles of elaboration, as predicted by our model of aesthetic experiences. The study used two kinds of portraits, one associated with high fluency and one with high stimulation potential (according to the results of an extensive rating study). In Experiment 1, we provided a repeated evaluation task, which revealed a distinctive preference effect for challenging portraits that was absent in the visual exposition conditions of a familiarity task and a mere exposure task (Experiment 2). In a follow-up task (Experiment 3), this preference effect was observed with a novel and more encompassing pool of portraits, which corroborated its stability and robustness. In an explorative stimulus-transfer task (Experiment 4), we investigated the presumed underlying mechanism by testing whether the observed effect would generalize to novel portraits in the same artist-specific styles. Results discounted an alternative interpretation based on a perceptual adaptation effect and hinted at meaning-driven mental activity. Conjointly, the findings for inexperienced viewers were indicative of an elaboration-based mastering mechanism that operated selectively for mentally challenging portraits. Moreover, the findings were in line with a dual-process view of human preference formation with art. Theoretical implications and boundary conditions are discussed.
