Similar Articles
1.
Why is Real-World Visual Object Recognition Hard?
Progress in understanding the brain mechanisms underlying vision requires the construction of computational models that not only emulate the brain's anatomy and physiology, but ultimately match its performance on visual tasks. In recent years, “natural” images have become popular in the study of vision and have been used to show apparently impressive progress in building such models. Here, we challenge the use of uncontrolled “natural” images in guiding that progress. In particular, we show that a simple V1-like model—a neuroscientist's “null” model, which should perform poorly at real-world visual object recognition tasks—outperforms state-of-the-art object recognition systems (biologically inspired and otherwise) on a standard, ostensibly natural image recognition test. As a counterpoint, we designed a “simpler” recognition test to better span the real-world variation in object pose, position, and scale, and we show that this test correctly exposes the inadequacy of the V1-like model. Taken together, these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction. Instead, we reexamine what it means for images to be natural and argue for a renewed focus on the core problem of object recognition—real-world image variation.

2.
A dissociation between visual awareness and visual discrimination is referred to as “blindsight”. Blindsight results from loss of function of the primary visual cortex (V1), which can occur due to cerebrovascular accidents (i.e. stroke-related lesions). There are also numerous reports of similar, though reversible, effects on vision induced by transcranial magnetic stimulation (TMS) of early visual cortex. These effects point to V1 as the “gate” of visual awareness and have strong implications for understanding the neurological underpinnings of consciousness. It has been argued that evidence for the dissociation between awareness of, and responses to, visual stimuli can be a measurement artifact of the use of a high response criterion under yes-no measures of visual awareness when compared with criterion-free forced-choice responses. This difference between yes-no and forced-choice measures suggests that evidence for a dissociation may actually be normal near-threshold conscious vision. Here we describe three experiments that tested visual performance in normal subjects when their visual awareness was suppressed by applying TMS to the occipital pole. The nature of subjects’ performance whilst undergoing occipital TMS was then verified by use of a psychophysical measure (d′) that is independent of response criteria. This showed that there was no genuine dissociation in visual sensitivity measured by yes-no and forced-choice responses. These results highlight that evidence for visual sensitivity in the absence of awareness must be analysed using a bias-free psychophysical measure, such as d′, in order to confirm whether or not visual performance is truly unconscious.
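The bias-free measure the abstract appeals to is easy to make concrete. A minimal sketch (illustrative, not the study's analysis code) of computing d′ from yes-no response counts, with a standard log-linear correction so extreme hit or false-alarm rates do not produce infinite z-scores:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Criterion-free sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each cell (log-linear correction) to avoid rates of
    exactly 0 or 1, which would make the z-transform infinite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A conservative observer may say "no" often, yet still have positive d'
sensitivity = d_prime(30, 20, 5, 45)
```

Because d′ separates sensitivity from response bias, the same value can underlie both a strict yes-no criterion and unbiased forced-choice responding, which is exactly the comparison the study turns on.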

3.
Some journals are using ineffective software to screen images for manipulation. In doing so, they are creating a false sense of security in the research community about the integrity of the image data they publish.

“There must be an easier way!” It's the mantra of anyone performing a labor-intensive task, and the motivation behind the human desire for automation. Apparently, it also holds true for image screening.

Figure 1: ©cartoonbank.com. All Rights Reserved.

At the Rockefeller University Press, we screen all images in all accepted papers for evidence of manipulation (1). We do this by visually inspecting every image using basic adjustments in Photoshop. When editors from other publishers see a demonstration of our process, they often assert, “There must be an easier way!”

The possibility of automating the image screening process was described in a Nature news article more than two years ago (2). About a year ago, one of the largest publisher services providers, Cadmus Communications, started offering an automated image screening service using a program called Rigour, which they publicize as “the world's first automated Image Manipulation Analysis Software” (www.suprocktech.com).

Cadmus demonstrated an early version of this software at the Press, but we found that it could not detect blatant examples of band deletions, band intensity adjustments, large regions of duplication, or composite images. In an e-mail to Cadmus dated September 11, 2007, I expressed my concern: “I am worried about causing a setback in the publishing community if editors think the current Rigour software is effective at detecting problems in biomedical images (specifically gel images). I have already heard of editors saying they will not initiate visual screening because they will just use the Cadmus software. This is creating a false sense of security in the community, because the software is not yet an effective screening tool.” I received no response to this e-mail.

I was surprised to learn that, within a couple of months, Cadmus had started to sell an image screening service to publishers using this software. But given the availability of such a service, I was not surprised to learn that editors at two very prominent journals were using it. Publishers were clearly looking for a less labor-intensive solution to an image problem, in two senses of the word—image data, and public image. They wanted to be seen by the public to be actively addressing the problem of image manipulation.

I asked these publishers if they had tested the service before they started to use it. Both had done so, but one of them declined to send the results of their tests; the other indicated that the Cadmus service had a 20% success rate. It seems that these publishers were not really concerned whether the screening process they used actually worked.

Problems with the service were still evident recently when I was consulted by a third party about a case of image manipulation in a paper published in one of these journals. The paper made a surprising claim with important clinical implications. Given that journal's policy of only screening a fraction of papers for image manipulation, one might expect that they would at least select those with important clinical implications. In fact, the papers are selected at random, and this one had not been screened. After questions were raised, the figures were screened by Cadmus using their software, but they did not detect problems with the images that were easily revealed by visual screening.

In personal communications, publishers have argued that using the Cadmus service must be better than doing nothing. In fact, it is worse than doing nothing. These publishers are creating a false sense of security in the community about the integrity of the image data they publish.

A recent test of the Cadmus image screening service showed some improvement, with the software detecting manipulation in 10 of 22 images (45%) in which manipulation had previously been detected by visual inspection. However, when multiplied by the small fraction of images being screened by these journals, the percentage of images that are effectively screened is dramatically lower. At the very least, these journals should fully disclose their screening practices (and their efficacy) to their readers.

Although complete protection against manipulated images cannot be guaranteed, it is incumbent on journal editors to screen the images they publish using the best available method, not just to some known (and low) percentage of efficacy. The issue of data integrity should not be left to chance and probability. This is scholarly publishing, not blackjack.

There are others developing software to detect image manipulation, and it is possible that these applications may eventually prove to be useful and effective tools for editors. But journal editors should not rely on an automated method for image screening unless they know it is as effective as the visual method. Otherwise, readers are left to hedge their bets.

4.
Various visual functions decline in ageing, and even more so in patients with Alzheimer's disease (AD). Here we investigated whether the complex visual processes involved in ignoring illumination-related variability (specifically, cast shadows) in visual scenes may also be compromised. Participants searched for a discrepant target among items which appeared as posts with shadows cast by light-from-above when upright, but as angled objects when inverted. As in earlier reports, young participants gave slower responses with upright than inverted displays when the shadow-like part was dark but not white (control condition). This is consistent with visual processing mechanisms making shadows difficult to perceive, presumably to assist object recognition under varied illumination. Contrary to predictions, this interaction of “shadow” colour with item orientation was maintained in the healthy older and AD groups. Thus, the processing mechanisms which assist complex light-independent object identification appear to be robust to the effects of both ageing and AD. Importantly, this means that the complexity of a function does not necessarily determine its vulnerability to age- or AD-related decline. We also report slower responses to dark than light “shadows” of either orientation in both ageing and AD, in keeping with increasing light scatter in the ageing eye. Rather curiously, AD patients showed further slowed responses to “shadows” of either colour at the bottom than the top of items, as if they applied shadow-specific rules to non-shadow conditions. This suggests that in AD, shadow-processing mechanisms, while preserved, might be applied in a less selective way.

5.
Within the range of images that we might categorize as a “beach”, for example, some will be more representative of that category than others. Here we first confirmed that humans could categorize “good” exemplars better than “bad” exemplars of six scene categories and then explored whether brain regions previously implicated in natural scene categorization showed a similar sensitivity to how well an image exemplifies a category. In a behavioral experiment participants were more accurate and faster at categorizing good than bad exemplars of natural scenes. In an fMRI experiment participants passively viewed blocks of good or bad exemplars from the same six categories. A multi-voxel pattern classifier trained to discriminate among category blocks showed higher decoding accuracy for good than bad exemplars in the PPA, RSC and V1. This difference in decoding accuracy cannot be explained by differences in overall BOLD signal, as average BOLD activity was either equivalent or higher for bad than good scenes in these areas. These results provide further evidence that V1, RSC and the PPA not only contain information relevant for natural scene categorization, but their activity patterns mirror the fundamentally graded nature of human categories. Analysis of the image statistics of our good and bad exemplars shows that variability in low-level features and image structure is higher among bad than good exemplars. A simulation of our neuroimaging experiment suggests that such a difference in variance could account for the observed differences in decoding accuracy. These results are consistent with both low-level models of scene categorization and models that build categories around a prototype.
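The closing argument, that higher variability among bad exemplars alone can depress decoding accuracy, is easy to reproduce in a toy simulation. A minimal sketch assuming Gaussian "voxel patterns" around category prototypes and a nearest-centroid classifier (the study's actual MVPA pipeline is not specified here):

```python
import random

def simulate(noise, n_cat=6, dim=50, n_train=40, n_test=40, seed=0):
    """Nearest-centroid decoding of category labels from noisy patterns.

    Each category has a fixed random prototype; exemplars are the
    prototype plus Gaussian noise whose scale models exemplar variability.
    Returns decoding accuracy over held-out samples.
    """
    rng = random.Random(seed)
    prototypes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_cat)]

    def sample(c):
        return [m + rng.gauss(0, noise) for m in prototypes[c]]

    # Estimate one centroid per category from noisy training exemplars
    centroids = []
    for c in range(n_cat):
        train = [sample(c) for _ in range(n_train)]
        centroids.append([sum(v) / n_train for v in zip(*train)])

    correct = 0
    for c in range(n_cat):
        for _ in range(n_test):
            x = sample(c)
            dists = [sum((a - b) ** 2 for a, b in zip(x, m)) for m in centroids]
            correct += dists.index(min(dists)) == c
    return correct / (n_cat * n_test)

good = simulate(noise=1.0)  # low exemplar variability ("good" exemplars)
bad = simulate(noise=3.0)   # high exemplar variability ("bad" exemplars)
```

With identical prototypes and classifier, only the variance differs between the two runs, yet `good` reliably exceeds `bad`, mirroring the paper's point that decoding differences need not reflect differences in category information per se.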

6.
Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to “spy” on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI and KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform Box and Block Test tasks under five conditions involving imagery, following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e. the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns differ during VI and KI tasks. Our results suggest gaze metric parameters could be used as an objective, unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
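Both gaze-pattern parameters are simple to compute once saccades and fixations have been segmented. A hedged sketch (function and variable names are illustrative, not those of the SMI software):

```python
def ocular_mobility_index(total_saccade_time, total_fixation_time):
    """Index of ocular mobility: saccade duration / (saccade + fixation duration)."""
    return total_saccade_time / (total_saccade_time + total_fixation_time)

def midline_crossings(gaze_x, midline_x):
    """Count how often the horizontal gaze position crosses the screen midline."""
    sides = [x > midline_x for x in gaze_x if x != midline_x]
    return sum(a != b for a, b in zip(sides, sides[1:]))

print(ocular_mobility_index(1.5, 4.5))                    # 0.25
print(midline_crossings([100, 600, 450, 700], 512))       # 3
```

Higher mobility-index values mean proportionally more time in saccades; the crossing count captures left-right scanning of the display.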

7.
We investigated coordinated movements between the eyes and head (“eye-head coordination”) in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination, because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.

8.
The analysis of motion crowds is concerned with the detection of potential hazards for individuals in the crowd. Existing methods analyze the statistics of pixel motion to classify non-dangerous or dangerous behavior, to detect outlier motions, or to estimate the mean throughput of people for an image region. We suggest a biologically inspired model for the analysis of motion crowds that extracts motion features indicative of potential dangers in crowd behavior. Our model consists of stages for motion detection, integration, and pattern detection that model functions of the primate primary visual cortex (V1), the middle temporal area (MT), and the medial superior temporal area (MST), respectively. This model allows for the processing of motion transparency, the appearance of multiple motions in the same visual region, in addition to processing opaque motion. We suggest that motion transparency helps to identify “danger zones” in motion crowds. For instance, motion transparency occurs in small exit passages during evacuation. However, motion transparency also occurs for non-dangerous crowd behavior, when people moving in opposite directions organize into separate lanes. Our analysis suggests that the combination of motion transparency and a slow motion speed can be used to label candidate regions that contain dangerous behavior. In addition, locally detected decelerations or negative speed gradients of motion are a precursor of danger in crowd behavior, as are globally detected motion patterns that show a contraction toward a single point. In sum, motion transparency, image speeds, motion patterns, and speed gradients extracted from visual motion in videos are important features to describe the behavioral state of a motion crowd.

9.
Short-latency afferent inhibition (SAI) occurs when a single transcranial magnetic stimulation (TMS) pulse delivered over the primary motor cortex is preceded by peripheral electrical nerve stimulation at a short inter-stimulus interval (∼20–28 ms). SAI has been extensively examined at rest, but few studies have examined how this circuit functions in the context of performing a motor task and if this circuit may contribute to surround inhibition. The present study investigated SAI in a muscle involved versus uninvolved in a motor task and specifically during three pre-movement phases; two movement preparation phases between a “warning” and “go” cue and one movement initiation phase between a “go” cue and EMG onset. SAI was tested in the first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles in twelve individuals. In a second experiment, the origin of SAI modulation was investigated by measuring H-reflex amplitudes from FDI and ADM during the motor task. The data indicate that changes in SAI occurred predominantly in the movement initiation phase during which SAI modulation depended on the specific digit involved. Specifically, the greatest reduction in SAI occurred when FDI was involved in the task. In contrast, these effects were not present in ADM. Changes in SAI were primarily mediated via supraspinal mechanisms during movement preparation, while both supraspinal and spinal mechanisms contributed to SAI reduction during movement initiation.
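SAI is conventionally quantified as the ratio of conditioned to unconditioned MEP amplitude. A minimal sketch of that computation (an assumption about the standard convention; the study's exact normalization is not given in the abstract):

```python
def sai_ratio(conditioned_meps, unconditioned_meps):
    """Mean conditioned MEP amplitude over mean unconditioned MEP amplitude.

    Values below 1 indicate afferent inhibition; values moving toward 1
    indicate a reduction of SAI, as reported here during movement initiation.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(conditioned_meps) / mean(unconditioned_meps)

# Example: conditioned MEPs roughly half the unconditioned amplitude
ratio = sai_ratio([0.4, 0.5, 0.6], [1.0, 1.1, 0.9])
```

Comparing this ratio across muscles (FDI vs. ADM) and pre-movement phases is the kind of contrast the study reports.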

10.
Background

Writing is a sequential motor action based on sensorimotor integration in visuospatial and linguistic functional domains. To test the hypothesis of lateralized circuitry concerning spatial and language components involved in such action, we employed an fMRI paradigm including writing and drawing with each hand. In this way, writing-related contributions of dorsal and ventral premotor regions in each hemisphere were assessed, together with effects in wider distributed circuitry. Given a right-hemisphere dominance for spatial action, right dorsal premotor cortex dominance was expected in left-hand writing while dominance of the left ventral premotor cortex was expected during right-hand writing.

Methods

Sixteen healthy right-handed subjects were scanned during audition-guided writing of short sentences and simple figure drawing without visual feedback. Tapping with a pencil served as a basic control task for the two higher-order motor conditions. Activation differences were assessed with Statistical Parametric Mapping (SPM).

Results

Writing and drawing showed parietal-premotor and posterior inferior temporal activations in both hemispheres when compared to tapping. Drawing activations were rather symmetrical for each hand. Activations in left- and right-hand writing were left-hemisphere dominant, while right dorsal premotor activation only occurred in left-hand writing, supporting a spatial motor contribution of particularly the right hemisphere. Writing contrasted to drawing revealed left-sided activations in the dorsal and ventral premotor cortex, Broca's area, pre-Supplementary Motor Area and posterior middle and inferior temporal gyri, without parietal activation.

Discussion

The audition-driven postero-inferior temporal activations indicated retrieval of virtual visual form characteristics in writing and drawing, with additional activation concerning word form in the left hemisphere. Similar parietal processing in writing and drawing pointed at a common mechanism by which such visually formatted information is used for subsequent sensorimotor integration along a dorsal visuomotor pathway. In this, the left posterior middle temporal gyrus subserves phonological-orthographical conversion, dissociating dorsal parietal-premotor circuitry from perisylvian circuitry including Broca's area.

11.
It was previously shown that a small lesion in the primary somatosensory cortex (S1) prevented both cortical plasticity and sensory learning in the adult mouse visual system: While 3-month-old control mice continued to show ocular dominance (OD) plasticity in their primary visual cortex (V1) after monocular deprivation (MD), age-matched mice with a small photothrombotically induced (PT) stroke lesion in S1, positioned at least 1 mm anterior to the anterior border of V1, no longer expressed OD-plasticity. In addition, in the S1-lesioned mice, neither the experience-dependent increase of the spatial frequency threshold (“visual acuity”) nor of the contrast threshold (“contrast sensitivity”) of the optomotor reflex through the open eye was present. To assess whether these plasticity impairments can also occur if a lesion is placed more distant from V1, we tested the effect of a PT-lesion in the secondary motor cortex (M2). We observed that mice with a small M2-lesion restricted to the superficial cortical layers no longer expressed an OD-shift towards the open eye after 7 days of MD in V1 of the lesioned hemisphere. Consistent with previous findings about the consequences of an S1-lesion, OD-plasticity in V1 of the nonlesioned hemisphere of the M2-lesioned mice was still present. In addition, the experience-dependent improvements of both visual acuity and contrast sensitivity of the open eye were severely reduced. In contrast, sham-lesioned mice displayed both an OD-shift and improvements of the visual capabilities of their open eye. To summarize, our data indicate that even a very small lesion restricted to the superficial cortical layers and positioned more than 3 mm anterior to the anterior border of V1 compromised V1-plasticity and impaired learning-induced visual improvements in adult mice. Thus, both plasticity phenomena do not depend only on modality-specific and local nerve cell networks but are clearly influenced by long-range interactions, even from distant brain regions.

12.
The proper allocation of public health resources for research and control requires quantification of both a disease's current burden and the trend in its impact. Infectious diseases that have been labeled as “emerging infectious diseases” (EIDs) have received heightened scientific and public attention and resources. However, the label “emerging” is rarely backed by quantitative analysis and is often used subjectively. This can lead to over-allocation of resources to diseases that are incorrectly labeled “emerging,” and insufficient allocation of resources to diseases for which evidence of an increasing or high sustained impact is strong. We suggest a simple quantitative approach, segmented regression, to characterize the trends and emergence of diseases. Segmented regression identifies one or more trends in a time series and determines the most statistically parsimonious split(s) (or joinpoints) in the time series. These joinpoints indicate time points when a change in trend occurred and may identify periods in which the drivers of disease impact change. We illustrate the method by analyzing temporal patterns in incidence data for twelve diseases. This approach provides a way to classify a disease as currently emerging, re-emerging, receding, or stable based on temporal trends, as well as to pinpoint the time when the change in these trends happened. We argue that quantitative approaches to defining emergence based on the trend in impact of a disease can, with appropriate context, be used to prioritize resources for research and control. Implementing this more rigorous definition of an EID will require buy-in and enforcement from scientists, policy makers, peer reviewers and journal editors, but has the potential to improve resource allocation for global health.
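The joinpoint idea can be illustrated with a single-breakpoint segmented regression, fit by grid search over candidate joinpoints. This is a minimal sketch; the paper's analysis presumably uses dedicated joinpoint software with multiple breakpoints and significance testing:

```python
import numpy as np

def one_joinpoint(t, y):
    """Fit y as piecewise-linear in t with one breakpoint (continuous at
    the join), choosing the joinpoint that minimizes squared error."""
    best = None
    for k in range(2, len(t) - 2):  # candidate joinpoint indices
        tk = t[k]
        # Design: intercept, baseline slope, extra slope after the join (hinge)
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tk, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ beta - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, tk, beta)
    return best  # (sse, joinpoint, coefficients)

# Toy incidence series: stable burden, then "emerging" after t = 10
t = np.arange(20, dtype=float)
y = np.where(t < 10, 5.0, 5.0 + 2.0 * (t - 10))
sse, joint, beta = one_joinpoint(t, y)
print(joint)  # recovers the change in trend at t = 10
```

The sign of the post-join slope term then maps onto the paper's labels: positive for emerging, negative for receding, near zero for stable.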

13.

Background

In the continuum between a stroke and a circle including all possible ellipses, some eccentricities seem more “biologically preferred” than others by the motor system, probably because they imply less demanding coordination patterns. Based on the idea that biological motion perception relies on knowledge of the laws that govern the motor system, we investigated whether motorically preferential and non-preferential eccentricities are visually discriminated differently. In contrast with previous studies that were interested in the effect of kinematic/time features of movements on their visual perception, we focused on geometric/spatial features, and therefore used a static visual display.

Methodology/Principal Findings

In a dual-task paradigm, participants visually discriminated 13 static ellipses of various eccentricities while performing a finger-thumb opposition sequence with either the dominant or the non-dominant hand. Our assumption was that because the movements used to trace ellipses are strongly lateralized, a motor task performed with the dominant hand should affect the simultaneous visual discrimination more strongly. We found that visual discrimination was not affected when the motor task was performed by the non-dominant hand. Conversely, it was impaired when the motor task was performed with the dominant hand, but only for the ellipses that we defined as preferred by the motor system, based on an assessment of individual preferences during an independent graphomotor task.

Conclusions/Significance

Visual discrimination of ellipses depends on the state of the motor neural networks controlling the dominant hand, but only when their eccentricity is “biologically preferred”. Importantly, this effect emerges on the basis of a static display, suggesting that what we call “biological geometry”, i.e., geometric features resulting from preferential movements, is relevant information for the visual processing of bidimensional shapes.

14.

Introduction

The aim of the present study was to investigate how the speed of observed action affects the excitability of the primary motor cortex (M1), as assessed by the size of motor evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS).

Methods

Eighteen healthy subjects watched a video clip of a person catching a ball, played at three different speeds (normal-, half-, and quarter-speed). MEPs were induced by TMS when the model's hand had opened to the widest extent just before catching the ball (“open”) and when the model had just caught the ball (“catch”). These two events were locked to specific frames of the video clip (“phases”), rather than occurring at specific absolute times, so that they could easily be compared across different speeds. MEPs were recorded from the thenar (TH) and abductor digiti minimi (ADM) muscles of the right hand.

Results

The MEP amplitudes were higher when the subjects watched the video clip at low speed than when they watched the clip at normal speed. A repeated-measures ANOVA, with the factor VIDEO-SPEED, showed significant main effects. Bonferroni's post hoc test showed that the following MEP amplitude differences were significant: TH, normal vs. quarter; ADM, normal vs. half; and ADM, normal vs. quarter. Paired t-tests showed that the significant MEP amplitude differences between TMS phases under each speed condition were: TH, “catch” higher than “open” at quarter speed; ADM, “catch” higher than “open” at half speed.

Conclusions

These results indicate that the excitability of M1 was higher when the observed action was played at low speed. Our findings suggest that the action observation system became more active when the subjects observed the video clip at low speed, because the subjects could then recognize the elements of action and intention in others.

15.
When we observe a motor act (e.g. grasping a cup) done by another individual, we extract, according to how the motor act is performed and its context, two types of information: the goal (grasping) and the intention underlying it (e.g. grasping for drinking). Here we examined whether children with autistic spectrum disorder (ASD) are able to understand these two aspects of motor acts. Two experiments were carried out. In the first, one group of high-functioning children with ASD and one of typically developing (TD) children were presented with pictures showing hand-object interactions and asked what the individual was doing and why. In half of the “why” trials the observed grip was congruent with the function of the object (“why-use” trials); in the other half it corresponded to the grip typically used to move that object (“why-place” trials). The results showed that children with ASD have no difficulties in reporting the goals of individual motor acts. In contrast, they made several errors in the why task, with all errors occurring in the “why-place” trials. In the second experiment the same two groups of children saw pictures showing a hand-grip congruent with the object use, but within a context suggesting either the use of the object or its placement into a container. Here children with ASD performed as TD children did, correctly indicating the agent's intention. In conclusion, our data show that understanding others' intentions can occur in two ways: by relying on motor information derived from the hand-object interaction, and by using functional information derived from the object's standard use. Children with ASD have no deficit in the second type of understanding, while they have difficulties in understanding others' intentions when they have to rely exclusively on motor cues.

16.
We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. In Experiment 2, stimuli are presented for only ms and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3 we investigate several strategies to estimate the horizon in the computer and compare human with machine “behavior” for different image manipulations and image scene types.

17.
Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of “sameness” among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16–17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher who wishes to control the similarity of experimental stimuli according to an objective quantification of “sameness”.
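The scaling step itself is standard. A minimal classical (Torgerson) MDS sketch, assuming a symmetric dissimilarity matrix as input; the database described here was built with more elaborate procedures (spatial arrangement plus nonmetric fitting), so this is only a stand-in for the core idea:

```python
import numpy as np

def classical_mds(D, ndim=2):
    """Torgerson classical MDS: embed items in ndim dimensions so that
    Euclidean distances approximate the dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:ndim]      # largest eigenvalues first
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L

# Dissimilarities among 4 items with perfect 1-D structure (points on a line)
D = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
X = classical_mds(D, ndim=1)
```

For this toy input, the recovered inter-point distances reproduce the input dissimilarities exactly (up to sign and translation), which is the sense in which MDS coordinates "stand for" the similarity structure of a category.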

18.
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats at every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software that meets this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual spheroid, and outputs the results in two different spreadsheet forms for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis pipeline adapted for large numbers of images, providing high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core workstation. The graphical user interface (GUI) is designed for easy quality control, and users can manually override the computer's results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary “Manual Initialize” and “Hand Draw” tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software markedly reduces labor and speeds up the analysis process. Implementing it should help 3D tumor spheroids become a routine in vitro model for drug screens in industry and academia.
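The abstract does not state the exact volume formula SpheroidSizer applies to the measured axes. A common approximation, shown here purely as an assumption, treats the spheroid as a prolate ellipsoid of revolution about its major axis:

```python
import math

def spheroid_volume(major_len, minor_len):
    """Volume of a prolate spheroid from its full major and minor axial
    lengths (not semi-axes): V = (pi/6) * a * b**2.
    A standard approximation; not necessarily SpheroidSizer's own formula."""
    return math.pi / 6.0 * major_len * minor_len ** 2

# Sanity check: when a == b the shape is a sphere of diameter d,
# and (pi/6) * d**3 equals the familiar (4/3) * pi * r**3.
print(round(spheroid_volume(2.0, 2.0), 5))  # 4.18879
```

Given the major and minor axial lengths extracted by the active-contour fit, per-spheroid volumes in a results spreadsheet would be one such function call per row.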

19.
Changing the visible appearance of the body by use of a virtual reality system, a funny mirror, or binocular glasses has been reported to be helpful in the rehabilitation of pain. However, there are interindividual differences in the analgesic effect of changing the visual body image. Although the relationship between visual body appearance and the analgesic effect has not been clarified, we hypothesized that a negative body image associated with changing the visual body appearance causes these interindividual differences. We therefore investigated whether a negative body image associated with changes in the visual body appearance increased pain. Twenty-five healthy individuals participated in this study. To evoke a negative body image, we applied the rubber hand illusion. We created an “injured rubber hand” to evoke unpleasantness associated with pain, a “hairy rubber hand” to evoke unpleasantness associated with embarrassment, and a “twisted rubber hand” to evoke unpleasantness associated with deviation from the concept of normality. We also created a “normal rubber hand” as a control. The pain threshold was measured with a thermal stimulation device while the participant observed the rubber hand. Body ownership experiences were elicited by observation of the injured rubber hand and the hairy rubber hand, as well as the normal rubber hand. Participants felt more unpleasantness when observing the injured and hairy rubber hands than the normal and twisted rubber hands (p<0.001). The pain threshold was lower under the injured rubber hand condition than under the other conditions (p<0.001). We conclude that a negative body image associated with pain can increase pain sensitivity.

20.
Public concern over the environmental and public health impacts of the emerging contaminant class “microplastics” has recently prompted government agencies to consider mitigation efforts. Microplastics do not easily fit within traditional risk-based regulatory frameworks because their persistence and extreme diversity (of size, shape, and the chemical properties of sorbed chemicals) result in high levels of uncertainty in hazard and exposure estimates. Because of these complexities, addressing microplastics' impacts requires open collaboration between scientists, regulators, and policymakers. Here we describe ongoing international mitigation efforts, with California as a case study, and draw lessons from a similarly diverse and environmentally persistent class of emerging contaminants (per- and polyfluoroalkyl substances) that is already disrupting traditional regulatory paradigms. We discuss strategies to address challenges associated with developing health-protective regulations and policies related to microplastics, and suggest ways to maximize the impact of research.

Mounting concerns regarding the deleterious effects of plastic pollution have prompted recent government mitigation efforts, but microplastics challenge traditional risk-based regulatory frameworks due to their particle properties, diverse composition, and persistence. Using California as a case study, this Essay suggests strategies to address challenges with regulations, policies, and research, drawing parallels with a similar class of emerging contaminants (per- and polyfluoroalkyl substances).
