Similar Documents
20 similar documents found (search time: 15 ms)
1.
The foraging and nesting performance of bees can provide important information on bee health and is of interest for risk and impact assessment of environmental stressors. While radiofrequency identification (RFID) technology is an efficient tool increasingly used for the collection of behavioral data in social bee species such as honeybees, behavioral studies on solitary bees still largely depend on direct observations, which are very time‐consuming. Here, we present a novel automated methodological approach for individually and simultaneously tracking and analyzing the foraging and nesting behavior of numerous cavity‐nesting solitary bees. The approach consists of monitoring nesting units by video recording and automated analysis of the videos by machine learning‐based software. This Bee Tracker software consists of four trained deep learning networks that detect bees entering or leaving their nests and recognize the individual IDs on the bees' thoraxes and the IDs of their nests according to their positions in the nesting unit. The software identifies the nest of each nesting bee, which makes it possible to obtain individual‐based measures of reproductive success. Moreover, the software quantifies the number of cavities a female enters until she finds her nest, as a proxy of nest recognition, and it provides information on the number and duration of foraging trips. After training on 8 videos, each recording 24 nesting females, the software achieved 96% correct measurements of these parameters. The software can be adapted to various experimental setups by retraining it on a corresponding set of videos. The presented method allows large amounts of data to be collected efficiently on cavity‐nesting solitary bee species and represents a promising new tool for the monitoring and assessment of behavior and reproductive success under laboratory, semi‐field, and field conditions.
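The trip counts and durations described above can be derived from a simple event log; a minimal sketch in Python, assuming hypothetical (timestamp, bee ID, enter/exit) records rather than Bee Tracker's actual output format:

```python
from datetime import datetime

# Hypothetical event log as a tracker like the one described might emit:
# "exit" marks the start of a foraging trip, "enter" the return to the nest.
events = [
    (datetime(2023, 6, 1, 8, 0), "B07", "exit"),
    (datetime(2023, 6, 1, 8, 42), "B07", "enter"),
    (datetime(2023, 6, 1, 9, 5), "B07", "exit"),
    (datetime(2023, 6, 1, 10, 1), "B07", "enter"),
]

def trip_durations(events, bee_id):
    """Pair each exit with the next enter for one bee; return trip durations."""
    out, start = [], None
    for ts, bid, ev in sorted(events):
        if bid != bee_id:
            continue
        if ev == "exit":
            start = ts
        elif ev == "enter" and start is not None:
            out.append(ts - start)
            start = None
    return out

durations = trip_durations(events, "B07")
n_trips = len(durations)                                  # number of trips
total_min = sum(d.total_seconds() for d in durations) / 60  # total minutes foraging
```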

2.
A challenging goal for cognitive neuroscience researchers is to determine how mental representations are mapped onto patterns of neural activity. To address this problem, functional magnetic resonance imaging (fMRI) researchers have developed a large number of encoding and decoding methods. However, previous studies typically used rather limited stimulus representations, such as semantic labels and wavelet Gabor filters, and largely focused on voxel-based brain patterns. Here, we present a new fMRI encoding model that addresses this limitation by predicting the human brain's responses to free viewing of video clips. In this model, we represent the stimuli using a variety of visual features that are representative of the computer vision community, describing the global color distribution, local shape, spatial information, and motion contained in the videos, and we apply functional connectivity to model the brain activity patterns evoked by these video clips. Our experimental results demonstrate that brain network responses during free viewing of videos can be robustly and accurately predicted across subjects using visual features. Our study suggests the feasibility of approaching cognitive neuroscience questions through computational image/video analysis and provides a novel concept of using brain encoding as a test-bed for evaluating visual feature extraction.
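The core encoding idea, a learned mapping from stimulus features to measured responses, can be sketched with ridge regression on simulated data; all shapes, the penalty, and the features below are illustrative, not the paper's actual model:

```python
import numpy as np

# Simulate a feature matrix (time points x visual features) and a response
# time course generated from hidden weights plus noise.
rng = np.random.default_rng(0)
n_time, n_feat = 200, 10
X = rng.standard_normal((n_time, n_feat))           # stimulus features
w_true = rng.standard_normal(n_feat)
y = X @ w_true + 0.1 * rng.standard_normal(n_time)  # simulated brain response

# Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Prediction accuracy as the correlation between measured and predicted.
y_pred = X @ w_hat
r = np.corrcoef(y, y_pred)[0, 1]
```

On this low-noise toy problem the predicted and simulated responses correlate almost perfectly; with real fMRI data the fit would be evaluated on held-out stimuli.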

3.
The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on an optic flow analysis program that measures how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) and compared them to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations match the speed and velocity features of each display. Researchers need to ensure that animation and video stimuli share similar motion characteristics; this is a critical component of the future success of the video playback technique.

4.
Online educational videos have the potential to enhance undergraduate biology learning, for example by showcasing contemporary scientific research and providing content coverage. Here, we describe the integration of nine videos into a large‐enrollment (n = 356) introductory evolution and ecology course via weekly homework assignments. We predicted that videos featuring research stories from contemporary scientists could reinforce topics introduced in lecture and provide students with novel insights into the nature of scientific research. Using qualitative analysis of open‐ended written feedback from students on each video assigned throughout the term (n = 133–229 responses per video) and on end‐of‐quarter evaluations (n = 243), we identified common categories of student perspectives. All videos received more positive than negative comments, and all received comments indicating that students found them intellectually and emotionally stimulating, accessible, and relevant to course content. All videos also received comments indicating that some students found them intellectually unstimulating, though such comments were generally far less numerous than positive ones. Students responded positively to videos that incorporated at least one of the following: documentary‐style filming, very clear links to course content (especially hands‐on activities completed by the students), relevance to recent world events, clarity on difficult topics, and/or charismatic narrators or species. We discuss opportunities and challenges for the use of online educational videos in teaching ecology and evolution, and we provide guidelines instructors can use to integrate them into their courses.

5.
With the development and wide application of motion capture technology, captured motion data sets are becoming larger and larger. For this reason, an efficient retrieval method for motion databases is very important. Such a method needs an appropriate indexing scheme and an effective similarity measure that can organize the existing motion data well. In this paper, we present a hierarchical index structure for human motion and adopt a nonlinear method to segment motion sequences. Based on this, we extract motion patterns and then employ a fast similarity measure algorithm for motion pattern similarity computation to efficiently retrieve motion sequences. The experimental results show that the approach proposed in this paper is effective and efficient.
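The abstract does not specify the similarity measure; dynamic time warping (DTW) is a common generic choice for comparing motion patterns of unequal length and alignment, sketched here in plain Python as an illustration rather than the paper's own fast algorithm:

```python
def dtw(a, b):
    """DTW distance between two 1-D motion patterns (absolute-difference cost)."""
    n, m = len(a), len(b)
    inf = float("inf")
    # D[i][j] = cost of best alignment of a[:i] and b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a frame in a
                                 D[i][j - 1],      # skip a frame in b
                                 D[i - 1][j - 1])  # match both
    return D[n][m]

same = dtw([0, 1, 2, 3], [0, 1, 2, 3])       # identical patterns
shifted = dtw([0, 1, 2, 3], [0, 0, 1, 2, 3])  # same shape, warped in time
diff = dtw([0, 1, 2, 3], [3, 2, 1, 0])        # reversed pattern
```

DTW tolerates timing differences (the warped pattern still scores zero) while genuinely different motions score higher, which is the property a motion-retrieval similarity measure needs.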

6.
In vivo imaging using two-photon microscopy is an essential tool for exploring the dynamics of physiological events deep within biological tissues, over short or extended periods of time. The capabilities offered by this technology (e.g. high tissue penetrance, low toxicity) have opened a whole new era of investigation in modern biomedical research. However, the potential of this promising technique in tissues of living animals is greatly limited by intrinsic irregular movements caused by cardiac and respiratory cycles and by muscular and vascular tone. Here, we show real-time imaging of the brain, spinal cord, sciatic nerve and myenteric plexus of living mice using a new automated program, named Intravital_Microscopy_Toolbox, that removes frames corrupted by motion artifacts from time-lapse videos. Our approach involves generating a dissimilarity score against precalculated reference frames in a specific reference channel, thus allowing the gating of distorted, out-of-focus or translated frames. Since the algorithm detects the uneven peaks of image distortion caused by irregular animal movements, the macro allows fast and efficient filtering of the image sequence. In addition, extra features have been implemented in the macro, such as XY registration, channel subtraction, an extended field of view with maximum intensity projection, noise reduction with average intensity projections, and automated timestamp and scale bar overlay. Thus, the Intravital_Microscopy_Toolbox macro for ImageJ provides convenient tools for biologists performing in vivo two-photon imaging in tissues prone to motion artifacts.
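The frame-gating idea can be sketched in a few lines; the dissimilarity score below (mean absolute difference to a reference frame) and the threshold are illustrative stand-ins for whatever the ImageJ macro actually computes:

```python
import numpy as np

# Synthetic stack: four frames close to the reference, one "motion-corrupted".
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
frames = [ref + 0.01 * rng.standard_normal((64, 64)) for _ in range(5)]
frames[2] = rng.random((64, 64))  # unrelated content, as if distorted

def dissimilarity(frame, ref):
    """Mean absolute pixel difference against the reference frame."""
    return float(np.mean(np.abs(frame - ref)))

scores = [dissimilarity(f, ref) for f in frames]
thresh = 0.05                      # illustrative gating threshold
kept = [i for i, s in enumerate(scores) if s <= thresh]
```

Frames scoring above the threshold (here, index 2) are dropped; the surviving indices define the cleaned time-lapse sequence.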

7.
A swarm of the δ-proteobacterium Myxococcus xanthus contains millions of cells that act as a collective, coordinating movement through a series of signals to create complex, dynamic patterns in response to environmental cues. These patterns are self-organizing and emergent; they cannot be predicted by observing the behavior of the individual cells. Using a time-lapse microcinematography tracking assay, we identified a distinct emergent pattern in M. xanthus called chemotaxis, defined as the directed movement of a swarm up a nutrient gradient toward its source [1]. In order to efficiently characterize chemotaxis via time-lapse microcinematography, we developed a highly modifiable plate complex (Figure 1) and constructed a cluster of 8 microscopes (Figure 2), each capable of capturing time-lapse videos. The assay is rigorous enough to allow consistent replication of quantifiable data, and the resulting videos allow us to observe and track subtle changes in swarm behavior. Once captured, the videos are transferred to an analysis/storage computer with enough memory to process and store thousands of videos. The flexibility of this setup has proven useful to several members of the M. xanthus community.

8.
This paper presents a novel secure and robust steganographic technique in the compressed video domain, termed embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only intra frames for data embedding, the proposed EBBD technique hides information in both intra and inter frames. The information is embedded into a compressed video by manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD addresses two security concepts: data encryption and data concealing. During the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off between imperceptibility and payload compared with previous techniques, while ensuring minimal bitrate increase and negligible degradation of PSNR values.
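One ingredient, key-driven selection of candidate coefficients inside a block, can be illustrated generically; the LSB-parity embedding below is a simplified stand-in, not the paper's actual EBBD embedding rule:

```python
import random

def select_positions(key, count):
    """Pseudo-randomly pick candidate coefficient positions in an 8x8 block
    (64 coefficients in zig-zag order; index 0, the DC term, is skipped)."""
    rng = random.Random(key)  # same key -> same positions at the decoder
    return rng.sample(range(1, 64), count)

def embed(coeffs, bits, key):
    """Carry one secret bit per selected coefficient in its parity (LSB)."""
    out = list(coeffs)
    for pos, bit in zip(select_positions(key, len(bits)), bits):
        out[pos] = (out[pos] & ~1) | bit
    return out

def extract(coeffs, n_bits, key):
    """Re-derive the same positions from the key and read the parities back."""
    return [coeffs[pos] & 1 for pos in select_positions(key, n_bits)]

block = list(range(64))           # stand-in for one block of AC-QTCs
secret = [1, 0, 1, 1]
stego = embed(block, secret, key=42)
recovered = extract(stego, len(secret), key=42)
```

Without the key an attacker cannot know which coefficients carry payload, which is the security property the abstract attributes to the pseudo-random selection.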

9.
The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor, H.264. However, encoding time has also increased several-fold, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder through efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a saliency feature based on human visual attention modelling and motion features based on phase correlation. The features are combined through a fusion process using a content-based adaptive weighted cost function to determine the region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates, aligned to the HEVC-recommended block-partitioning, to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes is motion-estimated and motion-compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences.
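The fusion-and-matching step can be sketched generically; the maps, the fixed weight, the mean threshold, and the two-entry codebook below are illustrative, not the paper's adaptive cost function or template set:

```python
import numpy as np

# Toy saliency and motion maps for one block (4x4 cells for readability).
rng = np.random.default_rng(0)
saliency = rng.random((4, 4))
motion = rng.random((4, 4))

# Weighted fusion, then threshold to a binary pattern marking dominant cells.
w = 0.6  # illustrative weight; the paper adapts this to content
fused = w * motion + (1 - w) * saliency
pattern = (fused > fused.mean()).astype(int)

# Match against a codebook of templates by Hamming distance; the matched
# template would index the subset of partitioning modes to evaluate.
codebook = {
    "no_split": np.zeros((4, 4), int),
    "full_split": np.ones((4, 4), int),
}
best = min(codebook, key=lambda k: int(np.sum(pattern != codebook[k])))
```

The payoff is that only the modes associated with `best` are motion-estimated, instead of exhaustively trying every partitioning the standard allows.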

10.
We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enable quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and to measure the forces at the sensory follicle that most underlie haptic perception.
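The quoted throughput figures are easy to check: 8 Mpx/s per CPU on 640 px × 352 px frames works out to roughly 35 frames per second, matching the abstract:

```python
# Frame rate implied by a pixel-throughput figure.
px_per_frame = 640 * 352   # pixels in one 640 px x 352 px frame
rate_px = 8_000_000        # 8 Mpx/s per CPU
fps = rate_px / px_per_frame  # ~35.5 processed frames per second
```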

11.
During initiation, the ribosome is tasked to efficiently recognize open reading frames (ORFs) for accurate and fast translation of mRNAs. A critical step is start codon recognition, which is modulated by initiation factors, mRNA structure, a Shine-Dalgarno (SD) sequence and the start codon itself. Within the Escherichia coli genome, we identified more than 50 annotated initiation sites harboring AUGUG or GUGUG sequence motifs that provide two canonical start codons, AUG and GUG, in immediate proximity. As these sites may challenge start codon recognition, we studied whether and how the ribosome is accurately guided to the designated ORF, with a special focus on the SD sequence as well as the adenine at the fourth coding sequence position (A4). Through in vitro and in vivo experiments, we characterized key requirements for unambiguous start codon recognition, but also discovered initiation sites that lead to the translation of both overlapping reading frames. Our findings corroborate the existence of an ambiguous translation initiation mechanism, implicating a multitude of so far unrecognized ORFs and translation products in bacteria.

12.
Time perception is defined as a subjective judgment of the elapsed time of an event. It can change according to both external and internal factors. There are two main paradigms of time perception: retrospective time perception (RTP) and prospective time perception (PTP). The two paradigms differ in whether the subject knows in advance that the passage of time is important in the given task. Since RTP studies are harder to conduct, they are far fewer than PTP studies. Thus, in the current study, both RTP and PTP paradigms are investigated. Time perception is also discussed in relation to the internal clock model and cognitive load. Emotional motion videos are used to create cognitive load and manipulate the internal clock. The results showed an effect of emotion on time perception. Another major finding is that shorter videos are perceived as longer whereas longer videos are perceived as shorter, in accordance with Vierordt's Law. However, there was no difference between the RTP and PTP paradigms. These results indicate that emotional videos change our internal clock, while the number of changes in a motion video creates cognitive load, disturbing time perception.

13.
Recent advances in understanding cultural ecosystem services (CES) using big data such as social media and other web archives have primarily identified relationships between specific indicators of CES and large-scale features of ecosystems, such as vegetation covers and types, ecosystem types, naturalness, and the proportion of areas designated as protected areas. Yet, we know little about how biodiversity and specific species contribute to the enhancement of CES. Here, we examined the factors influencing the number of views of YouTube videos displaying wild birds in nature as a direct indicator of CES related to aesthetic enjoyment, environmental education, and nature experience. We found that the presence of specific wild bird species (i.e., Streptopelia orientalis and Larvivora cyane) increased the number of views while controlling for confounding factors such as the length of the video and the number of days since uploading. We suggest that these species are widely recognized, positively perceived, presumably owing to their cultural significance, and preferred among viewers watching videos of wild birds, resulting in more views for videos including these species. Finally, we depicted the geographic distribution (on a national scale) of YouTube videos displaying wild birds in nature. Urban and agricultural land cover around the geotagged location of each video negatively affected the number of views, suggesting that over-exploitation of ecosystems may lead to the loss of important CES. Our study thus demonstrates the contributions of specific wild bird species to enhancing the CES related to aesthetic enjoyment, environmental education, and nature experience provided through online shared videos.

14.
It is increasingly clear that we extract patterns of temporal regularity between events to optimize information processing. The ability to extract temporal patterns and regularities of events is referred to as temporal expectation. Temporal expectation activates the same cerebral network usually engaged in action selection, comprising the cerebellum. However, it is unclear whether the cerebellum is directly involved in temporal expectation when timing information is processed to make predictions about the outcome of a motor act. Healthy volunteers received one session of either active (inhibitory, 1 Hz) or sham repetitive transcranial magnetic stimulation over the right lateral cerebellum prior to the execution of a temporal expectation task. Subjects were asked to predict the end of a visually perceived human body motion (right-hand handwriting) and of an inanimate object motion (a moving circle reaching a target). Videos representing the movements were first shown in full; the actual tasks consisted of watching the same videos interrupted, after a variable interval from onset, by a dark interval of variable duration. During the 'dark' interval, subjects were asked to indicate when the movement represented in the video reached its end by pressing the spacebar of the keyboard. Performance on the timing task was analyzed by measuring the absolute value of the timing error, the coefficient of variability and the percentage of anticipation responses. The active group exhibited a greater absolute timing error compared with the sham group only in the human body motion task. Our findings suggest that the cerebellum is engaged in cognitive and perceptual domains that are strictly connected to motor control.
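The three performance measures named above are straightforward to compute; a sketch on hypothetical timing errors (response time minus true endpoint, in milliseconds):

```python
# Hypothetical per-trial timing errors; negative = responded before the
# movement's true end (an anticipation response).
errors = [120, -80, 40, -200, 60]

# 1) Mean absolute timing error.
abs_errors = [abs(e) for e in errors]
mean_abs_error = sum(abs_errors) / len(abs_errors)

# 2) Coefficient of variability of the absolute errors (SD / mean).
sd = (sum((x - mean_abs_error) ** 2 for x in abs_errors) / len(abs_errors)) ** 0.5
coeff_var = sd / mean_abs_error

# 3) Percentage of anticipation responses.
pct_anticipation = 100 * sum(e < 0 for e in errors) / len(errors)
```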

15.
In this paper a tool, MENU, is presented, with which demonstration packages can be easily constructed. The teacher designs the set-up of the package by editing a demonstration specification file, containing commands to MENU to display frames to the end-user or to execute tasks, as well as the text of the frames. The text contains explanations for the end-user together with the options they can choose. MENU ensures that the corresponding actions are executed. Two image analysis packages, one about CT and one about gated cardiac bloodpool scintigraphy, are presented as examples of the use of MENU. It is concluded that with MENU, (existing) programs can be assembled into packages very easily and efficiently. MENU proves to be a worthwhile tool for educational purposes.

16.
Neuroethological experiments often require video images of animal behavior and recordings of physiological data to be acquired simultaneously, synchronized with each other, stored, and analyzed together. The use of inexpensive multimedia computers offers new possibilities for mixing video images, analog voltages, and computer data, storing these combined signals to videotape, and extracting quantitative data for analysis. In this paper, we summarize methods for mixing images from multiple video cameras and a Macintosh computer display to facilitate manipulation of data generated during our neurophysiological and behavioral research. These technologies enhance accuracy, speed, and flexibility during experiments, and facilitate selecting and extracting quantitative data from the videotape for further analysis. Three applications are presented: (A) we used an analog video mixer to synchronize neurophysiological recordings with ongoing behaviors of freely moving rats; (B) we used a chroma keyed digital overlay to generate positional data for the rat's face during drinking behavior; and (C) we combined a computer model of a rat's head and whiskers with videos of exploratory behaviors to better track and quantify movements in three dimensions. Although the applications described here are specific to our neuroethological work, these methods will be useful to anyone wishing to combine the signals from multiple video sources into a single image or to extract series of positional or movement data from video frames without frame grabbing.

17.
18.
SD Kelly, BC Hansen, DT Clark. PLoS ONE 2012, 7(8): e42620
Co-speech hand gestures influence language comprehension. The present experiment explored which part of the visual processing system is optimized for processing these gestures. Participants viewed short video clips of speech and gestures (e.g., a person saying "chop" or "twist" while making a chopping gesture) and had to determine whether the two modalities were congruent or incongruent. Gesture videos were designed to stimulate the parvocellular or magnocellular visual pathways by filtering out low or high spatial frequencies (HSF versus LSF) at two levels of degradation severity (moderate and severe). Participants were less accurate and slower at processing gesture and speech at severe versus moderate levels of degradation. In addition, they were slower for LSF versus HSF stimuli, and this difference was most pronounced in the severely degraded condition. However, exploratory item analyses showed that the HSF advantage was modulated by the range of motion and the amount of motion energy in each video. The results suggest that hand gestures exploit a wide range of spatial frequencies and that, depending on which frequencies carry the most motion energy, the parvocellular or magnocellular visual pathways are used to quickly and optimally extract meaning.
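The spatial-frequency filtering used to create LSF and HSF stimuli can be sketched with a radial mask in the Fourier domain; the random image and the cutoff below are illustrative:

```python
import numpy as np

# A stand-in 64x64 grayscale frame.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Move to the Fourier domain with the DC component centered.
f = np.fft.fftshift(np.fft.fft2(img))

# Radial frequency of every coefficient, measured from the center.
yy, xx = np.mgrid[-32:32, -32:32]
radius = np.hypot(yy, xx)
cutoff = 8.0  # illustrative cutoff in cycles/image

# LSF keeps only frequencies inside the cutoff; HSF keeps the rest.
lsf = np.real(np.fft.ifft2(np.fft.ifftshift(f * (radius <= cutoff))))
hsf = np.real(np.fft.ifft2(np.fft.ifftshift(f * (radius > cutoff))))

# Sanity check: the two masks partition the spectrum, so LSF + HSF
# reconstructs the original image up to floating-point error.
recon_err = float(np.max(np.abs(img - (lsf + hsf))))
```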

19.
The use of miniaturized video cameras to study the at‐sea behavior of flying seabirds has increased in recent years. These cameras allow researchers to record several behaviors that could not previously be observed. However, video recorders produce large amounts of data, and videos can be time‐consuming to analyze. We present a new technique using open‐source software to extract bank angles from bird‐borne video footage. Bank angle is a key facet of dynamic soaring, which allows albatrosses and petrels to efficiently search vast areas of ocean for food. Miniaturized video cameras were deployed on 28 Wandering Albatrosses (Diomedea exulans) on Marion Island (one of the two Prince Edward Islands) from 2016 to 2018. The OpenCV library for the Python programming language was used to extract the angle of the horizon relative to the bird's body (= bank angle) from footage of the birds in flight, using a series of steps focused on edge detection. The extracted angles were not significantly different from angles measured manually by three independent observers, validating the method for measuring bank angles. Image quality, high wind speeds, and sunlight all influenced the accuracy of angle estimates, but post‐processing eliminated most of these errors. Birds flew most often with crosswinds (58%) and tailwinds (39%), resulting in skewed distributions of bank angles when birds turned into the wind more often. Higher wind speeds resulted in more extreme bank angles (the maximum observed was 94°). We present a novel method for measuring postural data from seabirds that can be used to describe the fine‐scale movements of the dynamic‐soaring cycle. Birds appeared to alter their bank angle in response to varying wind conditions to counter the wind drift associated with the prevailing westerly winds in the Southern Ocean. These data, in combination with fine‐scale positional data, may lead to new insights into dynamic‐soaring flight.
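The final step, recovering a bank angle from detected horizon edge points, can be sketched as a line fit; the synthetic points below stand in for the edge-detection output the pipeline would produce:

```python
import numpy as np

# Synthetic horizon edge points (x, y in image coordinates) generated from a
# known tilt, standing in for real edge-detection output.
true_angle_deg = 20.0
x = np.linspace(0, 100, 50)
y = np.tan(np.radians(true_angle_deg)) * x + 5.0  # a tilted "horizon" line

# Fit a straight line through the edge points and convert its slope to the
# horizon angle relative to the frame, i.e. the bank angle.
slope, _ = np.polyfit(x, y, 1)
bank_angle = float(np.degrees(np.arctan(slope)))
```

A robust fit (e.g. discarding outlier edge points first) would matter on real frames with glare or spray; the exact-fit case here just verifies the geometry.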

20.
Summer diets are crucial for large herbivores in the subarctic and are affected by weather, harassment from insects and a variety of environmental changes linked to climate. Yet, understanding the foraging behavior and diet of large herbivores is challenging in the subarctic because of their remote ranges. We used GPS video‐camera collars to observe the behaviors and summer diets of the migratory Fortymile Caribou Herd (Rangifer tarandus granti) across Alaska, USA and the Yukon, Canada. First, we characterized caribou behavior. Second, we tested whether videos could be used to quantify changes in the probability of eating events. Third, we estimated summer diets at the finest taxonomic resolution possible through videos. Finally, we compared summer diet estimates from video collars to microhistological analysis of fecal pellets. We classified 18,134 videos from 30 female caribou over two summers (2018 and 2019). Caribou behaviors included eating (mean = 43.5%), ruminating (25.6%), travelling (14.0%), stationary awake (11.3%) and napping (5.1%). Eating was restricted by insect harassment. We classified the forage(s) consumed in 5,549 videos; monthly diet composition highlighted a strong tradeoff between lichens and shrubs, with shrubs dominating diets in June and July when lichen use declined. We identified 63 species, 70 genera and 33 family groups of summer forages from the videos. After adjusting for digestibility, monthly estimates of diet composition were strongly correlated at the scale of the forage functional type (i.e., forage groups composed of forbs, graminoids, mosses, shrubs and lichens; r = 0.79, p < .01). Using video collars, we identified (1) a pronounced tradeoff in summer foraging between lichens and shrubs and (2) the costs of insect harassment on eating. Understanding caribou foraging ecology is needed to plan for their long‐term conservation across the circumpolar north, and video collars can provide a powerful approach across remote regions.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号