Similar Articles
20 similar articles found (search time: 62 ms)
1.
This paper investigates image processing and pattern recognition techniques to estimate atmospheric visibility from the visual content of images taken by off-the-shelf cameras. We propose a prediction model that first relates image contrast, measured through standard image processing techniques, to atmospheric transmission, which is in turn related to the most common measure of atmospheric visibility, the coefficient of light extinction. The regression model is learned from a training set of images and corresponding light extinction values measured with a transmissometer. The major contributions of this paper are twofold. First, we propose two predictive models that incorporate multiple scene regions into the estimation: regression trees and multivariate linear regression. Incorporating multiple regions is important because regions at different distances are effective for estimating light extinction under different visibility regimes. The second major contribution is a semi-supervised learning framework that incorporates unlabeled training samples to improve the learned models. Leveraging unlabeled data is important because in many applications it is easier to obtain observations than to label them. We evaluate our models using a dataset of images and ground-truth light extinction values from a visibility camera system in Phoenix, Arizona.
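The semi-supervised idea above (train on labeled contrast/extinction pairs, then fold in unlabeled images) can be sketched as a minimal self-training loop. Everything below is a synthetic stand-in: the three "region contrast" features, the linear model, and the single pseudo-labeling pass are illustrative, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: contrast measured in 3 scene regions (features)
# and a light-extinction coefficient (target). Names are illustrative only.
true_w = np.array([0.8, -0.5, 0.3])
X_labeled = rng.normal(size=(30, 3))
y_labeled = X_labeled @ true_w + rng.normal(scale=0.01, size=30)
X_unlabeled = rng.normal(size=(200, 3))

def fit_linear(X, y):
    """Ordinary least-squares multivariate linear regression."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Step 1: fit on the labeled set only.
w0 = fit_linear(X_labeled, y_labeled)

# Step 2: pseudo-label the unlabeled pool and refit on the union
# (a minimal self-training loop, one flavor of semi-supervised learning).
y_pseudo = X_unlabeled @ w0
w1 = fit_linear(np.vstack([X_labeled, X_unlabeled]),
                np.concatenate([y_labeled, y_pseudo]))
```

In a real system the pseudo-labeling would be iterated and filtered by confidence; a single pass suffices to show the mechanics.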

2.
We propose a novel methodology for predicting human gait kinematics using a statistical and stochastic approach, Gaussian process regression (GPR). We selected 14 body parameters that significantly affect the gait pattern and 14 joint motions that represent gait kinematics. Body parameters and gait kinematics were recorded from 113 subjects by anthropometric measurement and a motion capture system. From this database we built a GPR model mapping body parameters to gait kinematics and validated it by cross-validation. The resulting stochastic function not only produces trajectories for the joint motions associated with gait kinematics, but also estimates the associated uncertainties. Our approach yields a novel, low-cost, subject-specific method for predicting gait kinematics that requires only the subject's body parameters as input, and it enables a comprehensive understanding of the correlation and uncertainty between body parameters and gait kinematics.
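The core of GPR, a posterior mean (the predicted trajectory) plus a posterior variance (the uncertainty), can be written in a few lines of NumPy. This is a generic one-dimensional sketch, not the paper's 14-parameter model; the kernel, length scale, and data are all illustrative.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential (RBF) kernel matrix between row-vector inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-4, length_scale=0.2):
    """GP regression posterior mean and variance at X_test."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train, length_scale)
    K_ss = rbf_kernel(X_test, X_test, length_scale)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

# Illustrative stand-in: one body parameter mapped to one joint-angle summary;
# the paper's model maps 14 body parameters to 14 joint motions.
X_train = np.linspace(0, 1, 8)[:, None]
y_train = np.sin(2 * np.pi * X_train[:, 0])
mean, var = gp_predict(X_train, y_train, X_train)
```

At the training inputs the posterior mean reproduces the data and the posterior variance collapses toward the noise level; away from data the variance grows, which is exactly the uncertainty estimate the abstract refers to.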

3.
Body pose reconstruction (BPR) techniques have been explored widely in gait studies, but no protocols have been developed for speed skating, and the peculiarities of the skating posture and technique mean those results do not transfer automatically to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics in speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight-segment body model, combined with a global optimization method with revolute joints at the knee and the lumbosacral joint and spherical joints elsewhere, is the most realistic model for inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-squares error method for the inverse dynamics. Reporting the BPR technique and the inverse dynamics method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR, and of up to 31% when a bottom-up inverse dynamics method was chosen instead of a least-squares error approach. Although these results concern speed skating, reporting the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies.

4.
Traditional techniques of human motion analysis use markers located on body articulations. The position of each marker is extracted from each image, and temporal and kinematic analysis is performed by matching these data against a reference model of the human body. However, because human skin is not rigidly linked to the skeleton, each movement displaces the markers and introduces uncertainty into the results. Moreover, such experiments are mostly conducted under restricted laboratory conditions. The aim of our project was to develop a new method for human motion analysis that needs only unsophisticated recording devices, avoids constraining the subject studied, and can be used in varied surroundings such as stadiums or gymnasiums. Our approach identifies and locates body parts in images, without markers, using a multi-modal sensor that combines intensity images from a video camera with depth images from a 3D sensor. Our goal in this design was to demonstrate the feasibility of the approach; in any case, the hardware we used could support automated motion analysis. We used a linked-segment model based on Winter's model, and applied our method not to a human subject but to a life-size articulated locomotion model. Our approach finds the posture of this articulated locomotion model in the image: a telemetric image segmentation yields an approximate correspondence between the linked-segment model position and the locomotion model position, and this posture is then refined by feeding the segmentation results into an intensity image segmentation algorithm. Several tests were conducted with video/telemetric images of the articulated model taken in an outdoor setting. This real, life-size model was equipped with movable joints which, in static positions, described two strides of a runner. With our fusion method, we obtained relevant limb identification and location for most postures.

5.
6.
Our nervous system continuously combines new information from our senses with information acquired throughout life. Numerous studies have found that human subjects manage this by integrating their observations with their previous experience (priors) in a way that is close to the statistical optimum. However, little is known about how the nervous system acquires or learns priors. Here we present results from experiments in which the underlying distribution of target locations in an estimation task was switched, manipulating the prior subjects should use. Our experimental design allowed us to measure a subject's evolving prior while they learned. We confirm that through extensive practice subjects learn the correct prior for the task. We found that subjects can rapidly learn the mean of a new prior, while the variance is learned more slowly and with a variable learning rate. In addition, we found that a Bayesian inference model could predict the time course of the observed learning while offering an intuitive explanation for the findings. The evidence suggests the nervous system continuously updates its priors to enable efficient behavior.
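The mean-learning result above can be illustrated with the textbook conjugate Normal-Normal update, where each new target location sharpens an observer's belief about the mean of the switched distribution. The numbers below are invented for illustration and are not taken from the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observer's prior belief about the target-location mean before the switch
# (illustrative values): mean mu with uncertainty tau2.
mu, tau2 = 0.0, 1.0
sigma2 = 0.25  # assumed known observation noise of each target draw

# The experimenter switches the true distribution: targets now come from N(2, 0.25).
for x in rng.normal(2.0, np.sqrt(sigma2), size=50):
    # Conjugate Normal-Normal update: precision-weighted average of prior and datum.
    tau2_new = 1.0 / (1.0 / tau2 + 1.0 / sigma2)
    mu = tau2_new * (mu / tau2 + x / sigma2)
    tau2 = tau2_new
```

After 50 trials the belief has shifted almost entirely to the new mean, mirroring the rapid mean learning reported above; modeling the slower variance learning would require placing a prior on the variance as well (e.g. a Normal-Inverse-Gamma model), which this sketch omits.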

7.
Recently, automated observation systems for animals using artificial intelligence have been proposed. In the wild, animals are difficult to detect and track automatically because of illumination changes and occlusions. Our study proposes a new approach to automatically detect and track wild Japanese macaques (Macaca fuscata) using deep learning and a particle filter algorithm. Macaque likelihood is derived through deep learning and used as the observation model in a particle filter to predict the macaques' position and size in an image. Using deep learning as the observation model both simplifies the observation model and improves the accuracy of the classifier. To evaluate our model, we investigated whether the algorithm could find the body regions of macaques in video recordings of free-ranging groups at Katsuyama, Japan. Experimental results showed that our method, with deep learning as the observation model, achieved higher tracking accuracy than a method using a support vector machine. More generally, our study will help researchers develop automatic observation systems for animals in the wild.
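A particle filter with a pluggable observation model can be sketched as follows. Here a simple Gaussian score stands in for the deep network's macaque likelihood, and the 1D constant-velocity target is a toy, not tracking data from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def likelihood(particles, observation, scale=0.5):
    """Stand-in observation model: a Gaussian score around the observation.
    In the paper, this role is played by a deep network's macaque score."""
    return np.exp(-0.5 * ((particles - observation) / scale) ** 2)

# Track a target moving at a constant 0.1 units per frame in 1D.
true_pos = 0.0
particles = rng.normal(0.0, 1.0, size=500)
weights = np.ones(500) / 500

for t in range(30):
    true_pos += 0.1
    obs = true_pos + rng.normal(0, 0.1)
    # Motion model: drift plus process noise.
    particles += 0.1 + rng.normal(0, 0.05, size=500)
    # Reweight by the observation model, then normalize.
    weights *= likelihood(particles, obs)
    weights /= weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(500, size=500, p=weights)
    particles, weights = particles[idx], np.ones(500) / 500

estimate = particles.mean()
```

The key design point the abstract makes is visible here: the observation model is a single scoring function, so any classifier output (an SVM margin or a deep network's confidence) can be dropped in without changing the rest of the filter.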

8.
A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.

9.
Estimating the positions of bones from optical motion capture data is a core challenge in human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT), and movement simulations in software packages such as OpenSim, are used to minimize soft tissue artifact and estimate skeletal position; however, different analysis methods may produce differing kinematic results, which could lead to differences in clinical interpretation, such as misclassification of normal or pathological gait. This study evaluated the differences in knee joint kinematics that result from calculating joint angles with various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least-squares approach in OpenSim applied to the experimental marker data, and the least-squares approach in OpenSim applied to the output of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim: RMS differences averaged nearly 5° for flexion/extension angles, with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the least-squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, the scaling of musculoskeletal models, and/or the placement of virtual markers within OpenSim.
Because different analysis methods can thus produce substantially different kinematic results, improved techniques that allow non-uniform scaling of generic models, to reflect subject-specific bone geometries and anatomical reference frames more accurately, may reduce differences between bone pose estimation techniques and allow comparison across gait analysis platforms.
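The least-squares pose fit at the heart of marker-based inverse kinematics can be illustrated with the Kabsch algorithm in 2D. This is a generic, unweighted sketch on invented marker coordinates; OpenSim's IK additionally weights markers and solves over articulated 3D models.

```python
import numpy as np

def fit_pose_2d(model_markers, observed_markers):
    """Least-squares rigid fit (rotation + translation) of model markers to
    observed markers via the Kabsch algorithm, in 2D for brevity."""
    mc = model_markers - model_markers.mean(0)
    oc = observed_markers - observed_markers.mean(0)
    # Cross-covariance between centered observed and model markers.
    U, _, Vt = np.linalg.svd(oc.T @ mc)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    t = observed_markers.mean(0) - R @ model_markers.mean(0)
    return R, t

# Invented test case: rotate a 4-marker "segment" by 30 degrees and translate it.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
observed = model @ R_true.T + np.array([0.5, -0.2])
R_est, t_est = fit_pose_2d(model, observed)
```

With noiseless markers the fit recovers the pose exactly; with soft-tissue artifact added to `observed`, the same routine returns the least-squares compromise, which is where the between-method offsets discussed above originate.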

10.
Learning how to allocate attention properly is essential for success at many categorization tasks. Advances in our understanding of learned attention are stymied by a chicken-and-egg problem: there are no theoretical accounts of learned attention that predict patterns of eye movements, making data collection difficult to justify, and there are not enough datasets to support the development of a rich theory of learned attention. The present work addresses this by reporting five measures of the overt allocation of attention across 10 category learning experiments: accuracy, the probability of fixating irrelevant information, the number of fixations to category features, the amount of change in the allocation of attention (using a new measure called Time Proportion Shift, TIPS), and a measure of the relationship between attention change and erroneous responses. These measures suggest that eye movements are not substantially connected to error in most cases and that aggregate trial-by-trial attention change is generally stable across a number of changing task variables. The data presented here provide a target for computational models that aim to account for changes in overt attentional behavior across learning.

11.
A new method is presented for measuring joint kinematics by optimally matching modeled trajectories of geometric surface models of bones to cine phase contrast (cine-PC) magnetic resonance imaging data. Incorporating the geometric bone models (GBMs) allows kinematics to be computed in coordinate systems placed relative to the full 3-D anatomy, and allows changes in articular contact locations and relative velocities during dynamic motion to be quantified, capabilities beyond those of the cine-PC-based techniques previously used to measure joint kinematics during activity. Cine-PC magnitude and velocity data are collected on a fixed image plane prescribed through a repetitively moved skeletal joint. The intersection of each GBM with a simulated image plane is calculated as the model moves along a computed trajectory, and cine-PC velocity data are sampled from the regions of the velocity images within this intersection. From the sampled velocity data, the instantaneous linear and angular velocities of a coordinate system fixed to the GBM are estimated, and integration of these velocities predicts updated trajectories. A moving validation phantom that produces motions and velocity data similar to those observed in an experiment on human knee kinematics was designed. This phantom was used to assess cine-PC rigid body tracking performance by comparing the kinematics of the phantom measured by this method with similar measurements made using a magnetic tracking system. Average differences between the two methods were 2.82 mm RMS for anterior/posterior tibial position and 2.63 deg RMS for axial rotation. An inter-trial repeatability study of human knee kinematics using the new method produced RMS differences in anterior/posterior tibial position and axial rotation of 1.44 mm and 2.35 deg. The performance of the method is concluded to be sufficient for the effective study of kinematic changes in knees caused by soft tissue injuries.

12.
Plants, the only natural source of oxygen, are among the most important resources for every species in the world, and proper identification of plants matters in many fields. Observing leaf characteristics is a popular approach, as leaves are easily available for examination, and researchers increasingly apply image processing techniques to identify plants from leaf images. In this paper, we propose a leaf image classification model, called BLeafNet, for plant identification, in which deep learning is combined with Bonferroni fusion learning. We first design five classification models using the ResNet-50 architecture, each with a different input: five variants of the leaf image, namely grayscale, RGB, and the three individual RGB channels (red, green, and blue). To fuse the five ResNet-50 outputs, we use the Bonferroni mean operator, as it better expresses the connectivity among the confidence scores and obtains better results than the individual models. We also propose a two-tier training method for properly training the end-to-end model. To evaluate the proposed model, we use the MalayaKew dataset, collected at the Royal Botanic Gardens, Kew, England, a very challenging dataset in which many leaves from different species have a very similar appearance, and we additionally evaluate the method on the Leafsnap and Flavia datasets. The results on these datasets confirm the superiority of the model, which outperforms many state-of-the-art models.
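A possible implementation of the Bonferroni mean operator used for score fusion is sketched below, in its standard p = q = 1 form; the five scores are invented stand-ins for the per-channel classifier confidences, not values from the paper.

```python
import numpy as np

def bonferroni_mean(x, p=1.0, q=1.0):
    """Bonferroni mean of a score vector:
    BM(x) = ( (1/(n(n-1))) * sum_{i != j} x_i^p * x_j^q )^(1/(p+q)).
    With p = q = 1 it aggregates all pairwise products x_i * x_j,
    rewarding mutually supporting confidence scores."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Sum over all ordered pairs, then remove the i == j diagonal terms.
    total = np.outer(x ** p, x ** q).sum() - (x ** (p + q)).sum()
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

# Fuse hypothetical confidence scores from the five per-input classifiers
# (grayscale, RGB, R, G, B).
scores = [0.9, 0.85, 0.8, 0.95, 0.7]
fused = bonferroni_mean(scores)
```

For a constant input the operator reduces to that constant, and because every score multiplies every other one, a single weak classifier drags the fused score down more than an arithmetic mean would, which is the "connectivity" property the abstract appeals to.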

13.
Haplotype inference has become an important part of human genetic data analysis because of its functional and statistical advantages over single-locus approaches in linkage disequilibrium mapping. Different statistical methods have been proposed for detecting haplotype-disease associations from unphased multi-locus genotype data, ranging from the early simple gene-counting method to recent work using generalized linear models. However, these methods are either confined to case-control designs or unable to yield unbiased point and interval estimates of haplotype effects. Building on the popular logistic regression model, we present a new approach to haplotype association analysis of human disease traits. Using a haplotype-based parameterization, our model infers the effects of specific haplotypes (point estimation) and constructs confidence intervals for haplotype risks (interval estimation). From the estimated parameters, the model calculates haplotype frequency conditional on the trait value for both discrete and continuous traits. Moreover, our model provides an overall significance level for the association between the disease trait and a group, or all, of the haplotypes. Because it maximizes the likelihood directly in haplotype estimation, our method also supports a computer simulation approach for correcting the significance level of individual haplotypes to adjust for multiple testing. By applying the model to an empirical dataset, we show that our method, based on the well-known logistic regression model, is a useful tool for haplotype association analysis of human disease traits.
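The logistic-regression backbone of this approach can be sketched with a plain Newton-Raphson (IRLS) fit on synthetic data, treating the per-individual count of one risk haplotype as a covariate. The phase-inference step that is central to the paper is omitted here, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative stand-in: disease status modeled from the count (0, 1, or 2)
# of a hypothetical risk haplotype per individual.
n = 2000
hap_count = rng.binomial(2, 0.3, size=n)   # copies of the risk haplotype
beta0, beta1 = -1.0, 0.8                   # true intercept and per-copy log-odds
p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * hap_count)))
disease = rng.random(n) < p

X = np.column_stack([np.ones(n), hap_count])
y = disease.astype(float)
w = np.zeros(2)
for _ in range(25):                        # Newton-Raphson (IRLS) iterations
    mu = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (y - mu)                  # score vector
    H = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
    w += np.linalg.solve(H, grad)

odds_ratio = np.exp(w[1])                  # per-copy haplotype risk estimate
```

The inverse of the final information matrix `H` gives the standard errors from which Wald-type confidence intervals for the haplotype effect (the paper's interval estimation) would be formed.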

14.
Hund L, Chen JT, Krieger N, Coull BA. Biometrics. 2012;68(3):849-858
Summary: Temporal boundary misalignment occurs when area boundaries shift over time (e.g., census tract boundaries change at each census year), complicating the modeling of temporal trends across space. Large area-level datasets with temporal boundary misalignment are becoming increasingly common in practice, yet the few existing approaches for temporally misaligned data do not account for correlation in spatial random effects over time. To overcome the issues associated with temporal misalignment, we construct a geostatistical model for aggregate count data by assuming that an underlying continuous risk surface induces spatial correlation between areas. We implement the model within the framework of a generalized linear mixed model using radial basis splines; with this approach, boundary misalignment becomes a non-issue. Additionally, this disease-mapping framework allows fast, easy model fitting via a penalized quasi-likelihood approximation to maximum likelihood estimation. We anticipate that the method will also be useful for large disease-mapping datasets for which fully Bayesian approaches are infeasible. We apply our method to assess socioeconomic trends in breast cancer incidence in Los Angeles between the periods 1988-1992 and 1998-2002.

15.
In this paper, we propose a genetic algorithm (GA) based approach to determining the pose of an object with three degrees of freedom in automated visual inspection. We investigated the effect on the estimated pose parameters of noise at 20 dB SNR, and of mismatches resulting from incorrect correspondences between object-space points and image-space points. The maximum error in the translation parameters is less than 0.45 cm and the rotational error is less than 0.2 degree at 20 dB SNR. The error in parameter estimation is insignificant for up to 7 mismatched pairs out of 24 points in object space, but grows sharply when 8 or more pairs are mismatched. We compared our results with those obtained by a least-squares technique; the comparison shows that the GA-based method outperforms the gradient-based technique when the object to be inspected has a small number of vertices. These results clearly establish the robustness of the GA in estimating the pose of an object with a small number of vertices in automated visual inspection.
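A minimal genetic algorithm over a three-degree-of-freedom pose can be sketched as below. The fitness function is a synthetic stand-in for the paper's reprojection error, and the selection, crossover, and mutation choices are generic, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy pose with three degrees of freedom (x, y, rotation), echoing the
# paper's setup; the target pose and fitness below are invented.
target = np.array([1.2, -0.7, 0.4])

def fitness(pose):
    """Negative squared error: higher is better. A real system would score
    the match between projected model points and image points instead."""
    return -np.sum((pose - target) ** 2)

pop = rng.uniform(-2, 2, size=(50, 3))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    # Selection: keep the best half of the population as parents.
    parents = pop[np.argsort(scores)[-25:]]
    # Crossover: average two random parents; mutation: small Gaussian noise.
    ia, ib = rng.integers(0, 25, size=(2, 50))
    pop = (parents[ia] + parents[ib]) / 2 + rng.normal(0, 0.05, size=(50, 3))

best = pop[np.argmax([fitness(p) for p in pop])]
```

Because the GA needs only fitness evaluations and no gradients, it tolerates the discontinuities introduced by mismatched point correspondences, which is the robustness property the abstract reports.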

16.
The utility of machine learning in understanding the motor system promises a revolution in how we collect, measure, and analyze data. The field of movement science already elegantly incorporates theory and engineering principles to guide experimental work, and in this review we discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback to its use in understanding neural correlates and untangling sensorimotor systems. We also give our perspective on new avenues, where markerless motion capture combined with biomechanical modeling and neural networks could provide a new platform for hypothesis-driven research.

17.
In this study, a model is presented for estimating the dynamics of the lower extremities during standing sway from force plate data alone. A three-dimensional, five-segment, four-joint model of the human body was used to describe postural sway dynamics. Force plate data on the reactive forces and centers of pressure were measured bilaterally. By applying the equations of motion to these data, the transversal trajectory of the body's center of gravity (CG) was resolved in the sagittal and coronal planes. An inverse kinematics algorithm was used to evaluate the kinematics of the body segments, the dynamics of the segments were then resolved using the Newton-Euler equations, and the model's estimated dynamic quantities for the distal segments were compared with those actually measured. Differences between modeled and measured dynamics were calculated and minimized using an iterative algorithm that re-estimates joint positions and anthropometric properties. The method was tested on a group of 11 able-bodied subjects, and the results indicated that the relative errors obtained in the final iteration were of the same order of magnitude as those reported for closed-loop problems in direct kinematics measurements of human gait. Received: 22 July 1997 / Accepted in revised form: 29 January 1998
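The first step of the model, recovering the CG trajectory from force-plate data via Newton's second law and double integration, can be sketched on synthetic sway data. All values below are invented stand-ins; the paper's full pipeline adds inverse kinematics and Newton-Euler segment dynamics on top of this.

```python
import numpy as np

m = 70.0    # assumed body mass [kg]
dt = 0.01   # force-plate sample interval [s]
t = np.arange(0, 2, dt)

# Fabricate a ground-truth sway trajectory x(t) = 0.01 * sin(2*pi*0.5*t) [m]
# and the shear force a force plate would report for it (F = m * a).
omega = 2 * np.pi * 0.5
x_true = 0.01 * np.sin(omega * t)
a_true = -0.01 * omega**2 * np.sin(omega * t)
F_x = m * a_true

# Newton's second law, then double trapezoidal integration with the known
# initial state (x(0) = 0, v(0) = 0.01 * omega).
a = F_x / m
v = np.concatenate([[0.0], np.cumsum((a[1:] + a[:-1]) / 2) * dt]) + 0.01 * omega
x = np.concatenate([[0.0], np.cumsum((v[1:] + v[:-1]) / 2) * dt])
```

In practice the initial conditions are unknown and integration drift accumulates, which is one reason the paper closes the loop with an iterative re-estimation step instead of trusting open-loop integration.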

18.
MOTIVATION: A promising and reliable approach to annotating gene function is clustering genes using not only gene expression data but also literature information, especially gene networks. RESULTS: We present a systematic method for gene clustering that combines these two very different types of data, focusing in particular on network modularity, a global feature of gene networks. Our method is based on learning a probabilistic model, which we call a hidden modular random field, in which the relations between hidden variables directly represent a given gene network. Our learning algorithm, which minimizes an energy function incorporating network modularity, is practically time-efficient despite using this global network property. We evaluated our method on a metabolic network and microarray expression data, varying the microarray datasets, the parameters of our model, and the gold standard clusters. Experimental results showed that our method outperformed four competing methods, including k-means and existing graph partitioning methods, with statistical significance in all cases. Further detailed analysis showed that our method could group a set of genes into a cluster corresponding to the folate metabolic pathway while the other methods could not. These results indicate that our method is highly effective for gene clustering and annotating gene function.

19.
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo-electron microscopy images has remained a difficult challenge to date, owing to the limited electron dose and low image contrast. These lead to a poor signal-to-noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high-contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov random fields (MRFs) to establish the correspondence of features in alignment, and robust optimization for projection model estimation. The resulting algorithm, Robust Alignment and Projection Estimation for Tomographic Reconstruction (RAPTOR), has required no manual intervention on the difficult datasets we have tried, and has provided sub-pixel alignment as good as the manual approach of an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo-electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

20.
Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species with an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, sizes, spacings, and resolutions. It adapts to a given problem by searching for the most effective combination of feature representation and learning strategy, resulting in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based "pollen spotting" model to segment pollen grains from the slide background. We next tested ARLO's ability to reconstruct black-to-white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that distinguished between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach generalizes to many other object recognition problems.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号