Similar Literature
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
Evidence of animal multimodal signalling is widespread and compelling. Dogs’ aggressive vocalisations (growls and barks) have been extensively studied, but without any consideration of the simultaneously produced visual displays. In this study we aimed to categorize dogs’ bimodal aggressive signals according to the redundant/non-redundant classification framework. We presented dogs with unimodal (audio or visual) or bimodal (audio-visual) stimuli and measured their gazing and motor behaviours. Responses did not qualitatively differ between the bimodal and two unimodal contexts, indicating that acoustic and visual signals provide redundant information. We could not further classify the signal as ‘equivalent’ or ‘enhancing’ as we found evidence for both subcategories. We discuss our findings in relation to the complex signal framework, and propose several hypotheses for this signal’s function.

2.
3.
Wavelet transform has been widely applied to extract characteristic information in spike sorting. Because the wavelet coefficients used to distinguish various spike shapes are often disorganized, effective unsupervised methods for selecting the most discriminative features are still lacking. In this paper, we propose an unsupervised feature selection method that employs kernel density estimation to select those wavelet coefficients with bimodal or multimodal distributions. The method is tested on a simulated spike data set, and the average misclassification rate after fuzzy C-means clustering is greatly reduced, which shows that this kernel density estimation-based feature selection approach is effective.
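A minimal sketch of this selection idea, assuming nothing about the authors' implementation: each wavelet coefficient's distribution is smoothed with a Gaussian kernel density estimate, and coefficients whose estimated density shows two or more modes are retained. The function names and the peak-counting heuristic are illustrative only.

# Sketch: KDE-based selection of wavelet coefficients with multimodal
# distributions (illustrative, not the paper's code).
import numpy as np
from scipy.stats import gaussian_kde

def count_kde_modes(values, grid_size=512):
    """Count local maxima of a Gaussian KDE fitted to one coefficient."""
    kde = gaussian_kde(values)
    grid = np.linspace(values.min(), values.max(), grid_size)
    density = kde(grid)
    # a grid point is a mode if it is higher than both of its neighbours
    is_peak = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
    return int(is_peak.sum())

def select_multimodal_coefficients(coeff_matrix, min_modes=2):
    """coeff_matrix: (n_spikes, n_coefficients) array of wavelet coefficients.
    Returns indices of coefficients whose KDE shows at least min_modes modes."""
    return [j for j in range(coeff_matrix.shape[1])
            if count_kde_modes(coeff_matrix[:, j]) >= min_modes]

# Synthetic check: column 0 is bimodal, column 1 unimodal.
rng = np.random.default_rng(0)
bimodal = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
unimodal = rng.normal(0.0, 1.0, 1000)
X = np.column_stack([bimodal, unimodal])
print(select_multimodal_coefficients(X))   # typically prints [0]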

4.
Background: It is assumed that different pain phenotypes are based on varying molecular pathomechanisms. Distinct ion channels seem to be associated with the perception of cold pain; in particular, TRPM8 and TRPA1 have been highlighted previously. The present study analyzed the distribution of cold pain thresholds, focusing on describing its multimodality, based on the hypothesis that this multimodality reflects the contribution of distinct ion channels. Methods: Cold pain thresholds (CPT) were available from 329 healthy volunteers (aged 18–37 years; 159 men) enrolled in previous studies. The distribution of the pooled and log-transformed threshold data was described using a kernel density estimation (Pareto density estimation, PDE), and the log data were subsequently modeled as a mixture of Gaussian distributions using the expectation maximization (EM) algorithm to optimize the fit. Results: CPTs were clearly multimodally distributed. Fitting a Gaussian mixture model (GMM) to the log-transformed threshold data revealed that the best fit is obtained with a three-component distribution. The modes of the three identified Gaussians, retransformed from the log domain to the mean stimulation temperatures at which the subjects had indicated pain thresholds, were 23.7 °C, 13.2 °C and 1.5 °C for Gaussians #1, #2 and #3, respectively. Conclusions: The locations of the first and second Gaussians were interpreted as reflecting the contribution of two different cold sensors. From the calculated locations of the modes of the first two Gaussians, the hypothesis of an involvement of TRPM8, sensing temperatures from 25–24 °C, and TRPA1, sensing cold from 17 °C, can be derived. In that case, subjects belonging to either Gaussian would possess a dominance of one or the other receptor at the skin area where the cold stimuli had been applied. The findings therefore support the suitability of complex analytical approaches for detecting mechanistically determined patterns in pain phenotype data.
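A minimal sketch of the mixture-modeling step on synthetic thresholds: the data are log-transformed and Gaussian mixtures with one to four components are fitted by EM, the number of components being chosen here by BIC (an assumption for illustration; the study only states that EM was used to optimize the fit).

# Sketch: selecting and fitting a Gaussian mixture on log-transformed
# cold pain thresholds (synthetic data, illustrative settings).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
thresholds_celsius = np.concatenate([
    rng.normal(23.7, 2.0, 180),
    rng.normal(13.2, 3.0, 100),
    rng.normal(4.0, 2.0, 49),
])
x = np.log(np.clip(thresholds_celsius, 0.5, None)).reshape(-1, 1)

bic = {}
for k in range(1, 5):
    bic[k] = GaussianMixture(n_components=k, n_init=10,
                             random_state=0).fit(x).bic(x)
best_k = min(bic, key=bic.get)
best = GaussianMixture(n_components=best_k, n_init=10, random_state=0).fit(x)

# retransform the component means from the log domain back to temperature
modes_celsius = np.sort(np.exp(best.means_.ravel()))[::-1]
print("components:", best_k, "modes (deg C):", modes_celsius.round(1))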

5.
A continuous distribution approach, instead of the traditional mono- and multiexponential analysis, for determining quencher concentration in a heterogeneous system has been developed. A mathematical model of phosphorescence decay inside a volume with homogeneous concentration of phosphor and heterogeneous concentration of quencher was formulated to obtain pulse-response fitting functions for four different distributions of quencher concentration: rectangular, normal (Gaussian), gamma, and multimodal. The analysis was applied to parameter estimates of a heterogeneous distribution of oxygen tension (PO2) within a volume. Simulated phosphorescence decay data were randomly generated for different distributions and heterogeneity of PO2 inside the excitation/emission volume, consisting of 200 domains, and then fit with equations developed for the four models. Analysis using a monoexponential fit yielded a systematic error (underestimate) in mean PO2 that increased with the degree of heterogeneity. The fitting procedures based on the continuous distribution approach returned more accurate values for parameters of the generated PO2 distribution than did the monoexponential fit. The parameters of the fit (M = mean; sigma = standard deviation) were investigated as a function of signal-to-noise ratio (SNR = maximum signal amplitude/peak-to-peak noise). The best-fit parameter values were stable when SNR ≥ 20. All four fitting models returned accurate values of M and sigma for different PO2 distributions. The ability of our procedures to resolve two different heterogeneous compartments was also demonstrated using a bimodal fitting model. An approximate scheme was formulated to allow calculation of the first moments of a spatial distribution of quencher without specifying the distribution. In addition, a procedure for the recovery of a histogram, representing the quencher concentration distribution, was developed and successfully tested.
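The sketch below illustrates the underlying idea rather than the paper's implementation: decay from 200 domains whose PO2 follows a Gaussian distribution is simulated through Stern-Volmer quenching (1/tau = 1/tau0 + kq*PO2) and then fitted with both a monoexponential model and a Gaussian-distribution model. The lifetime and quenching constants are illustrative assumptions.

# Sketch: monoexponential vs. Gaussian-distribution fit of phosphorescence
# decay from a heterogeneous volume (illustrative constants, not calibrated).
import numpy as np
from scipy.optimize import curve_fit

TAU0 = 600e-6      # unquenched lifetime (s), assumed
KQ = 300.0         # quenching constant (1/(s*mmHg)), assumed

def decay_mono(t, amp, po2):
    return amp * np.exp(-t * (1.0 / TAU0 + KQ * po2))

def decay_gaussian(t, amp, mean_po2, sigma_po2):
    """Decay from a Gaussian PO2 distribution, discretized on a grid."""
    po2 = np.linspace(max(mean_po2 - 4 * sigma_po2, 0.0),
                      mean_po2 + 4 * sigma_po2, 200)
    w = np.exp(-0.5 * ((po2 - mean_po2) / sigma_po2) ** 2)
    w /= w.sum()
    rates = 1.0 / TAU0 + KQ * po2
    return amp * (w[None, :] * np.exp(-np.outer(t, rates))).sum(axis=1)

# simulate a heterogeneous volume of 200 domains with Gaussian PO2, add noise
rng = np.random.default_rng(2)
t = np.linspace(0.0, 3e-3, 400)
true_po2 = rng.normal(40.0, 15.0, 200).clip(min=0.0)
signal = np.exp(-np.outer(t, 1.0 / TAU0 + KQ * true_po2)).mean(axis=1)
signal += rng.normal(0.0, 0.005, t.size)

p_mono, _ = curve_fit(decay_mono, t, signal, p0=[1.0, 30.0])
p_gauss, _ = curve_fit(decay_gaussian, t, signal, p0=[1.0, 30.0, 10.0],
                       bounds=([0.0, 0.0, 1.0], [10.0, 200.0, 100.0]))
print("monoexponential PO2 estimate:", round(p_mono[1], 1))
print("Gaussian model mean, sigma:  ", p_gauss[1:].round(1))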

6.
The median regression function is defined and demonstrated by examples. A lemma with sufficient conditions for continuity and differentiability of the median regression function is proved. Its estimation from a random sample is derived, first based on the empirical distribution function and second based on a kernel estimation with Gaussian kernels. Both estimations are demonstrated with Galton's historical example. A comparison between the empirical median regression function and the empirical regression of the first kind is made by an example. A hint for curve fitting with the median estimation equation is also given by an example.
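A minimal kernel-based sketch of the second estimator mentioned above (not the paper's exact formulation): the conditional median at a point is taken as the weighted median of the responses, with Gaussian kernel weights centred on that point.

# Sketch: kernel-weighted median regression (illustrative).
import numpy as np

def kernel_median_regression(x_grid, x, y, bandwidth):
    """At each grid point, return the weighted median of y with Gaussian
    kernel weights centred on that point."""
    est = np.empty(len(x_grid))
    order = np.argsort(y)
    y_sorted = y[order]
    for k, x0 in enumerate(x_grid):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        cw = np.cumsum(w[order]) / w.sum()
        est[k] = y_sorted[np.searchsorted(cw, 0.5)]
    return est

# skewed noise makes the median regression differ from the mean regression
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 10.0, 400)
y = np.sin(x) + rng.exponential(0.5, 400)
grid = np.linspace(0.0, 10.0, 50)
median_fit = kernel_median_regression(grid, x, y, bandwidth=0.5)
print(median_fit[:5].round(2))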

7.
Strategies to minimize dengue transmission commonly rely on vector control, which aims to maintain Aedes aegypti density below a theoretical threshold. Mosquito abundance is traditionally estimated from mark-release-recapture (MRR) experiments, which lack proper analysis regarding accurate vector spatial distribution and population density. Recently proposed strategies to control vector-borne diseases involve replacing the susceptible wild population with genetically modified individuals refractory to infection by the pathogen. Accurate measurements of mosquito abundance in time and space are required to optimize the success of such interventions. In this paper, we present a hierarchical probabilistic model for the estimation of population abundance and spatial distribution from typical mosquito MRR experiments, with direct application to the planning of these new control strategies. We perform a Bayesian analysis using the model and data from two MRR experiments performed in a neighborhood of Rio de Janeiro, Brazil, during both low- and high-dengue transmission seasons. The hierarchical model indicates that the mosquito spatial distribution is clustered during the winter (0.99 mosquitoes/premise, 95% CI: 0.80–1.23) and more homogeneous during the high-abundance period (5.2 mosquitoes/premise, 95% CI: 4.3–5.9). The hierarchical model also performed better than the commonly used Fisher-Ford method when using simulated data. The proposed model provides a formal treatment of the sources of uncertainty associated with the estimation of mosquito abundance imposed by the sampling design. Our approach is useful in strategies such as population suppression or the displacement of wild vector populations by refractory Wolbachia-infected mosquitoes, since the invasion dynamics have been shown to follow threshold conditions dictated by mosquito abundance. The presence of spatially distributed abundance hotspots is also formally addressed under this modeling framework, and knowledge of such hotspots is deemed crucial for predicting the fate of transmission control strategies based on the replacement of vector populations.
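For orientation only, the snippet below computes the classical closed-population Lincoln-Petersen abundance estimate (with Chapman's correction) from a single MRR release; it is far simpler than the hierarchical Bayesian model proposed in the paper, and the counts are invented.

# Sketch: Chapman-corrected Lincoln-Petersen estimator (illustrative counts).
def chapman_estimate(marked_released, captured, recaptured):
    n_hat = (marked_released + 1) * (captured + 1) / (recaptured + 1) - 1
    var = ((marked_released + 1) * (captured + 1)
           * (marked_released - recaptured) * (captured - recaptured)
           / ((recaptured + 1) ** 2 * (recaptured + 2)))
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(marked_released=500, captured=320, recaptured=35)
print(f"estimated abundance: {n_hat:.0f} (SE {se:.0f})")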

8.
The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Explicitly applying a preconditioner to a coefficient matrix is the simplest method of estimation. However, this is not feasible for large-scale computing, especially if the computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its sensitivity to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager’s method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei’s matrix. As a result, the Lanczos connection method yields errors of around 10% even for a simple problem, whereas the new method yields negligible errors. In addition, the newly developed method returns reasonable solutions where the Lanczos connection method fails, namely for Pei’s matrix and for matrices generated with the finite element method.
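A generic, textbook-style sketch of the Hager idea on which the new method is built (not the paper's parallel implementation): ||A^-1||_1 is estimated from a handful of solves with A and its transpose, so neither the inverse nor a dense preconditioned matrix ever needs to be formed; for a preconditioned matrix, the solves would be replaced by applications of the preconditioned operator.

# Sketch: Hager-style 1-norm condition number estimation (illustrative).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def hager_inverse_norm1(lu_piv, n, max_iter=10):
    """Estimate ||A^-1||_1 using only solves with A and A^T."""
    x = np.full(n, 1.0 / n)
    estimate = 0.0
    for _ in range(max_iter):
        y = lu_solve(lu_piv, x)             # y = A^-1 x
        estimate = np.abs(y).sum()
        xi = np.sign(y)
        xi[xi == 0] = 1.0
        z = lu_solve(lu_piv, xi, trans=1)   # z = A^-T xi
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:           # stationary point: stop
            break
        x = np.zeros(n)
        x[j] = 1.0
    return estimate

# example: a tridiagonal test matrix
n = 200
A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
lu_piv = lu_factor(A)
norm1_A = np.abs(A).sum(axis=0).max()
cond_est = norm1_A * hager_inverse_norm1(lu_piv, n)
print("estimated 1-norm condition number:", round(cond_est))
print("exact 1-norm condition number:    ", round(np.linalg.cond(A, 1)))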

9.
Fast gating in time series of patch-clamp current demands powerful tools to reveal the rate constants of the adequate hidden Markov model. Here, two approaches are presented to improve the temporal resolution of the direct fit of the time series. First, the prediction algorithm is extended to include intermediate currents between the nominal levels as caused by the anti-aliasing filter. This approach can reveal rate constants that are about 4 times higher than the corner frequency of the anti-aliasing filter. However, it is restricted to time series with very low noise. Second, the direct fit of the time series is combined with a beta fit, i.e., a fit of the deviations of the amplitude histogram from the Gaussian distribution. Since the “theoretical” amplitude histograms for higher-order Bessel filters cannot be calculated by analytical tools, they are generated from simulated time series. In a first approach, a simultaneous fit of the time series and of the amplitude histogram (beta fit) is tested. This simultaneous fit, however, inherits the drawbacks of both approaches, not the benefits. More successful is a subsequent fit: the fit of the time series yields a set of rate constants; the subsequent beta fit uses the slow rate constants of the time-series fit as fixed parameters, and the optimization algorithm is restricted to the fast ones. The efficiency of this approach is illustrated by means of time series obtained from simulation and from the dominant K+ channel in Chara. This shows that temporal resolution can reach the microsecond range.

10.
Species sensitivity distributions (SSDs) are used globally to generate water quality guidelines (WQGs). In Canada, a suite of models has been endorsed for describing SSDs. However, these models may not be suitable for substances with multiple modes of toxic action, such as pesticides. Pesticides can produce multimodal SSDs in which sensitive target organisms comprise one mode of the SSD and non-target organisms comprise the remaining mode(s). Guidelines from this type of SSD might be estimated using only the most sensitive taxa or using a multimodal distribution. The multimodal method presented here uses all data meeting data quality criteria and is thus in keeping with the concept that the data comprising an SSD are a random sample from the population of interest rather than a subset thereof. The bimodal method can simultaneously emphasize the more sensitive portion of the dataset by allowing estimation of WQGs from a statistical subset of the data. In the atrazine dataset example, this allowed a WQG emphasizing the more sensitive taxa to be estimated, whereas no parametric model fit the more sensitive data alone and the small sample size (n = 5) precluded the use of nonparametric methods.
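A minimal sketch of a bimodal SSD on invented toxicity values: a two-component Gaussian mixture is fitted to log10-transformed effect concentrations, and the HC5 (the concentration affecting 5% of species) is obtained by root-finding on the mixture CDF. The mixture model and the percentile-based guideline are illustrative, not the derivation prescribed in the paper.

# Sketch: bimodal SSD fit and HC5 from a two-component mixture (illustrative).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

toxicity_ug_per_l = np.array([             # hypothetical effect concentrations
    1.2, 2.5, 3.1, 4.8, 6.0,               # sensitive (target) taxa
    180, 250, 400, 520, 610, 700, 900, 1500, 2100, 3000,   # non-target taxa
])
x = np.log10(toxicity_ug_per_l).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, n_init=20, random_state=0).fit(x)

w = gmm.weights_
mu = gmm.means_.ravel()
sd = np.sqrt(gmm.covariances_.ravel())

def mixture_cdf(log_c):
    return float(np.sum(w * norm.cdf(log_c, loc=mu, scale=sd)))

# HC5: the concentration at which the mixture CDF reaches 0.05
hc5_log = brentq(lambda c: mixture_cdf(c) - 0.05, x.min() - 3.0, x.max() + 3.0)
print("HC5 estimate (ug/L):", round(10 ** hc5_log, 2))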

11.
A methodology is developed that determines age-specific transition rates between cell cycle phases during balanced growth by utilizing age-structured population balance equations. Age-distributed models are the simplest way to account for the varied behavior of individual cells. However, this simplicity is offset by difficulties in observing age distributions, so age-distributed models are difficult to fit to experimental data. Herein, the proposed methodology is implemented to identify an age-structured model for human leukemia (Jurkat) cells based only on measurements of the total number density after the addition of bromodeoxyuridine, which partitions the total cell population into two subpopulations. Each subpopulation temporarily undergoes a period of unbalanced growth, which provides sufficient information to extract age-dependent transition rates, while the total cell population remains in balanced growth. The stipulation of initial balanced growth permits the derivation of age densities based only on age-dependent transition rates. In fitting the experimental data, a flexible transition rate representation utilizing a series of cubic spline nodes finds that a bimodal G0/G1 transition-age probability distribution best fits the experimental data. This resolution may be unnecessary, as convex combinations of more restricted transition rates derived from normalized Gaussian, lognormal, or skewed lognormal transition-age probability distributions corroborate the spline predictions while requiring fewer parameters. The fit obtained with a single lognormal distribution is somewhat inferior, suggesting that the bimodal result is more likely. Regardless of the choice of basis functions, this methodology can identify age distributions, age-specific transition rates, and transition-age distributions under balanced growth conditions.

12.
The exact identification of individual seed sources through genetic analysis of seed tissue of maternal origin has recently brought the full analytical potential of parentage analysis to the study of seed dispersal. No specific statistical methodology has been described so far, however, for estimation of the dispersal kernel function from categorical maternity assignment. In this study, we introduce a maximum-likelihood procedure to estimate the seed dispersal kernel from exact identification of seed sources. Using numerical simulations, we show that the proposed method, unlike other approaches, is independent of seed fecundity variation, yielding accurate estimates of the shape and range of the seed dispersal kernel under varied sampling and dispersal conditions. We also demonstrate how an obvious estimator of the dispersal kernel, the maximum-likelihood fit of the observed distribution of dispersal distances to seed traps, can be strongly biased due to the spatial arrangement of seed traps relative to source plants. Finally, we illustrate the use of the proposed method with a previously published empirical example for the animal-dispersed tree species Prunus mahaleb.

13.
Anticipatory force planning during grasping is based on visual cues about the object’s physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object’s center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object’s center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object’s CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.

14.
15.
The kernel density estimator is commonly used for estimating animal utilization distributions from location data. This technique requires estimation of a bandwidth, for which ecologists often use least-squares cross-validation (LSCV). However, LSCV has large variance and a tendency to under-smooth data, and it fails to generate a bandwidth estimate in some situations. We compared the performance of 2 new bandwidth estimators (root-n) versus that of LSCV using simulated data and location data from sharp-shinned hawks (Accipiter striatus) and red wolves (Canis rufus). With simulated data containing no repeat locations, LSCV often produced a better fit between estimated and true utilization distributions than did root-n estimators on a case-by-case basis. On average, LSCV also provided lower positive relative error in home-range areas with small sample sizes of simulated data. However, root-n estimators tended to produce a better fit than LSCV on average because of extremely poor estimates generated on occasion by LSCV. Furthermore, the relative performance of LSCV decreased substantially as the number of repeat locations in the data increased. Root-n estimators also generally provided a better fit between utilization distributions generated from subsamples of hawk data and the local densities of locations from the full data sets. Least-squares cross-validation generated more unrealistically disjointed estimates of home ranges using real location data from red wolf packs. Most importantly, LSCV failed to generate home-range estimates for >20% of red wolf packs due to the presence of repeat locations. We conclude that root-n estimators are superior to LSCV for larger data sets with repeat locations or other extreme clumping of data. In contrast, LSCV may be superior where the primary interest is in generating animal home ranges (rather than the utilization distribution) and data sets are small with limited clumping of locations.
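A one-dimensional sketch of the LSCV criterion itself (real home-range estimation is two-dimensional, and the root-n estimators are stood in for here by a simple reference bandwidth): because the leave-one-out term blows up for tied points as the bandwidth shrinks, repeated locations push the LSCV minimum toward very small bandwidths, which is the under-smoothing problem described above.

# Sketch: LSCV vs. a reference bandwidth for a 1D Gaussian KDE (illustrative).
import numpy as np

def lscv_score(h, x):
    """LSCV criterion for a Gaussian kernel: integral of fhat^2 minus
    (2/n) * sum of leave-one-out density estimates at the data points."""
    n = x.size
    d = x[:, None] - x[None, :]
    phi = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    # integral of fhat^2 in closed form for Gaussian kernels
    int_f2 = phi(d / (h * np.sqrt(2.0))).sum() / (n ** 2 * h * np.sqrt(2.0))
    loo = phi(d / h)
    np.fill_diagonal(loo, 0.0)
    loo_term = loo.sum(axis=1) / ((n - 1) * h)
    return int_f2 - 2.0 * loo_term.mean()

rng = np.random.default_rng(4)
# clumped "locations": two activity centres plus some repeated observations
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(6.0, 0.5, 50)])
x = np.concatenate([x, np.repeat(x[:10], 3)])

h_grid = np.linspace(0.05, 2.0, 100)
h_lscv = h_grid[np.argmin([lscv_score(h, x) for h in h_grid])]
h_reference = 1.06 * x.std(ddof=1) * x.size ** (-1 / 5)
print("LSCV bandwidth:     ", round(h_lscv, 3))
print("reference bandwidth:", round(h_reference, 3))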

16.
Estimation of Citywide Air Pollution in Beijing (total citations: 1; self-citations: 0; citations by others: 1)
There have been discrepancies between the daily air quality reports of the Beijing municipal government, observations recorded at the U.S. Embassy in Beijing, and Beijing residents’ perceptions of air quality. This study estimates Beijing’s daily area PM2.5 mass concentration by means of a novel technique, SPA (Single Point Areal Estimation), that uses data from the single PM2.5 observation station of the U.S. Embassy and the 18 PM10 observation stations of the Beijing Municipal Environmental Protection Bureau. The proposed technique accounts for empirical relationships between the different types of observations and generates best linear unbiased pollution estimates (in a statistical sense). The technique extends the daily PM2.5 mass concentrations obtained at a single station (U.S. Embassy) to a citywide scale using physical relations between pollutant concentrations at the embassy PM2.5 monitoring station and at the 18 official PM10 stations that are evenly distributed across the city. Insight into the technique’s spatial estimation accuracy (uncertainty) is gained by means of theoretical considerations and numerical validations involving real data. The technique was used to study citywide PM2.5 pollution during the 423-day period of interest (May 10, 2010 to December 6, 2011). Finally, a freely downloadable software library is provided that performs all relevant calculations of pollution estimation.
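The SPA/BLUE machinery itself is not reproduced here; the sketch below only illustrates the simpler underlying idea of learning an empirical PM2.5-PM10 relationship at the collocated embassy site and carrying it over to the official PM10 stations. All values, and the final averaging step, are invented for illustration.

# Sketch: crude regression-based citywide PM2.5 estimate (illustrative only;
# not the SPA estimator, which is a best linear unbiased spatial estimator).
import numpy as np

rng = np.random.default_rng(5)
days = 200
pm10_near_embassy = rng.gamma(shape=4.0, scale=30.0, size=days)       # ug/m3
pm25_embassy = 0.6 * pm10_near_embassy + rng.normal(0.0, 10.0, days)  # ug/m3

# 1. fit the empirical PM2.5 ~ PM10 relationship at the collocated station
slope, intercept = np.polyfit(pm10_near_embassy, pm25_embassy, deg=1)

# 2. apply it to the PM10 readings of the 18 official stations and average
#    them to obtain a citywide daily PM2.5 estimate
pm10_stations = rng.gamma(shape=4.0, scale=30.0, size=(days, 18))
citywide_pm25 = (slope * pm10_stations + intercept).mean(axis=1)
print("first 5 citywide daily estimates (ug/m3):", citywide_pm25[:5].round(1))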

17.
Tracking moving objects, including one’s own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, “probabilistic population codes.” We show that a recurrent neural network (a modified form of an exponential family harmonium, EFH) that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
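A minimal Kalman filter for a one-dimensional constant-velocity tracking problem, i.e. the classical estimator that the network described above learns to approximate; all noise parameters are illustrative.

# Sketch: Kalman filter for tracking position and velocity from noisy
# position observations (illustrative parameters).
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (position, velocity)
C = np.array([[1.0, 0.0]])                # only position is observed
Q = 1e-3 * np.eye(2)                      # process noise covariance
R = np.array([[0.05]])                    # observation noise covariance

def kalman_filter(observations, x0, P0):
    x, P = x0.copy(), P0.copy()
    estimates = []
    for z in observations:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - C @ x)
        P = (np.eye(2) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# simulate a moving object and noisy position observations
rng = np.random.default_rng(6)
true_x = np.zeros(2)
observations = []
for _ in range(100):
    true_x = A @ true_x + rng.multivariate_normal(np.zeros(2), Q)
    observations.append((C @ true_x)[0] + rng.normal(0.0, np.sqrt(R[0, 0])))
est = kalman_filter(observations, x0=np.zeros(2), P0=np.eye(2))
print("final position/velocity estimate:", est[-1].round(3))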

18.
Capture-recapture studies are frequently used to monitor the status and trends of wildlife populations. Detection histories from individual animals are used to estimate probability of detection and abundance or density. The accuracy of abundance and density estimates depends on the ability to model factors affecting detection probability. Non-spatial capture-recapture models have recently evolved into spatial capture-recapture models that directly include the effect of distances between an animal’s home range centre and trap locations on detection probability. Most studies comparing non-spatial and spatial capture-recapture biases focussed on single year models and no studies have compared the accuracy of demographic parameter estimates from open population models. We applied open population non-spatial and spatial capture-recapture models to three years of grizzly bear DNA-based data from Banff National Park and simulated data sets. The two models produced similar estimates of grizzly bear apparent survival, per capita recruitment, and population growth rates but the spatial capture-recapture models had better fit. Simulations showed that spatial capture-recapture models produced more accurate parameter estimates with better credible interval coverage than non-spatial capture-recapture models. Non-spatial capture-recapture models produced negatively biased estimates of apparent survival and positively biased estimates of per capita recruitment. The spatial capture-recapture grizzly bear population growth rates and 95% highest posterior density averaged across the three years were 0.925 (0.786–1.071) for females, 0.844 (0.703–0.975) for males, and 0.882 (0.779–0.981) for females and males combined. The non-spatial capture-recapture population growth rates were 0.894 (0.758–1.024) for females, 0.825 (0.700–0.948) for males, and 0.863 (0.771–0.957) for both sexes. The combination of low densities, low reproductive rates, and predominantly negative population growth rates suggest that Banff National Park’s population of grizzly bears requires continued conservation-oriented management actions.

19.
Purpose: Quantitative metrics in lung computed tomography (CT) images have been widely used, often without a clear connection with physiology. This work proposes a patient-independent model for the estimation of the well-aerated volume of the lungs in CT images (WAVE). Methods: A Gaussian fit, with mean (Mu.f) and width (Sigma.f) values, was applied to the lower CT histogram data points of the lung to provide an estimate of the well-aerated lung volume (WAVE.f). Independence from CT reconstruction parameters and from the respiratory cycle was analysed using healthy lung CT images and 4DCT acquisitions. The Gaussian metrics and first-order radiomic features calculated for a third cohort of COVID-19 patients were compared with those of healthy lungs. Each lung was further segmented into 24 subregions, and a new biomarker derived from the Gaussian fit parameter Mu.f was proposed to represent local density changes. Results: WAVE.f was independent of respiratory motion in 80% of the cases. Differences of 1%, 2% and up to 14% were found when comparing a moderate iterative reconstruction strength with the FBP algorithm, 1 mm with 3 mm slice thickness, and different reconstruction kernels, respectively. Healthy subjects differed significantly from COVID-19 patients for all the metrics calculated. A graphical representation of the local biomarker provides spatial and quantitative information in a single 2D picture. Conclusions: Unlike other metrics based on fixed histogram thresholds, this model is able to account for inter- and intra-subject variability. In addition, it defines a local biomarker that quantifies the severity of the disease independently of the observer.
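A minimal sketch of the histogram-fitting step on simulated voxel data: a Gaussian is fitted to the lower part of the lung HU histogram and its area is converted into a well-aerated volume. The HU fitting range, bin width and simulated densities are assumptions, not the paper's settings.

# Sketch: Gaussian fit to the lower lung HU histogram (illustrative values).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# simulate lung voxels: a well-aerated mode plus a denser (diseased) component
rng = np.random.default_rng(7)
voxels_hu = np.concatenate([rng.normal(-820, 50, 80_000),
                            rng.normal(-300, 120, 20_000)])
voxel_volume_ml = 0.001                   # ~1 mm^3 voxels, assumed
bin_width = 10                            # HU

counts, edges = np.histogram(voxels_hu, bins=np.arange(-1000, 100, bin_width))
centers = 0.5 * (edges[:-1] + edges[1:])

# fit only the lower histogram points, where the aerated mode lives (assumed range)
low = (centers >= -950) & (centers <= -700)
(amp, mu_f, sigma_f), _ = curve_fit(gaussian, centers[low], counts[low],
                                    p0=[counts[low].max(), -850.0, 60.0])

# area under the fitted Gaussian -> number of well-aerated voxels -> volume
wave_ml = amp * abs(sigma_f) * np.sqrt(2 * np.pi) / bin_width * voxel_volume_ml
print("Mu.f:", round(mu_f, 1), "HU   Sigma.f:", round(abs(sigma_f), 1), "HU")
print("estimated well-aerated volume:", round(wave_ml, 1), "ml")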

20.
Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time MACD-based state estimation algorithm dedicated to identifying the pilot’s instantaneous mental state (not-on-task vs. on-task); it does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier that is able to discriminate task difficulty (low working memory load vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and asked to recall air traffic control instructions. We found that the estimated pilot mental state matched the pilot’s real state significantly better than chance (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single-trial working memory load, led to 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain-computer interface development.
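A minimal sketch of the two estimator types on invented data: a MACD indicator (the difference between a fast and a slow exponential moving average) thresholded to flag on-task epochs, and an SVM evaluated by cross-validation to separate low from high working memory load. Window lengths, the threshold and the trial features are illustrative assumptions.

# Sketch: MACD-based state flag and SVM load classifier (illustrative data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ema(x, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

rng = np.random.default_rng(8)

# (1) MACD on a simulated HbO2 trace: not-on-task segment, then on-task segment
signal = np.concatenate([rng.normal(0.0, 0.1, 300), rng.normal(0.5, 0.1, 300)])
macd = ema(signal, span=12) - ema(signal, span=26)
on_task = macd > 0.0                       # simple fixed threshold
print("fraction of samples flagged on-task:", round(on_task.mean(), 2))

# (2) SVM classification of working memory load from per-trial features
features = np.vstack([rng.normal(0.2, 0.1, (40, 4)),    # low-load trials
                      rng.normal(0.6, 0.1, (40, 4))])   # high-load trials
labels = np.array([0] * 40 + [1] * 40)
accuracy = cross_val_score(SVC(kernel="rbf", C=1.0), features, labels, cv=5)
print("cross-validated accuracy:", round(accuracy.mean(), 2))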
