Similar Literature
20 similar documents retrieved (search time: 32 ms)
1.
A quantitative model of optimal coordination between hand transport and grip aperture was derived in our previous studies of reach-to-grasp movements, without utilizing explicit knowledge of the optimality criterion or motor plant dynamics, and its utility for experimental data analysis has been demonstrated. Here we show how to generalize this model to a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of the precision demanded for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two distinct phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of a lower precision demand, which results in violation of the speed-accuracy tradeoff and in significant inter-trial variability of motor coordination. During the final phase, neural computations, and thus motor coordination, are considerably more precise, reducing the cost of errors in making contact with the target object. The generality of the optimal coordination model and of the two-phase control strategy is illustrated with several diverse examples.

2.
The goal of LCA is to identify the environmental impacts resulting from a product, process, or activity. While LCA is useful for evaluating environmental attributes, it stops short of providing information that business managers routinely utilize for decision-making — i.e., dollars. Thus, decisions regarding the processes used for manufacturing products and the materials comprising those products can be enhanced by weaving cost and environmental information into the decision-making process. Various approaches have been used during the past decade to supplement environmental information with cost information. One of these tools is environmental accounting, the identification, analysis, reporting, and use of environmental information, including environmental cost data. Environmental cost accounting provides information necessary for identifying the true costs of products and processes and for evaluating opportunities to minimize those costs. As demonstrated through two case studies, many companies are incorporating environmental cost information into their accounting systems to prioritize investments in new technologies and products.

3.
Range maps of thousands of species, compiled and made freely available by the International Union for Conservation of Nature, are increasingly applied to support spatial conservation planning. However, their coarse nature makes them prone to commission and omission errors, and they lack information on the variation in abundance within species' distributions, calling into question their value for informing decisions at the fine scales at which conservation often takes place. Here, we tested whether species ranges can reliably be used to estimate the responsibility of sites for the global conservation of species. We defined 'specific responsibility' as the fraction of a species' population within a given site, considering it useful for prioritising species within sites; and defined 'overall responsibility' as the sum of specific responsibility across species within a site, assuming it informative of priorities among sites. Taking advantage of an exceptionally detailed dataset on the distribution and abundance of bird species at a near-continental scale - a level of information rarely available to local decision-makers - we created a benchmark against which we tested estimates of responsibility derived from range maps. We investigated approaches for improving these estimates by complementing range maps with plausibly available local data. We found that despite their coarse nature, range maps provided good estimates of sites' overall responsibility, but relatively poor estimates of specific responsibility. Estimates were improved by combining range maps with local species lists or local abundance data, easily obtained through local surveys on the sites of interest, or with simulated expert knowledge. Our results suggest that combining range maps with local data is a promising route for improving the contribution of local conservation decisions to reducing global biodiversity loss. This is all the more urgent in hyper-diverse, poorly-known regions where conservation-relevant decisions must proceed despite a paucity of biodiversity data.
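As a minimal numerical illustration of the two metrics as defined above (the abundance values and the NumPy-based sketch are hypothetical, not the paper's data or code):

```python
import numpy as np

# Hypothetical abundance matrix: rows are sites, columns are species.
abundance = np.array([
    [30.0, 0.0, 5.0],   # site A
    [10.0, 8.0, 5.0],   # site B
    [ 0.0, 2.0, 0.0],   # site C
])

# Specific responsibility: the fraction of each species' global
# population that falls within each site.
specific = abundance / abundance.sum(axis=0, keepdims=True)

# Overall responsibility: the sum of specific responsibilities
# across species within each site.
overall = specific.sum(axis=1)

print(specific)  # e.g. site A holds 30/40 = 0.75 of species 1
print(overall)   # ranks sites by their summed global contribution
```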

4.
5.
Context plays an important role in a discriminator's ability to make appropriate recognition decisions, such as accepting what is acceptable and rejecting what is not. It was previously shown that, in both honey bees and stingless bees, discriminating workers (guards) make more errors towards conspecific non-nestmates when the guards are removed from the natural hive entrance. However, guards may, in addition to making incorrect recognition decisions, also adopt non-guarding behaviours. Here, we tested honey bee guards in two contexts (natural versus unnatural) against five types of introduced arthropods (conspecific nestmates and non-nestmates; allospecific wasps, beetles and woodlice), the last four of which should be rejected without error. We scored a guard's response as accept, reject, avoid or ignore. Total errors increased significantly from the natural to the unnatural context. Specifically, guards were significantly more likely to make an acceptance error, guarding and accepting both conspecific and allospecific non-nestmates, in the unnatural context. Importantly, guards were also significantly more likely to adopt a non-guarding behaviour in the unnatural context, usually ignoring the introduced arthropod or avoiding it (making contact but then immediately retreating). Overall, these data demonstrate that context is important: removing a guard from the home that it protects elicits either incorrect discrimination or a complete absence of discrimination behaviour.

6.
In a companion paper [1], we have presented a generic approach for inferring how subjects make optimal decisions under uncertainty. From a Bayesian decision theoretic perspective, uncertain representations correspond to "posterior" beliefs, which result from integrating (sensory) information with subjective "prior" beliefs. Preferences and goals are encoded through a "loss" (or "utility") function, which measures the cost incurred by making any admissible decision for any given (hidden or unknown) state of the world. By assuming that subjects make optimal decisions on the basis of updated (posterior) beliefs and utility (loss) functions, one can evaluate the likelihood of observed behaviour. In this paper, we describe a concrete implementation of this meta-Bayesian approach (i.e. a Bayesian treatment of Bayesian decision theoretic predictions) and demonstrate its utility by applying it to both simulated and empirical reaction time data from an associative learning task. Here, inter-trial variability in reaction times is modelled as reflecting the dynamics of the subjects' internal recognition process, i.e. the updating of representations (posterior densities) of hidden states over trials while subjects learn probabilistic audio-visual associations. We use this paradigm to demonstrate that our meta-Bayesian framework allows for (i) probabilistic inference on the dynamics of the subject's representation of environmental states, and for (ii) model selection to disambiguate between alternative preferences (loss functions) human subjects could employ when dealing with trade-offs, such as between speed and accuracy. Finally, we illustrate how our approach can be used to quantify subjective beliefs and preferences that underlie inter-individual differences in behaviour.
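A minimal sketch of the core modelling idea, not the authors' implementation: a simulated subject updates a Beta posterior over an audio-visual association probability trial by trial, and reaction times are generated as an assumed linear function of Bayesian surprise; the parameters of such a response model could then be fit to observed RTs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simulated subject learns a cue-outcome association probability.
p_true = 0.8
outcomes = (rng.random(100) < p_true).astype(int)

a, b = 1.0, 1.0   # flat Beta prior over the association probability
rts = []
for y in outcomes:
    # Posterior-predictive probability of this outcome, and its surprise.
    p_hat = a / (a + b)
    surprise = -np.log(p_hat if y else 1.0 - p_hat)
    # Assumed linear RT model: baseline + slope * surprise + noise.
    rts.append(0.3 + 0.1 * surprise + rng.normal(0.0, 0.02))
    # Bayesian belief update.
    a, b = a + y, b + 1 - y

print(f"mean RT early: {np.mean(rts[:20]):.3f} s, late: {np.mean(rts[-20:]):.3f} s")
```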

7.
Watanabe K, Hidaka A, Otsu N, Kurita T. PLoS ONE 2012, 7(3): e32352
In time-resolved spectroscopy, composite signal sequences representing energy transfer in fluorescent materials are measured, and the physical characteristics of the materials are analyzed. Each signal sequence is represented as a sum of non-negative signal components, which are expressed by model functions. To analyze the physical characteristics of a measured signal sequence, the parameters of the model functions are estimated. Furthermore, in order to quantitatively analyze real measurement data and to reduce the risk of improper decisions, it is necessary to obtain the statistical characteristics from several sequences rather than just a single sequence. In the present paper, we propose an automatic method for analyzing composite signals using non-negative factorization and an information criterion. The proposed method decomposes the composite signal sequences using non-negative factorization constrained to parametric basis functions. The number of components (i.e., the rank) is estimated using Akaike's information criterion. Experiments using simulated and real data reveal that the proposed method automatically estimates acceptable ranks and parameters.
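A minimal sketch of this style of analysis under stated assumptions (exponential-decay model functions and a coarse candidate-rate grid standing in for continuous parameter estimation); it is not the authors' implementation:

```python
import itertools
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
n = t.size

# Simulated composite signal: two exponential decay components plus noise.
y = 3.0 * np.exp(-1.5 * t) + 1.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.05, n)

# Candidate decay rates; a coarse grid stands in for optimising them.
rates = [0.1, 0.3, 0.8, 1.5, 4.0]

results = []
for k in range(1, 4):
    # Best non-negative least-squares fit over all k-subsets of the bases.
    best_rss, best_sub = np.inf, None
    for sub in itertools.combinations(rates, k):
        A = np.exp(-np.outer(t, sub))     # one decay curve per column
        _, resid = nnls(A, y)             # non-negative amplitudes
        if resid**2 < best_rss:
            best_rss, best_sub = resid**2, sub
    # AIC for Gaussian residuals; each component has amplitude + rate.
    aic = n * np.log(best_rss / n) + 2 * (2 * k)
    results.append((aic, k, best_sub))
    print(f"rank {k}: AIC = {aic:.1f}, rates = {best_sub}")

print("selected rank:", min(results)[1])   # the true rank (2) should win
```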

8.
Backache is common, yet its routine medical assessment is imprecise, unreliable, and poorly interpreted. Reproducibility studies on 475 patients improved the reliability of the clinical interview and examination in backache, while studies of 335 normal subjects defined the limits of normality. Assessment of nerve function was found to be reliable, but assessment of the back itself had to be considerably modified, and examination was improved by incorporating actual measurements. The validity and clinical utility of the information were analysed to determine the minimum amount of information that should be collected to permit clear diagnosis and management.

9.
Recent decisions about actions and goals can affect future choices. Several studies have shown an effect of previous trial history on neural activity in a subsequent trial. Often, but not always, these effects originate from task requirements that make it necessary to maintain access to previous trial information in order to make future decisions. Maintaining information about recent decisions and their outcomes can play an important role both in adapting to new contingencies and in learning. Previous goal decisions must be distinguished from goals that are currently being planned, to avoid perseveration or more general errors. Output monitoring is probably based on this separation of accomplished past goals from pending future goals that are being pursued. Behaviourally, it has been shown that trial-history context can influence the location, error rate and latency of successive responses. We review the neurophysiological studies in the literature, including data from our laboratory, that support a role for the frontal lobe in tracking previous goal selections and outputs when new goals need to be accomplished.

10.
A simulation study was performed to investigate the effects of missing values, typing errors and distorted segregation ratios in molecular marker data on the construction of genetic linkage maps, and to compare the performance of three locus-ordering criteria (weighted least squares, maximum likelihood, and minimum sum of adjacent recombination fractions) in the presence of such effects. The study was based upon three linkage groups of 10 loci at 2, 6 and 10 cM spacings, simulated from a doubled-haploid population of size 150. Criterion performance was assessed using the number of replicates with correctly estimated orders, the mean rank correlation between the estimated and the true order, and the mean total map length. Bootstrap samples from replicates in the maximum likelihood analysis provided a measure of confidence in the estimated locus order. Missing values and/or typing errors in the data reduce the proportion of correctly ordered maps, and this problem worsens as the distance between loci decreases. The maximum likelihood criterion is the most successful at ordering loci correctly, but gives estimated map lengths that are substantially inflated when typing errors are present. Missing values in the data produce shorter map lengths for more widely spaced markers, especially under the weighted least-squares criterion. Overall, segregation distortion has little effect in this population.
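A minimal sketch of why typing errors inflate estimated map lengths, using a simulated doubled-haploid population as in the study (the simulation code and its parameters are illustrative assumptions, not the paper's software):

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_estimated_r(n=150, r_true=0.02, error_rate=0.0, reps=500):
    """Mean estimated recombination fraction between two linked loci in a
    doubled-haploid population when genotypes carry random typing errors."""
    estimates = []
    for _ in range(reps):
        locus_a = rng.integers(0, 2, n)                # allele at locus A
        recomb = rng.random(n) < r_true                # recombination events
        locus_b = np.where(recomb, 1 - locus_a, locus_a)
        # Typing errors independently flip the recorded genotype at each locus.
        locus_a = locus_a ^ (rng.random(n) < error_rate)
        locus_b = locus_b ^ (rng.random(n) < error_rate)
        estimates.append(np.mean(locus_a != locus_b))
    return np.mean(estimates)

print(f"no typing errors: r = {mean_estimated_r():.3f}")                 # ~0.020 (2 cM)
print(f"1% typing errors: r = {mean_estimated_r(error_rate=0.01):.3f}")  # roughly doubled
```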

11.
Null hypothesis significance testing has been under attack in recent years, partly owing to the arbitrary nature of setting α (the decision-making threshold and probability of Type I error) at a constant value, usually 0.05. If the goal of null hypothesis testing is to present conclusions in which we have the highest possible confidence, then the only logical decision-making threshold is the value that minimizes the probability (or occasionally, cost) of making errors. Setting α to minimize the combination of Type I and Type II error at a critical effect size can easily be accomplished for traditional statistical tests by calculating the α associated with the minimum average of α and β at the critical effect size. This technique also has the flexibility to incorporate prior probabilities of null and alternate hypotheses and/or relative costs of Type I and Type II errors, if known. Using an optimal α results in stronger scientific inferences because it estimates and minimizes both Type I errors and relevant Type II errors for a test. It also results in greater transparency concerning assumptions about relevant effect size(s) and the relative costs of Type I and II errors. By contrast, the use of α = 0.05 results in arbitrary decisions about what effect sizes will likely be considered significant, if real, and results in arbitrary amounts of Type II error for meaningful potential effect sizes. We cannot identify a rationale for continuing to arbitrarily use α = 0.05 for null hypothesis significance tests in any field, when it is possible to determine an optimal α.
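A minimal sketch of the calculation described, for a one-sided one-sample z-test (the function name, defaults, and weighting scheme are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def optimal_alpha(effect=0.5, n=30, prior_null=0.5, cost_ratio=1.0):
    """Alpha minimizing the weighted combination of Type I and Type II
    error for a one-sided one-sample z-test at the critical effect size.
    prior_null and cost_ratio optionally encode prior probabilities and
    the relative cost of a Type I versus a Type II error."""
    def weighted_error(alpha):
        beta = norm.cdf(norm.ppf(1.0 - alpha) - effect * np.sqrt(n))
        return prior_null * cost_ratio * alpha + (1.0 - prior_null) * beta
    return minimize_scalar(weighted_error, bounds=(1e-6, 0.5),
                           method="bounded").x

print(f"optimal alpha, n=30 : {optimal_alpha():.4f}")       # not 0.05
print(f"optimal alpha, n=100: {optimal_alpha(n=100):.4f}")  # shrinks as power grows
```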

12.
Some problems of optimal screening are considered. A screening strategy is allowed to be nonperiodic. Two approaches to screening optimization are used: the minimum delay time approach and the minimum cost approach. Both approaches are applied to the analysis of an optimization problem when the natural history of the disease is known and when it is unknown (a minimax problem). The structure of optimal screening policies is investigated, as is the benefit they provide compared to a periodic screening policy. The detection probability is assumed to depend only on the stage of the disease, though it may not be constant throughout each stage. Periodic screening is shown to be optimal when no information on the natural history of the disease is available and the minimum delay time criterion is used for optimization. Some applications to lung cancer screening are presented.
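A minimal Monte Carlo sketch of the delay-time quantity being optimized, under illustrative assumptions (exponentially distributed preclinical sojourn time, a constant per-exam detection probability, and a purely periodic policy as the baseline):

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_delay(interval=2.0, sensitivity=0.8, sojourn_mean=4.0, n=20_000):
    """Mean delay from the start of the detectable preclinical phase to
    detection under periodic screening; cases whose sojourn ends before
    any successful exam surface clinically (delay = full sojourn)."""
    onset = rng.uniform(0.0, interval, n)   # onset within a screening cycle
    sojourn = rng.exponential(sojourn_mean, n)
    delay = np.empty(n)
    for i in range(n):
        t = interval - onset[i]             # time until the next exam
        while True:
            if t >= sojourn[i]:             # clinical surfacing came first
                delay[i] = sojourn[i]
                break
            if rng.random() < sensitivity:  # detected at this exam
                delay[i] = t
                break
            t += interval                   # missed; wait for the next exam
    return delay.mean()

print(f"2-year interval: mean delay {mean_delay():.2f} years")
print(f"1-year interval: mean delay {mean_delay(interval=1.0):.2f} years")
```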

13.
Partition-free congruence analysis: implications for sensitivity analysis
A criterion is proposed for comparing systematic hypotheses based on multiple sources of information under a diverse set of interpretive assumptions (i.e., the sensitivity analysis of Wheeler, 1995). This metric, the Meta-Retention Index (MRI), is the retention index (RI) of Farris calculated over the set of conventional homologous qualitative characters (ordered, unordered, Sankoff, etc.) and molecular fragment characters sensu Wheeler (1996, 1999). The superiority of this measure over other similar measures (e.g., the incongruence length difference test) comes from its independence from partition information. The only values that participate in its calculation are the minimum, maximum and observed cost (= cladogram cost) of each character; the partition (morphology, gene locus) from which the variant may have come is irrelevant. In the special case where there is only a single data partition, this measure is equivalent to the conventional RI; and in the case where there is a single fragment character per partition (contiguous molecular loci as data sets), the measure is identical to the complement of the Rescaled Incongruence Length Difference (RILD) of Wheeler and Hayashi (1998). The MRI can serve as an optimality criterion for deciding among systematic hypotheses based on the same data but different sets of analysis assumptions (e.g., character weights, indel costs). The MRI may lose discriminatory power in situations where a minority of highly congruent characters is given high weight; this situation can be detected and seems unlikely to occur frequently in real data sets.
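Since only the minimum, maximum and observed cost of each character enter the calculation, a minimal sketch of an ensemble retention index computed over a pooled character set might look like this (the example character costs are hypothetical):

```python
def meta_retention_index(characters):
    """Ensemble retention index over a pooled set of characters, each
    given as (min_cost, max_cost, observed_cost); which partition a
    character came from plays no role."""
    g = sum(cmax for _, cmax, _ in characters)  # total maximum cost
    m = sum(cmin for cmin, _, _ in characters)  # total minimum cost
    s = sum(cobs for _, _, cobs in characters)  # total cladogram cost
    return (g - s) / (g - m)

# Hypothetical characters pooled across morphological and molecular partitions.
chars = [(1, 4, 1), (1, 3, 2), (2, 7, 3)]
print(f"MRI = {meta_retention_index(chars):.3f}")  # 1.0 = best fit, 0.0 = worst
```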

14.
Acquiring and storing information in memory is constrained by the limited capacity of attentional and short-term memory systems. Therefore, processes that prioritize information for storage in memory according to its survival value to the organism are likely to have evolved. Information that incurs energy or time costs to acquire, or that is costly to forget, should be stored and retrieved more effectively than information that is less costly to acquire or forget. We manipulated the costs of acquiring and forgetting information in an eight-arm radial maze memory task for pigs, Sus scrofa, by placing ropes at the entrances to arms, which pigs had to walk over, incurring an extra 2-3 s time cost for each arm entry. Each day each pig (N=16) made a sampling visit to the maze to find food in four arms, followed by a 10-min retention interval and a recall visit in which visits to the four previously empty arms were reinforced. Baited arms were varied daily. Pigs were divided into four groups and exposed to ropes (costs) during sampling only, recall only, both or neither. Exposure to costs during sampling visits decreased errors during recall visits, probably as a result of enhanced attention to, and encoding of, information during sampling. When all groups were performing equally well, retroactive interference treatments revealed hidden differences between them: groups experiencing costs during recall tests were least susceptible to interference effects, probably because of more considered use of retrieved information. Both memory encoding and retrieval processes may thus be modulated by even small costs of obtaining or forgetting information.

15.
Translational research using evidence-based and comparative effectiveness research continues to evolve, becoming a useful tool in improving informed consent and decision-making in the clinical setting. While in development, emerging technologies, including cellular and molecular biology, are leading to establishing evidence-based dental practices. One emerging technology, which conjoins bench proteomic findings to clinical decision-making for treatment intervention, is the Translational Evidence Mechanism. This mechanism was developed to be a foundation for a compact between researcher, translational researcher, clinician, and patient. The output of such a mechanism is the clinical practice guideline (CPG), an interactive tool for dentists and patients to game evidence in reaching optimum clinical decisions that correspond to individual patient preferences and values. As such, the clinical practice guideline requires the vesting of decision, utility, and cost best evidence. Evidence-based research provides decision data, a first attempt at supporting decision-making by providing best outcome data. Since then comparative effectiveness research has emerged, using systematic review analysis to compare similar treatments or procedures in maximizing the choice of the most effective cost/benefit option within the context of best evidence. With innovation in the clinical practice guideline for optimizing efficacy and comparative effectiveness research, evidence-based practices will shape a new approach to health-based systems that adhere to shared decision-making between bench scientists, healthcare providers and patients.


17.
Eleven industrial carbon source nutrients were evaluated for their efficiency in supplying energy for biological denitrification of high-nitrate (1259 mg/liter) waters in single-stage continuous-flow fermenters. The defined criterion for comparison was the minimum carbon-to-nitrogen ratio necessary to achieve at least 95% nitrate reduction and 90% total organic carbon (TOC) removal. Methanol was the most efficient carbon source of those evaluated; some of the carbon sources studied failed to achieve a 90% reduction in TOC. The relative efficiency rankings of the various carbon sources may change once consideration is given to cost, transportation, handling and availability.

18.
Fusion of medical images is a technique that permits the correlation of homologous anatomical structures across different imaging modalities on the basis of a spatial transformation of the data sets. CT and MRI of the spine provide complementary information of possible relevance for diagnostic and therapeutic decisions. Methods enabling multisegmental CT-MRI fusion of the spine were developed. These solve the problem of altered spatial relationships among the individual anatomical structures caused by differing patient positioning in successive data acquisitions. Routine clinical CT and MRI data of a thoracic section of the spine were obtained and transferred to a PC workstation. Following segmentation of the CT data, landmarks for each individual vertebra were defined in the CT and MRI data. For each individual vertebra, the algorithm we developed then carried out a rigid registration of the CT information to the MR data. The fused data sets were presented as colour-coded images or via dynamic variation of transparency. To assess registration precision, fiducial registration errors (FRE) and target registration errors (TRE) were calculated. The algorithm permitted multisegmental image fusion of the spine. The average time required for defining the landmarks was 22 seconds per landmark for CT and 34 seconds per landmark for MR. The average FRE was 1.53 mm, and the TRE for the vertebrae was less than 2 mm. The colour-coded images were particularly suitable for assessing the contours of the anatomical structures, whereas dynamic variation of the transparency of overlapping CT images enabled a better overall assessment of the spatial relationships of the anatomical structures. The algorithm permits precise multisegmental fusion of CT and MR of the spine, which was not possible using existing fusion algorithms owing to variations in the spatial orientation of the anatomical structures caused by different positioning of the axial skeleton in successive examinations.
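A minimal sketch of per-vertebra rigid landmark registration and the FRE computation described above, using the standard Kabsch (SVD) solution; the landmark coordinates, noise level and transform are hypothetical, not the paper's data:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Kabsch/SVD) of paired landmarks;
    returns rotation R and translation t with dst ~ src @ R.T + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

rng = np.random.default_rng(3)
ct = rng.uniform(0.0, 50.0, (6, 3))               # per-vertebra CT landmarks (mm)
angle = np.deg2rad(10.0)                          # hypothetical pose difference
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
mr = ct @ R_true.T + [5.0, -3.0, 12.0] + rng.normal(0.0, 1.0, ct.shape)

R, t = rigid_register(ct, mr)
# Fiducial registration error: RMS distance between mapped and measured landmarks.
fre = np.sqrt(np.mean(np.sum((ct @ R.T + t - mr) ** 2, axis=1)))
print(f"FRE = {fre:.2f} mm")
```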

19.
Monitoring programs of ion concentrations and fluxes in semi-natural ecosystems face the task of gaining as much information as possible while minimizing costs and effort. The aim of this study was (i) to assess how much of the heterogeneity of solution concentrations is lost through temporal integration of measurements and (ii) to estimate the error in ion fluxes due to temporal integration. High-resolution measurements (daily interval) of ion concentrations (sulfate, nitrate, chloride, pH and EC) in throughfall, soil solutions and runoff at the Lehstenbach catchment (Fichtelgebirge, Northeast Bavaria, Germany) were compared over a two-year period with the reference monitoring program (biweekly measurement interval). Evaluation of the maximum temporal heterogeneity of ion concentrations in throughfall, soil solution and runoff (expressed as minimum, maximum, median and 25-75% percentiles) did not show an overall higher heterogeneity for the high-resolution measurements compared to the reference program. Calculating runoff fluxes from the reference data (biweekly concentrations) resulted in significant errors of up to 25% for time periods < 1 year (the high-resolution data were taken as the "true" value and set to 100%). However, errors became minor (< 10%) when longer time periods were considered. The suitability of different interpolation methods for up-scaling biweekly concentration data in the calculation of runoff fluxes was also evaluated. We conclude for the monitoring programs at the Lehstenbach catchment that a biweekly measurement interval is suitable for capturing the heterogeneity of ion concentrations and fluxes (and thus temporal trends); high-resolution measurements at a daily interval demanded considerably more cost, labour and time while yielding relatively little additional information. While the methods introduced here are applicable to all monitoring programs, the conclusions on the temporal resolution of measurements are most likely not valid for systems where ion concentrations have a short autocorrelation length (e.g., agricultural or urban systems with nitrate or pesticide treatment, or tropical systems with extreme temperature or hydrological events).
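A minimal sketch of the kind of flux-error comparison described, with synthetic daily data standing in for the catchment measurements (all values and the seasonal model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
days = np.arange(365)

# Synthetic daily runoff (L/day) and concentration (mg/L): seasonal
# signal plus day-to-day noise; purely illustrative values.
flow = 800.0 + 300.0 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 50, days.size)
conc = 5.0 + 2.0 * np.cos(2 * np.pi * days / 365) + rng.normal(0, 0.5, days.size)

# "True" flux from daily measurements (mg), the 100% reference.
true_annual = np.sum(conc * flow)

# Biweekly monitoring: sample concentration every 14 days, interpolate
# linearly to daily values; flow is assumed to be gauged continuously.
sample_days = days[::14]
conc_biweekly = np.interp(days, sample_days, conc[sample_days])

def pct_error(mask):
    est = np.sum(conc_biweekly[mask] * flow[mask])
    ref = np.sum(conc[mask] * flow[mask])
    return 100.0 * abs(est - ref) / ref

print(f"annual flux error : {pct_error(days >= 0):.1f}%")  # small over a full year
print(f"30-day flux error : {pct_error(days < 30):.1f}%")  # typically larger for short spans
```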

20.
Cheung YK, Thall PF. Biometrics 2002, 58(1): 89-97
In many phase II clinical trials, interim monitoring is based on the probability of a binary event, response, defined in terms of one or more time-to-event variables within a time period of fixed length. Such outcome-adaptive methods may require repeated interim suspension of accrual in order to follow each patient for the time period required to evaluate response. This may increase trial duration, and eligible patients arriving during such delays either must wait for accrual to reopen or be treated outside the trial. Alternatively, monitoring may be done continuously by ignoring censored data each time the stopping rule is applied, which wastes information. We propose an adaptive Bayesian method that eliminates these problems. At each patient's accrual time, an approximate posterior for the response probability based on all of the event-time data is used to compute an early stopping criterion. Application to a leukemia trial with a composite event shows that the method can reduce trial duration substantially while maintaining the reliability of interim decisions.
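A minimal sketch of a continuous Bayesian monitoring rule in this spirit; the fractional-credit approximation for partially followed patients, the prior, and the futility threshold are illustrative assumptions, not the authors' exact construction:

```python
from scipy.stats import beta

def stop_for_futility(responses, nonresponses, followup_fractions,
                      p0=0.30, a0=0.5, b0=0.5, threshold=0.95):
    """Approximate Beta posterior for the response probability when some
    patients are only partially followed: a patient observed for a
    fraction f of the evaluation window without responding contributes
    f non-response pseudo-observations, so censored data are not simply
    discarded. Stop early if Pr(p < p0 | data) exceeds the threshold."""
    a = a0 + responses
    b = b0 + nonresponses + sum(followup_fractions)
    prob_futile = beta.cdf(p0, a, b)
    return prob_futile > threshold, prob_futile

# 2 responses, 8 completed non-responses, 3 patients partially followed.
stop, prob = stop_for_futility(2, 8, [0.5, 0.3, 0.8])
print(f"Pr(p < 0.30 | data) = {prob:.3f} -> stop accrual early: {stop}")
```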
