Similar Articles
20 similar articles found (search time: 31 ms)
1.
To explore the relationship between category and perceptual learning, we examined both category and perceptual learning in patients with treated Wilson's disease (WD), whose basal ganglia, known to be important in category learning, were damaged by the disease. We measured their learning rate and accuracy in rule-based and information-integration category learning, and the magnitude of perceptual learning across a wide range of external noise conditions, and compared the results with those of normal controls. The WD subjects exhibited deficits in both forms of category learning and in perceptual learning in high external noise. However, their perceptual learning in low external noise was relatively spared. There was no significant correlation between the two forms of category learning, nor between perceptual learning in low external noise and either form of category learning. Perceptual learning in high external noise was, however, significantly correlated with information-integration but not with rule-based category learning. The results suggest that there may be a strong link between information-integration category learning and perceptual learning in high external noise. Damage to brain structures that are important for information-integration category learning may lead to poor perceptual learning in high external noise, yet spare perceptual learning in low external noise. Perceptual learning in high and low external noise conditions may involve separate neural substrates.

2.

Background

Better understanding of acute stress responses is important for the revision of DSM-5. However, the latent structure of, and the relationships between, different aspects of acute stress responses have not been comprehensively clarified. A bifactor item response model may help resolve this problem.

Objective

The purpose of this study is to develop a statistical model of acute stress responses, based on data collected from earthquake rescuers using the Acute Stress Response Scale (ASRS). Such a model supports a more comprehensive understanding of acute stress responses and provides preliminary information for computerized adaptive testing of stress responses.

Methods

Acute stress responses of earthquake rescuers were evaluated using the ASRS, and state/trait anxiety was assessed using the State-Trait Anxiety Inventory (STAI). A hierarchical item response model (bifactor model) was used to analyze the data. Additionally, we tested this hierarchical model through model-fit comparisons with one-dimensional and five-dimensional models. The correlations among acute stress responses and state/trait anxiety were compared under both the five-dimensional and bifactor models.

Results

Model-fit comparisons showed that the bifactor model fit the data best. Item loadings on the general and specific factors varied greatly across different aspects of stress responses. Many symptoms (40%) of physiological responses had positive loadings on the general factor and negative loadings on the specific factor of physiological responses, whereas other stress responses had positive loadings on both general and specific factors. After extracting the general factor of stress responses using bifactor analysis, the significant positive correlations between physiological responses and state/trait anxiety (r = 0.185/0.112, p<0.01) changed into negative ones (r = −0.177/−0.38, p<0.01).

Conclusion

Our results demonstrated the bifactor structure of acute stress responses, and the positive and negative correlations between physiological responses and stress responses suggest that physiological responses may exert negative feedback on the severity of stress responses. This finding has not been convincingly demonstrated in previous research.
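The sign reversal reported above, a positive physiological–anxiety correlation turning negative once the general factor is extracted, can be reproduced in a toy simulation. This is an illustrative sketch with invented loadings, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
g = rng.standard_normal(n)          # general acute-stress factor
s = rng.standard_normal(n)          # specific physiological factor

# Physiological score: positive loading on g, negative on the specific factor
phys = 1.0 * g - 0.7 * s + 0.3 * rng.standard_normal(n)
# Anxiety score: positive loadings on both
anx = 1.0 * g + 0.4 * s + 0.3 * rng.standard_normal(n)

def partial_corr(x, y, z):
    # Correlate the residuals of x and y after projecting each onto z
    rx = x - z * (z @ x) / (z @ z)
    ry = y - z * (z @ y) / (z @ z)
    return np.corrcoef(rx, ry)[0, 1]

raw_r = np.corrcoef(phys, anx)[0, 1]   # positive: the general factor dominates
res_r = partial_corr(phys, anx, g)     # negative once g is partialled out
```

With the general factor shared by both scores, the raw correlation is positive, while the residual correlation inherits the opposite-signed specific loadings and turns negative, mirroring the abstract's result.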

3.
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and the resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach, and a monetary cost analysis provides a practical measure of its utility.
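As context for the network-based stochastic simulation mentioned above, a minimal direct-method Gillespie simulation of a single irreversible reaction A → B can be sketched as follows. This is a generic illustration of the algorithm, not the BioNetGen/NFsim machinery:

```python
import numpy as np

def gillespie_decay(n_a0=100, k=0.1, t_max=1000.0, seed=0):
    # Direct-method SSA for the irreversible reaction A -> B with rate k
    rng = np.random.default_rng(seed)
    t, n_a = 0.0, n_a0
    times, counts = [t], [n_a]
    while n_a > 0 and t < t_max:
        a0 = k * n_a                       # total propensity
        t += rng.exponential(1.0 / a0)     # waiting time to next firing
        n_a -= 1                           # one A molecule converts to B
        times.append(t)
        counts.append(n_a)
    return np.array(times), np.array(counts)

times, counts = gillespie_decay()
```

Each firing consumes one particle, so the cost per event is constant, while the expanded-network alternative would need the full species list up front; this is the trade-off the hybrid method above is designed to balance.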

4.
5.
Correlated binary response data with covariates are ubiquitous in longitudinal or spatial studies. Among the existing statistical models, the most well-known one for this type of data is the multivariate probit model, which uses a Gaussian link to model dependence at the latent level. However, a symmetric link may not be appropriate if the data are highly imbalanced. Here, we propose a multivariate skew-elliptical link model for correlated binary responses, which includes the multivariate probit model as a special case. Furthermore, we perform Bayesian inference for this new model and prove that the regression coefficients have a closed-form unified skew-elliptical posterior with an elliptical prior. The new methodology is illustrated by an application to COVID-19 data from three different counties of the state of California, USA. By jointly modeling extreme spikes in weekly new cases, our results show that the spatial dependence cannot be neglected. Furthermore, the results also show that the skewed latent structure of our proposed model improves the flexibility of the multivariate probit model and provides a better fit to our highly imbalanced dataset.
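The latent-Gaussian thresholding underlying the multivariate probit special case can be simulated directly. This is an illustrative sketch of the link mechanism with arbitrary parameters, not the authors' Bayesian skew-elliptical inference:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
rho = 0.7                                    # latent correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

# Latent bivariate normal with a shifted mean, then probit-style thresholding;
# the mean shift produces imbalanced binary margins
z = rng.multivariate_normal([0.8, 0.8], cov, size=n)
y = (z > 0).astype(int)

prevalence = y.mean(axis=0)                  # imbalanced marginal rates
phi = np.corrcoef(y[:, 0], y[:, 1])[0, 1]    # dependence survives thresholding
```

Positive correlation at the latent level induces positive correlation in the observed binaries; the paper's point is that replacing the symmetric Gaussian latent distribution with a skew-elliptical one fits imbalanced margins better.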

6.
The traditional approach to the development of knowledge-based systems (KBS) has been rule-based, with heuristic knowledge encoded in a set of production rules. A rule-based reasoning (RBR) system needs a well-constructed domain theory as its reasoning basis, and it does not make substantial use of the knowledge embedded in previous cases. An RBR system performs relatively well in a knowledge-rich application environment. Although its capability may be limited when previous experiences are not a good representation of the whole population, a case-based reasoning (CBR) system is capable of using past experiences as problem-solving tools; it is therefore appropriate for an experience-rich domain. In recent years, both RBR and CBR have emerged as important and complementary reasoning methodologies in artificial intelligence. For problem solving in AIDS intervention and prevention, it is useful to integrate RBR and CBR. In this paper, a hybrid KBS that integrates a deductive RBR system and an inductive CBR system is proposed to assess AIDS-risky behaviors.
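A schematic of such a hybrid, with rules tried first and case retrieval as the fallback, might look like the following. The rule conditions, feature names, and cases are invented for illustration and are not taken from the paper:

```python
# Deductive rule base: each entry is (condition, assessment)
RULES = [
    (lambda p: p["needle_sharing"] == 1, "high risk"),
    (lambda p: p["condom_use"] == 1 and p["partners"] <= 1, "low risk"),
]

# Inductive case base: (past profile, assessment made at the time)
CASES = [
    ({"needle_sharing": 0, "condom_use": 0, "partners": 4}, "high risk"),
    ({"needle_sharing": 0, "condom_use": 1, "partners": 3}, "moderate risk"),
]

def distance(a, b):
    # Simple L1 distance over shared features
    return sum(abs(a[k] - b[k]) for k in a)

def assess(profile):
    # RBR pass: fire the first matching rule
    for cond, label in RULES:
        if cond(profile):
            return label, "rule"
    # CBR fallback: reuse the assessment of the nearest past case
    best_profile, best_label = min(CASES, key=lambda c: distance(profile, c[0]))
    return best_label, "case"
```

Queries covered by the domain theory get a deductive answer; queries outside it are resolved inductively from experience, which is the complementarity the paper argues for.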

7.
Suicide is a leading cause of death worldwide. Although research has made strides in better defining suicidal behaviors, there has been less focus on accurate measurement. Currently, the widespread use of self-report, single-item questions to assess suicide ideation, plans, and attempts may contribute to measurement problems and misclassification. We examined the validity of single-item measurement and the potential for statistical errors. Over 1,500 participants completed an online survey containing single-item questions regarding a history of suicidal behaviors, followed by questions with more precise language, multiple response options, and narrative responses to examine the validity of the single-item questions. We also conducted simulations to test whether common statistical tests are robust against the degree of misclassification produced by the use of single items. We found that 11.3% of participants who endorsed a single-item suicide attempt measure engaged in behavior that would not meet the standard definition of a suicide attempt. Similarly, 8.8% of those who endorsed a single-item measure of suicide ideation endorsed thoughts that would not meet standard definitions of suicide ideation. Statistical simulations revealed that this level of misclassification substantially decreases statistical power and increases the likelihood of false conclusions from statistical tests. Providing a wider range of response options for each item reduced the misclassification rate by approximately half. Overall, the use of single-item, self-report questions to assess the presence of suicidal behaviors leads to misclassification, increasing the likelihood of statistical decision errors. Improving the measurement of suicidal behaviors is critical to increasing understanding and prevention of suicide.
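The power loss from misclassification can be illustrated with a small Monte-Carlo sketch. The effect size, sample size, and test are hypothetical choices for illustration, not the paper's simulation design; only the ~11% flip rate echoes the reported figure:

```python
import numpy as np

def power_two_sample(effect=0.3, n=100, flip=0.0, sims=1000):
    # Monte-Carlo power of a two-sample z-test when a fraction `flip` of
    # group labels is misclassified (mimicking a noisy single-item measure)
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(sims):
        y = np.concatenate([rng.standard_normal(n),            # comparison group
                            rng.standard_normal(n) + effect])  # "endorsed" group
        labels = np.repeat([0, 1], n)
        swap = rng.random(2 * n) < flip        # randomly flip some labels
        labels = np.where(swap, 1 - labels, labels)
        g0, g1 = y[labels == 0], y[labels == 1]
        se = np.sqrt(g0.var(ddof=1) / g0.size + g1.var(ddof=1) / g1.size)
        hits += abs(g1.mean() - g0.mean()) / se > 1.96
    return hits / sims

power_clean = power_two_sample(flip=0.0)
power_noisy = power_two_sample(flip=0.11)
```

Flipping labels mixes the groups, attenuating the observed mean difference and thus the rejection rate, which is the mechanism behind the power loss the abstract reports.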

8.
Classical paper-and-pencil risk assessment questionnaires are often accompanied by online versions of the questionnaire to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by direct transformation from the paper to the online version of a risk estimation calculator, ignoring the more complex and accurate calculations that online calculators make possible. We empirically compare risk estimation performance between four major diabetes risk calculators and two more advanced predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999–2012 were used to evaluate the performance of detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance in the category of classical paper-and-pencil tests, with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) of persons selected for screening. Our results demonstrate a significant difference in performance, with the additional benefit of fewer persons selected for screening, when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression, with an AUC of 0.775 (0.734) and an average of 34% (48%) of persons selected for screening. However, generalized boosted regression models might be a better option from an economic point of view, as the proportion of persons selected for screening, 30% (47%), is significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing online versions of questionnaires that were primarily developed as classical paper questionnaires.
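The AUC values quoted above are equivalent to the Mann-Whitney probability that a randomly chosen case outscores a randomly chosen non-case. A minimal pairwise implementation, shown for illustration rather than taken from the study's pipeline:

```python
def auc(pos_scores, neg_scores):
    # Mann-Whitney formulation of AUC: the fraction of (positive, negative)
    # pairs ranked correctly, counting ties as half a win
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

For example, perfectly separated scores give an AUC of 1.0, while one inverted pair out of four gives 0.75; production code would use a rank-based O(n log n) variant, but the probability being estimated is the same.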

9.
We explore a hierarchical generalized latent factor model for discrete and bounded response variables, in particular binomial responses. Specifically, we develop a novel two-step estimation procedure, and the corresponding statistical inference, that is computationally efficient and scalable to high dimensions in terms of both the number of subjects and the number of features per subject. We also establish the validity of the estimation procedure, particularly the asymptotic properties of the estimated effect size and latent structure, as well as the estimated number of latent factors. The results are corroborated by a simulation study and, for illustration, the proposed methodology is applied to analyze a dataset from a gene–environment association study.

10.

Background

Knowledge of coping styles is useful in clinical diagnosis and in suggesting specific therapeutic interventions. However, the latent structures of, and relationships between, different aspects of coping styles have not been fully clarified. A full-information item bifactor model will be beneficial to future research.

Objective

One goal of this study is identification of the best-fitting statistical model of coping styles. A second goal entails extended analyses of latent relationships among different coping styles. In general, such research should offer greater understanding of the mechanisms of coping styles and provide insights into coping with stress.

Methods

The Coping Styles Questionnaire (CSQ) and the Generalized Self-Efficacy Scale (GSES) were administered to officers suffering from military stress. Confirmatory factor analysis was performed to identify the best-fitting model. A hierarchical item response model (bifactor model) was adopted to analyze the data. Additionally, correlations among coping styles and self-efficacy were compared using both the original and bifactor models.

Results

Results showed that a bifactor model best fit the data. Item loadings on the general and specific factors varied among different coping styles. All items loaded significantly on the general factor, and most items also had moderate to large loadings on specific factors. The correlation between coping styles and self-efficacy, and the correlations among different coping styles, changed significantly after extracting the general factor of coping stress using bifactor analysis. This was seen in changes from a positive (r = 0.714, p<0.01) correlation to a negative one (r = −0.335, p<0.01), and also from negative (r = −0.296, p<0.01) to positive (r = 0.331, p<0.01).

Conclusion

Our results reveal that coping styles have a bifactor structure. They also provide direct evidence of coexisting coping resources and styles, further clarifying that the dimensions of coping styles should include both coping resources and specific coping styles. This finding has implications for the measurement of coping mechanisms, health maintenance, and stress reduction.

11.
Hama Y, Chano T, Inui T, Matsumoto K, Okabe H. PLoS ONE 2012, 7(3): e32052
RB1-inducible coiled-coil 1 (RB1CC1; also known as FIP200) plays important roles in several biological pathways, such as cell proliferation and autophagy. Evaluation of RB1CC1 expression can provide useful clinical information on various cancers and neurodegenerative diseases. To realize these clinical applications, it is necessary to establish a stable supply of antibody and reproducible procedures for laboratory examinations. In the present study, we generated mouse monoclonal antibodies against RB1CC1; four of these antibodies (N1-8, N1-216, N3-2, and N3-42) were found to be optimal for clinical applications such as ELISA and immunoblots, and they work as well as the pre-existing polyclonal antibodies. The N1-8 monoclonal antibody provided the best recognition of RB1CC1 in the clinico-pathological examination of formalin-fixed paraffin-embedded tissues. These monoclonal antibodies will help create new opportunities in scientific examinations in biology and clinical medicine.

12.
13.
A pervasive cost-benefit problem is how to allocate effort over time, i.e., deciding when to work and when to rest. An economic decision perspective would suggest that the duration of effort is determined beforehand, depending on expected costs and benefits. However, the literature on exercise performance emphasizes that decisions are made on the fly, depending on physiological variables. Here, we propose and validate a general model of effort allocation that integrates these two views. In this model, a single variable, termed cost evidence, accumulates during effort and dissipates during rest, triggering effort cessation and resumption when reaching bounds. We assumed that such a basic mechanism could explain implicit adaptation, whereas the latent parameters (slopes and bounds) could be amenable to explicit anticipation. A series of behavioral experiments manipulating effort duration and difficulty was conducted in a total of 121 healthy humans to dissociate implicit-reactive from explicit-predictive computations. Results show 1) that effort and rest durations are adapted on the fly to variations in cost-evidence level, 2) that the cost-evidence fluctuations driving the behavior do not match explicit ratings of exhaustion, and 3) that actual difficulty impacts effort duration whereas expected difficulty impacts rest duration. Taken together, our findings suggest that cost evidence is implicitly monitored online, with an accumulation rate proportional to actual task difficulty. In contrast, cost-evidence bounds and dissipation rate might be adjusted in anticipation, depending on explicit task difficulty.
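The accumulate-to-bound mechanism described above can be sketched deterministically: cost evidence rises during effort until an upper bound triggers rest, then dissipates back toward zero. The parameter values are arbitrary illustrations, and the real model's stochastic details are omitted:

```python
def effort_rest_cycle(accum_rate, dissip_rate, bound, dt=0.01, n_cycles=3):
    # Cost evidence accumulates during effort up to `bound`, then dissipates
    # during rest back to zero; returns per-cycle (effort, rest) durations.
    c = 0.0
    durations = []
    for _ in range(n_cycles):
        t_effort = 0.0
        while c < bound:               # work until the upper bound is hit
            c += accum_rate * dt
            t_effort += dt
        t_rest = 0.0
        while c > 0.0:                 # rest until cost evidence dissipates
            c -= dissip_rate * dt
            t_rest += dt
        c = 0.0
        durations.append((t_effort, t_rest))
    return durations
```

A higher accumulation rate (harder task) reaches the bound sooner, reproducing the finding that actual difficulty shortens effort duration, while the bound and dissipation rate set the anticipatory margins.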

14.
Laser-capture microdissection (LCM) offers a reliable cell-population enrichment tool and has been successfully coupled to MS analysis. Despite this, most proteomic studies employ whole tissue lysate (WTL) analysis in the discovery of disease biomarkers and in profiling analyses. Furthermore, neither the influence of tissue heterogeneity in WTL analysis nor its impact on biomarker discovery studies has been completely elucidated. To address this, we compared previously obtained high-resolution MS data from a cohort of 38 breast cancer tissues, for which both LCM-enriched tumor epithelial cells and WTL samples were analyzed. Label-free quantification (LFQ) analysis with MaxQuant software showed a significantly higher number of identified and quantified proteins in LCM-enriched samples (3404) compared to WTLs (2837). Furthermore, WTL samples displayed more missing data compared to LCM at both the peptide and protein levels (p-value < 0.001). 2D analysis of co-expressed proteins revealed discrepant expression of immune-system- and lipid-metabolism-related proteins between LCM and WTL samples. We hereby show that LCM better dissected the biology of breast tumor epithelial cells, possibly due to lower interference from surrounding tissues and highly abundant proteins. All data have been deposited in ProteomeXchange with the dataset identifier PXD002381 (http://proteomecentral.proteomexchange.org/dataset/PXD002381).

15.
In the present paper, the linear logistic extension of latent class analysis is described. It is assumed that the item latent probabilities, as well as the class sizes, can be attributed to some explanatory variables. The basic equations of the model state the decomposition of the log-odds of the item latent probabilities and of the class sizes into weighted sums of basic parameters representing the effects of the predictor variables. Further, the maximum likelihood equations for these effect parameters and statistical tests for goodness-of-fit are given. Finally, an example illustrates the practical application of the model and the interpretation of the model parameters.
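The basic equation, log-odds decomposed into a weighted sum of effect parameters, can be written out directly. This is a generic sketch of the decomposition with invented weights, not the paper's worked example:

```python
import math

def item_latent_prob(effects, weights):
    # Log-odds of an item latent probability decomposed as a weighted sum
    # of basic effect parameters; the inverse-logit maps back to a probability
    logit = sum(w * e for w, e in zip(weights, effects))
    return 1.0 / (1.0 + math.exp(-logit))
```

With all effects zero the log-odds vanish and the latent probability is 0.5; increasing any positively weighted effect raises the probability monotonically, which is what makes the effect parameters directly interpretable.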

16.
We investigate models for animal feeding behaviour, with the aim of improving understanding of how animals organise their behaviour in the short term. We consider three classes of model: hidden Markov, latent Gaussian, and semi-Markov. Each can predict the typical 'clustered' feeding behaviour that is generally observed; however, they differ in the extent to which 'memory' of previous behaviour is allowed to affect future behaviour. The hidden Markov model has 'lack of memory', the current behavioural state being dependent on the previous state only. The latent Gaussian model assumes feeding/non-feeding periods to occur by the thresholding of an underlying continuous variable, thereby incorporating some 'short-term memory'. The semi-Markov model, by taking into account the duration of time spent in the previous state, can be said to incorporate 'longer-term memory'. We fit each of these models to a dataset of cow feeding behaviour. We find the semi-Markov model (longer-term memory) to have the best fit to the data and the hidden Markov model (lack of memory) the worst. We argue that, in view of the effects of satiety on the short-term feeding behaviour of animal species in general, biologically suitable models should allow 'memory' to play a role. We conclude that our findings are equally relevant for the analysis of other types of short-term behaviour that are governed by satiety-like principles.
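The memory distinction above comes down to dwell-time distributions: a hidden Markov model forces geometric (memoryless) dwell times in each state, whereas a semi-Markov model lets them follow an arbitrary distribution. A small numpy sketch with illustrative parameters, not values fitted to the cow data:

```python
import numpy as np

rng = np.random.default_rng(3)
p_stay = 0.8                        # self-transition probability of 'feeding'
n = 200_000

# Hidden Markov model: dwell time in a state is geometric, mean 1/(1 - p_stay),
# and its mode is always at 1 time step (memoryless)
dwell_hmm = rng.geometric(1.0 - p_stay, size=n)

# Semi-Markov model: the dwell time gets an explicit distribution instead,
# e.g. a gamma with the same mean but its mode away from 1
mean_dwell = 1.0 / (1.0 - p_stay)   # = 5 time steps on average
dwell_smm = rng.gamma(shape=4.0, scale=mean_dwell / 4.0, size=n)
```

Both models can match the average bout length, but only the semi-Markov dwell distribution can place low probability on very short bouts, the kind of satiety-driven 'memory' the paper argues for.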

17.
The integro-differential growth model of Eakman, Fredrickson, and Tsuchiya has been employed to fit cell size distribution data for Schizosaccharomyces pombe grown in a chemostat under severe product inhibition by ethanol. The distributions were obtained with a Coulter aperture and an electronic system patterned after that of Harvey and Marr. Four parameters (mean cell division size, cell division size standard deviation, daughter cell size standard deviation, and a growth rate coefficient) were calculated for models in which the cell growth rate was inversely proportional to size, constant, or proportional to size. A fourth model, in which sigmoidal growth behavior was simulated by two linear growth segments, was also investigated. The linear and sigmoidal models fit the distribution data best. While the mean cell division size remained relatively constant at all growth rates, the standard deviation of the division size distribution increased with increasing holding times. The standard deviation of the daughter size distribution remained small at all dilution rates. Unlike previous findings with other organisms, the average cell size of Schizosaccharomyces pombe increased at low growth rates.

18.
Genomics 2019, 111(6): 1387-1394
To decipher the genetic architecture of human disease, various types of omics data are generated. Two common types are genotypes and gene expression. Often, genotype data for a large number of individuals and gene expression data for only a few individuals are generated, due to biological and technical reasons, leading to unequal sample sizes across omics data. The lack of a standard statistical procedure for integrating such datasets motivates us to propose a two-step multi-locus association method using latent variables. Our method is more powerful than single or separate omics data analysis, and it comprehensively unravels deep-seated signals through a single statistical model. Extensive simulation confirms that it is robust to various genetic models, with power increasing with sample size and the number of associated loci. It provides p-values quickly. Application to a real dataset on psoriasis identifies 17 novel SNPs, functionally related to psoriasis-associated genes, at a much smaller sample size than standard GWAS.

19.

Background

Heart failure patients with reduced ejection fraction (HFREF) are heterogeneous, and our ability to identify patients likely to respond to therapy is limited. We present a method of identifying disease subtypes using high-dimensional clinical phenotyping and latent class analysis that may be useful in personalizing prognosis and treatment in HFREF.

Methods

A total of 1121 patients with nonischemic HFREF from the β-blocker Evaluation of Survival Trial were categorized according to 27 clinical features. Latent class analysis was used to generate two latent class models, LCM A and B, to identify HFREF subtypes. LCM A consisted of features associated with HF pathogenesis, whereas LCM B consisted of markers of HF progression and severity. The Seattle Heart Failure Model (SHFM) Score was also calculated for all patients. Mortality, improvement in left ventricular ejection fraction (LVEF) defined as an increase in LVEF ≥5% and a final LVEF of 35% after 12 months, and effect of bucindolol on both outcomes were compared across HFREF subtypes. Performance of models that included a combination of LCM subtypes and SHFM scores towards predicting mortality and LVEF response was estimated and subsequently validated using leave-one-out cross-validation and data from the Multicenter Oral Carvedilol Heart Failure Assessment Trial.

Results

A total of 6 subtypes were identified using LCM A and 5 subtypes using LCM B. Several subtypes resembled familiar clinical phenotypes. Prognosis, improvement in LVEF, and the effect of bucindolol treatment differed significantly between subtypes. Prediction improved with addition of both latent class models to SHFM for both 1-year mortality and LVEF response outcomes.

Conclusions

The combination of high-dimensional phenotyping and latent class analysis identifies subtypes of HFREF with implications for prognosis and response to specific therapies that may provide insight into mechanisms of disease. These subtypes may facilitate development of personalized treatment plans.

20.
We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single parameter IRT models (such as the Rasch model) but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales) are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.
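A binomial error model with a Beta-distributed latent probability yields the beta-binomial distribution for the number of keyed responses. A minimal sketch of its pmf, shown as background for the model class rather than the authors' extended model:

```python
import math

def beta_binomial_pmf(k, n, a, b):
    # P(X = k) out of n items when the latent success probability is Beta(a, b):
    # C(n, k) * B(k + a, n - k + b) / B(a, b), computed in log space for stability
    log_p = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
             + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    return math.exp(log_p)
```

With a = b = 1 the latent Beta is uniform and every score 0..n is equally likely (probability 1/(n+1)); the two Beta parameters are exactly the flexibility the abstract highlights for modeling group-invariant latent distributions.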
