Similar Articles
20 similar articles found (search time: 15 ms)
1.

Background

The clinically used methods of pain diagnosis do not allow objective and robust measurement, so physicians must rely on the patient’s self-report of the pain sensation. Verbal scales, visual analog scales (VAS), and numeric rating scales (NRS) are among the most common tools, but they are restricted to patients with normal mental abilities. Instruments also exist for pain assessment in people with verbal and/or cognitive impairments, and in people who are sedated and mechanically ventilated. However, all of these diagnostic methods either have limited reliability and validity or are very time-consuming. In contrast, biopotentials can be analyzed automatically with machine learning algorithms to provide a surrogate measure of pain intensity.

Methods

In this context, we created a database of biopotentials to advance an automated pain recognition system, determine its theoretical testing quality, and optimize its performance. Eighty-five participants were subjected to painful heat stimuli (baseline, pain threshold, two intermediate thresholds, and pain tolerance threshold) under controlled conditions, and electromyography, skin conductance level, and electrocardiography signals were recorded. A total of 159 features were extracted, organized into the mathematical groupings of amplitude, frequency, stationarity, entropy, linearity, variability, and similarity.

Results

We achieved classification rates of 90.94% for baseline vs. pain tolerance threshold and 79.29% for baseline vs. pain threshold. The most frequently selected pain features stemmed from the amplitude and similarity groups and were derived from facial electromyography.

Conclusion

The machine learning measurement of pain in patients could provide valuable information for a clinical team and thus support treatment assessment.
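The feature groupings named in the abstract can be made concrete with a short sketch. This is not the authors' 159-feature set; the four features below (peak-to-peak amplitude, RMS, standard deviation, and histogram entropy) are common textbook members of the amplitude, variability, and entropy groups, chosen here only for illustration.

```python
import math

def extract_features(signal):
    """Illustrative sketch of a few biosignal features from the amplitude,
    variability, and entropy groups (definitions are assumptions, not the
    authors' specification)."""
    n = len(signal)
    mean = sum(signal) / n
    # amplitude group: peak-to-peak range and root-mean-square
    peak_to_peak = max(signal) - min(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    # variability group: standard deviation
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    # entropy group: Shannon entropy of a coarse amplitude histogram
    bins = 8
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in signal:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"peak_to_peak": peak_to_peak, "rms": rms,
            "std": std, "entropy": entropy}
```

In a pipeline like the one described, each stimulus window would yield one such feature vector per channel, which is then fed to the classifier.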

2.
The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, rest on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals have been established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to models with unfamiliar faces for longer and from a greater distance than to models with familiar faces. These results strongly suggest that fish can accurately distinguish individuals using facial colour patterns. Our observations also suggest that fish can rapidly (≤ 0.5 s) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to that of primates, including humans.

3.

Background

Research suggests that interaction between humans and digital environments constitutes a form of companionship in addition to technical convenience. To this end, researchers have attempted to design computer systems able to demonstrably empathize with the human affective experience. Facial electromyography (EMG) is one technique that enables machines to access human affective states. Numerous studies have investigated the effects of emotional valence on facial EMG activity captured over the corrugator supercilii (frowning muscle) and zygomaticus major (smiling muscle). Arousal, however, has received far less research attention. In the present study, we sought to identify intensive valence and arousal affective states via facial EMG activity.

Methods

Ten blocks of affective pictures were separated into five categories: neutral valence/low arousal (0VLA), positive valence/high arousal (PVHA), negative valence/high arousal (NVHA), positive valence/low arousal (PVLA), and negative valence/low arousal (NVLA), and the ability of each to elicit the corresponding valence and arousal affective states was investigated at length. One hundred and thirteen participants viewed these stimuli while facial EMG was recorded. A set of 16 features based on the amplitude, frequency, predictability, and variability of the signals was defined and classified using a support vector machine (SVM).

Results

We observed highly accurate classification rates based on the combined corrugator and zygomaticus EMG, ranging from 75.69% to 100.00% for the baseline and five affective states (0VLA, PVHA, PVLA, NVHA, and NVLA) in all individuals. There were significant differences in classification rate accuracy between senior and young adults, but there was no significant difference between female and male participants.

Conclusion

Our research provides robust evidence for the recognition of intensive valence and arousal affective states in young and senior adults. These findings support the future application of facial EMG for identifying user affective states in human-machine interaction (HMI) or companion robotic systems (CRS).
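The classification step can be sketched with a deliberately simple stand-in: the study used an SVM on 16-dimensional feature vectors, but a nearest-centroid classifier shows the same pipeline shape (fit one prototype per affective state, assign a new trial to the closest one) without any library dependency. The toy two-dimensional feature vectors and state labels below are invented for illustration.

```python
def nearest_centroid_fit(X, y):
    """Average the feature vectors of each label into a centroid."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def nearest_centroid_predict(centroids, features):
    """Assign a trial to the state whose centroid is nearest (squared Euclidean)."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda lab: dist2(centroids[lab]))
```

An SVM replaces the centroid rule with a maximum-margin decision boundary, but the fit/predict interface is the same.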

4.

Background

The ‘broader autism phenotype’ (BAP) refers to the mild expression of autistic-like traits in relatives of individuals with autism spectrum disorder (ASD). Establishing which traits are present in relatives provides insight into which traits are heritable in ASD. Here, the ability to recognise facial identity was tested in 33 parents of children with ASD.

Methodology and Results

In experiment 1, parents of ASD children completed the Cambridge Face Memory Test (CFMT) and a questionnaire assessing autistic personality traits. The parents, particularly the fathers, were impaired on the CFMT, but there were no associations between face recognition ability and autistic personality traits. In experiment 2, parents and probands completed equivalent versions of a simple face-matching test. On this task the parents were not impaired relative to typically developing controls; however, the proband group was impaired. Crucially, the mothers’ face-matching scores correlated with the probands’, even when performance on an equivalent test of matching non-face stimuli was controlled for.

Conclusions and Significance

Components of face recognition ability are impaired in some relatives of individuals with ASD. The results suggest that face recognition skills are heritable in ASD; genetic and environmental factors that could account for the pattern of heritability are discussed. More generally, the results demonstrate the importance of assessing the proband’s skill level when investigating particular characteristics of the BAP.

5.
Research has shown that adults’ recognition of a facial part can be disrupted if the part is learnt without a face context but tested in a whole face. This has been interpreted as a holistic interference effect. The present study investigated whether 6-year-old and 9–10-year-old children would show a similar effect. Participants were asked to judge whether a probe part was the same as or different from a test part, where the part was presented either in isolation or in a whole face. The results showed that while all groups were susceptible to holistic interference, the youngest group was most severely affected. Contrary to the view that piecemeal processing precedes holistic processing in cognitive development, our findings demonstrate that holistic processing is already present at 6 years of age. It is the ability to inhibit the influence of holistic information on piecemeal processing that appears to require a longer period of development, continuing into older childhood and adulthood.

6.
吴云  胡涛  杨玲  查文  曹荔 《生物磁学》2013,(34):6719-6722
Objective: To explore the patterns and diagnostic value of ultrasound in detecting congenital fetal malformations. Methods: 480 malformed fetuses from singleton pregnancies, confirmed after delivery or induced labor, were randomly selected, and their prenatal ultrasound records were retrospectively analysed (mean maternal age 27.6 years, mean gestational age 27.1 weeks). Malformations were classified according to the surveillance categories of the Chinese Birth Defects Monitoring Report Card and the malformation types in the Clinical Technical Operation Specifications (Ultrasound Medicine volume). Results: 459 congenital malformations were detected by ultrasound; 21 were missed and identified as malformed only from postnatal follow-up. Among the 459 detected cases (gestational age 12–40 weeks), 24.4% (112/459) were detected at 18–24 weeks and 56.9% (261/459) at 12–28 weeks, with week 24 yielding the most detections (42/459); 35.7% (164/459) were detected at 28–36 weeks. Conclusion: The diagnostic concordance rate of ultrasound for fetal malformations was 95.62% (459/480). Most detectable fetal malformations can be identified before 28 weeks of gestation. Ultrasound has very high value in prenatal diagnosis: a high diagnostic rate, no injury to the patient, simple operation, and good repeatability.

7.
Objective: To explore the clinical diagnostic value of prenatal B-mode ultrasound for complete transposition of the great arteries (TGA). Methods: The B-ultrasound features of 4 fetuses with complete TGA treated in our hospital from February 2012 to November 2015 were retrospectively analysed and compared with the pathological findings. Results: Of the 4 fetuses with complete TGA, 1 had a ventricular septal defect and 3 had a normal four-chamber view. In the left and right ventricular outflow tract views, all 4 fetuses showed abnormal ventriculo-arterial connections, and 2 showed defects of the membranous ventricular septum. In the three-vessel-trachea view, only 2 vessels were visible in all 4 fetuses. Conclusion: Examination of the left and right ventricular outflow tract views and the three-vessel-trachea view reveals distinct B-ultrasound features of complete TGA. Prenatal B-ultrasound diagnosis of complete TGA has good clinical value and is recommended for wider application in medical institutions.

8.
Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. We predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or prevention focus was experimentally induced, and facial emotion recognition was better under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured with eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation durations on the face, reflecting a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus affected neither perceptual processing nor facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

9.

Background

Findings of behavioral studies on facial emotion recognition in Parkinson’s disease (PD) are very heterogeneous. Therefore, the present investigation additionally used functional magnetic resonance imaging (fMRI) to compare brain activation during emotion perception between PD patients and healthy controls.

Methods and Findings

We included 17 nonmedicated, nondemented PD patients with mild to moderate symptoms and 22 healthy controls. Participants were shown pictures of facial expressions depicting disgust, fear, sadness, and anger, and completed scales assessing affective traits. The patients did not report lower intensities for the displayed target emotions and showed rating accuracy comparable to that of the control participants. The questionnaire scores did not differ between patients and controls. The fMRI data showed similar activation in both groups, except for a generally stronger recruitment of somatosensory regions in the patients.

Conclusions

Since somatosensory cortices are involved in the simulation of an observed emotion, an important mechanism for emotion recognition, future studies should focus on activation changes within this region over the course of the disease.

10.
《Current biology : CB》2014,24(7):738-743

11.
Face recognition has emerged as the fastest-growing biometric technology and has expanded considerably in the last few years. Many new algorithms and commercial systems have been proposed and developed, most of which use Principal Component Analysis (PCA) as their basis. Researchers comparing these algorithms have reported differing and even conflicting results. The purpose of this study is to provide an independent comparative analysis, considering both recognition performance and computational complexity, of six appearance-based face recognition algorithms, namely PCA, 2DPCA, A2DPCA, (2D)2PCA, LPP, and 2DLPP, under equal working conditions. The study was motivated by the lack of an unbiased, comprehensive comparison of recent subspace methods with diverse distance-metric combinations. For comparability with other studies, the FERET, ORL, and YALE databases were used, with evaluation criteria following the FERET evaluations, which closely simulate real-life scenarios. Results are compared with previous studies and anomalies are reported. An important contribution of this study is that it identifies the conditions under which each algorithm performs best.
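A minimal sketch of the PCA ("eigenfaces") pipeline that the compared subspace methods build on, assuming rows of flattened face images and a nearest-neighbour match in the subspace; the tiny four-vector "gallery" in the usage test is synthetic, not data from any of the benchmarked databases.

```python
import numpy as np

def pca_fit(X, k):
    """PCA via SVD on mean-centred data: rows of X are flattened face
    images; returns the mean face and the top-k principal axes."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(mean, axes, x):
    """Encode one image as its k subspace coefficients."""
    return axes @ (x - mean)

def match(gallery_codes, labels, probe_code):
    """Nearest neighbour in the subspace, the usual recognition step
    (the study additionally varies the distance metric)."""
    d = np.linalg.norm(gallery_codes - probe_code, axis=1)
    return labels[int(np.argmin(d))]
```

The 2D variants (2DPCA, (2D)2PCA, 2DLPP) differ mainly in operating on image matrices rather than flattened vectors, which changes the covariance computation but not this overall fit/project/match structure.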

12.
Sandhya  G.  Prakash  H. P.  Nayak  K. R  Behere  R. V.  Bhandary  P. R.  Chinmay  A. S. 《Neurophysiology》2019,51(1):43-50
Neurophysiology - In the identification of facial expressions related to certain emotions, certain parameters of event-related potentials (ERPs) can be interpreted as the respective indices…

13.
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, the benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants with that of trained passport officers, who use the system in their daily work, and found equivalent performance in the two groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed both groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not overcome these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

14.
Objective: To explore the diagnostic value of ultrasound combined with chromosome testing for fetal cardiovascular malformations. Methods: 117 high-risk pregnant women treated at our hospital from June 2017 to December 2020 were selected; all underwent fetal echocardiography and amniocentesis karyotyping to assess fetal cardiovascular malformations. Results: Fetal echocardiography detected cardiovascular malformations in 37 of the 117 fetuses (31.6%); the three most common were ventricular septal defect, persistent left superior vena cava, and aberrant right subclavian artery. Amniocentesis identified 32 fetuses with chromosomal abnormalities (27.4%), comprising 30 numerical and 2 structural abnormalities, the three most common being trisomy 21, trisomy 13, and trisomy 18. Of the 37 fetuses with ultrasound-detected cardiovascular malformations, 30 had chromosomal abnormalities; of the 80 with normal cardiovascular ultrasound, 2 had chromosomal abnormalities (P<0.05). Combined testing diagnosed 39 cases of fetal cardiovascular malformation, with 40 confirmed at follow-up; the sensitivity and specificity of combined testing were 100.0% (39/39) and 98.7% (77/78). Conclusion: Fetal echocardiography combined with chromosome testing has high sensitivity and specificity for diagnosing fetal cardiovascular malformations, can maximize the detection rate of birth defects, and has good application value.
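The reported sensitivity and specificity follow directly from confusion counts; a small helper makes the arithmetic explicit, using the counts exactly as the abstract reports them (39 true positives with no false negatives counted, 77 true negatives, 1 false positive).

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts as reported in the abstract above.
sens, spec = sensitivity_specificity(tp=39, fn=0, tn=77, fp=1)
```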

15.
A method is reported by which the presence or absence of fetal heart movement may be reliably detected by an abdominal approach from the 48th day of pregnancy onwards (menstrual age). The technique uses commercially available diagnostic sonar apparatus with two display and time-position modes in combination. A series of 106 examinations on 56 patients in early pregnancy is presented, in which there were no false results.

16.
Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, with ERP components as predictor variables and hits and reaction times in response to the facial expressions as dependent variables, was performed. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies predicted accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time, and the P2 latencies predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.
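The multiple-regression analysis described above can be sketched as ordinary least squares with the ERP components as predictor columns and a behavioural measure as the outcome. The numbers in the usage test are synthetic stand-ins, not the study's data.

```python
import numpy as np

def fit_multiple_regression(predictors, outcome):
    """Ordinary least squares with an intercept column: the core of a
    multiple regression with ERP components as predictor variables.
    Returns [intercept, slope_1, ..., slope_k]."""
    X = np.column_stack([np.ones(len(predictors)), predictors])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coef
```

Each fitted slope plays the role of a regression coefficient for one ERP component (e.g. N170 amplitude) predicting accuracy or reaction time.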

17.
Research on the Snake Model in Ultrasound Image Processing   (cited: 3; self-citations: 0; citations by others: 3)
The Snake model is an effective contour-extraction algorithm based on high-level information; its advantage is that both the working process and the final result yield the target contour as a complete curve, which has attracted wide attention. Because medical ultrasound images have a low signal-to-noise ratio, classical edge-detection algorithms cannot produce good results, so many variants of the Snake model have been proposed and are increasingly applied to medical ultrasound image processing. In this paper, after a series of preprocessing steps on breast ultrasound images (threshold segmentation, morphological filtering, etc.), an improved Snake model was used to extract tumour boundaries, with good results.
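The preprocessing stage described above (threshold segmentation followed by morphological filtering) can be sketched on a toy image before any Snake iteration; the 3x3 structuring element and the intensity threshold below are illustrative assumptions, not the paper's parameters.

```python
def threshold(img, t):
    """Binary threshold: 1 where pixel intensity exceeds t."""
    return [[1 if v > t else 0 for v in row] for row in img]

def erode(mask):
    """3x3 binary erosion: keep a pixel only if its full neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 binary dilation: set a pixel if any neighbour is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def morphological_open(mask):
    """Opening (erosion then dilation) removes speckle-like noise while
    preserving larger regions, a common step before contour extraction."""
    return dilate(erode(mask))
```

The cleaned binary mask would then supply the initial region from which a Snake contour is initialised and iteratively refined.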

18.
Objective: To explore the sonographic features and diagnostic value of ultrasound for fetal limb malformations. Methods: A sequential segmental tracing approach was used to screen for fetal limb malformations in 66,342 pregnant women at 12–40 weeks of gestation. Prenatal ultrasound diagnoses were compared with findings after induced labor or delivery. Results: 271 cases of limb malformation occurred, an incidence of 0.41% (271/66,342), comprising short limbs in 5 cases, radial aplasia in 1, limb absence in 5, clubfoot in 17, hand malformation in 3, finger/toe malformation in 222, and multiple skeletal malformations in 18. Of these, 49 fetal limb malformations were diagnosed prenatally and 222 were missed, including clubfoot in 3 cases, finger/toe malformation in 218, and multiple skeletal malformations in 1. The occurrence and prenatal detection rates of each malformation were: short limbs 1.84% (5/271), 100% (5/5); radial aplasia 0.36% (1/271), 100% (1/1); limb absence 1.84% (5/271), 100% (5/5); clubfoot 6.27% (17/271), 82.35% (14/17); hand malformation 1.10% (3/271), 100% (3/3); finger/toe malformation 81.92% (222/271), 1.8% (4/222); multiple skeletal malformations 6.64% (18/271), 94.44% (17/18). Conclusion: Ultrasound has a high detection rate for malformations proximal to the hands and feet. Finger/toe malformations are the most common but have the lowest detection rate.

20.
徐颖  宋晓梅  张锐利  王丹  沈娟  许媛媛 《生物磁学》2013,(34):6715-6718
Objective: To summarize ultrasound and chromosome karyotype findings in fetuses with limb malformations, and to analyse the correlation between abnormal ultrasound findings and chromosomal abnormalities. Methods: Healthy volunteers at 18–32 weeks of gestation served as the control group (group A), and cases with abnormal ultrasound findings during the same period formed the observation group (group B); both groups underwent ultrasound and chromosome examination. Karyotype detection rates were compared, and logistic regression was performed with fetal ultrasound findings as independent variables and each karyotype as the dependent variable. Results: The detection rate of chromosomal abnormalities in group A was 2.1%; in group B it was significantly higher, with abnormal karyotypes dominated by 47,XY,+18, 47,XX,+21, and 47,XXX (P<0.05). Logistic regression gave odds ratios for abnormal karyotype of 6.332 for disproportionate limbs, 7.404 for short limbs, and 5.981 for non-functional limb position (all P<0.05). Conclusion: The detection rate of chromosomal abnormalities is markedly higher in fetuses with limb malformations than in the healthy population, and abnormal karyotypes correlate positively with the ultrasound findings of disproportionate limbs, short limbs, and non-functional limb position.
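The odds ratios reported above come from logistic regression, but for a single binary ultrasound finding the unadjusted odds ratio reduces to the familiar 2x2-table cross-product. The counts in the usage test are invented for illustration, since the abstract reports only the resulting ORs.

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio from a 2x2 table:
    (a/b) / (c/d) = a*d / (b*c), where 'exposed' here would mean
    'abnormal ultrasound finding present' and 'case' means 'abnormal karyotype'."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
```

A logistic regression coefficient for a binary predictor exponentiates to exactly this quantity when no other covariates are in the model.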


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号