Similar Articles
20 similar articles retrieved.
1.

Background  

The molecular basis for the genetic risk of ischemic stroke is likely to be multigenic and influenced by environmental factors. Several small case-control studies have suggested associations between ischemic stroke and polymorphisms of genes that code for coagulation cascade proteins and platelet receptors. Our aim is to investigate potential associations between hemostatic gene polymorphisms and ischemic stroke, with particular emphasis on detailed characterization of the phenotype.

2.

Background

Observational studies have reported higher mortality for patients admitted on weekends. It is not known whether this “weekend effect” is modified by clinical staffing levels on weekends. We aimed to test the hypotheses that rounds by stroke specialist physicians 7 d per week and the ratio of registered nurses to beds on weekends are associated with mortality after stroke.

Methods and Findings

We conducted a prospective cohort study of 103 stroke units (SUs) in England. Data on 56,666 patients with stroke admitted between 1 June 2011 and 1 December 2012 were extracted from a national register of stroke care in England. SU characteristics and staffing levels were derived from a cross-sectional survey. Cox proportional hazards models were used to estimate hazard ratios (HRs) of 30-d post-admission mortality, adjusting for case mix, organisational, staffing, and care quality variables. After adjusting for confounders, there was no significant difference in mortality risk for patients admitted to a stroke service with stroke specialist physician rounds fewer than 7 d per week (adjusted HR [aHR] 1.04, 95% CI 0.91–1.18) compared to patients admitted to a service with rounds 7 d per week. There was a dose–response relationship between weekend nurse/bed ratios and mortality risk, with the highest risk of death observed in stroke services with the lowest nurse/bed ratios. In multivariable analysis, patients admitted on a weekend to a SU with 1.5 nurses/ten beds had an estimated adjusted 30-d mortality risk of 15.2% (aHR 1.18, 95% CI 1.07–1.29) compared to 11.2% for patients admitted to a unit with 3.0 nurses/ten beds (aHR 0.85, 95% CI 0.77–0.93), equivalent to one excess death per 25 admissions. The main limitation is the risk of confounding from unmeasured characteristics of stroke services.
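To make the modelling approach concrete, here is a minimal sketch of a Cox proportional hazards analysis of 30-day post-admission mortality using the Python lifelines library; the data are simulated and the variable names (nurse_bed_ratio_weekend, specialist_rounds_7d) are hypothetical stand-ins, not the registry's actual fields.

```python
# Hedged sketch (not the study's code): Cox proportional hazards model for
# 30-day post-admission mortality with staffing covariates, on simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "nurse_bed_ratio_weekend": rng.uniform(1.5, 3.0, n),   # nurses per ten beds
    "specialist_rounds_7d": rng.integers(0, 2, n),          # 1 = rounds 7 d/wk
    "age": rng.normal(75, 10, n),
})
# Simulated follow-up censored at 30 days; event = death within follow-up.
hazard = 0.004 * np.exp(0.03 * (df["age"] - 75) - 0.3 * (df["nurse_bed_ratio_weekend"] - 2))
time_to_death = rng.exponential(1 / hazard)
df["time_days"] = np.minimum(time_to_death, 30)
df["died"] = (time_to_death <= 30).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs per covariate
```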

Conclusions

Mortality outcomes after stroke are associated with the intensity of weekend staffing by registered nurses but not with 7-d/wk ward rounds by stroke specialist physicians. The findings have implications for quality improvement and resource allocation in stroke care.

3.
The ubiquity of the internet and computer-based technologies has an increasing impact on higher education and the way students access information for learning. However, there is a paucity of information about the quantitative and qualitative use of learning media by the current student generation. In this study we systematically analyzed the use of digital and non-digital learning resources by undergraduate medical students. Daily online surveys and semi-structured interviews were conducted with a cohort of 338 third-year medical students enrolled in a general pharmacology course. Our data demonstrate a predominant use of digital over non-digital learning resources (69 ± 7% vs. 31 ± 7%; p < 0.01) by students. The most-used learning media were lecture slides (26.8 ± 3.0%), apps (22.0 ± 3.7%) and personal notes (15.5 ± 2.7%), followed by textbooks (> 300 pages) (10.6 ± 3.3%), internet search (7.9 ± 1.6%) and e-learning cases (7.6 ± 3.0%). When comparing learning media use during teaching versus pre-exam self-study periods, textbooks were used significantly less during self-study (-55%; p < 0.01), while exam questions (+334%; p < 0.01) and e-learning cases (+176%; p < 0.01) were utilized more. Taken together, our study revealed a high prevalence and acceptance of digital learning resources, in particular mobile applications, among undergraduate medical students.

4.

Background

Pre-eclampsia/eclampsia are leading causes of maternal mortality and morbidity, particularly in low- and middle-income countries (LMICs). We developed the miniPIERS risk prediction model to provide a simple, evidence-based tool to identify pregnant women in LMICs at increased risk of death or major hypertensive-related complications.

Methods and Findings

From 1 July 2008 to 31 March 2012, in five LMICs, data were collected prospectively on 2,081 women with any hypertensive disorder of pregnancy admitted to a participating centre. Candidate predictors collected within 24 hours of admission were entered into a step-wise backward elimination logistic regression model to predict a composite adverse maternal outcome within 48 hours of admission. Model internal validation was accomplished by bootstrapping, and external validation was completed using data from 1,300 women in the Pre-eclampsia Integrated Estimate of RiSk (fullPIERS) dataset. Predictive performance was assessed for calibration, discrimination, and stratification capacity. The final miniPIERS model included: parity (nulliparous versus multiparous); gestational age on admission; headache/visual disturbances; chest pain/dyspnoea; vaginal bleeding with abdominal pain; systolic blood pressure; and dipstick proteinuria. The miniPIERS model was well-calibrated and had an area under the receiver operating characteristic curve (AUC ROC) of 0.768 (95% CI 0.735–0.801) with an average optimism of 0.037. External validation AUC ROC was 0.713 (95% CI 0.658–0.768). Using a predicted probability ≥25% to define a positive test classified women with 85.5% accuracy. Limitations of this study include the composite outcome and the broad inclusion criteria of any hypertensive disorder of pregnancy. This broad approach was used to optimize model generalizability.
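The model-building step described above can be illustrated with a small sketch of backward-elimination logistic regression followed by an apparent AUC. The data are simulated, the predictor names are hypothetical stand-ins, and the bootstrap optimism correction is omitted for brevity; this is not the miniPIERS code.

```python
# Hedged sketch (not the miniPIERS code): backward-elimination logistic
# regression and apparent AUC on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "nulliparous": rng.integers(0, 2, n),
    "gestational_age": rng.normal(34, 4, n),
    "headache_visual": rng.integers(0, 2, n),
    "systolic_bp": rng.normal(150, 20, n),
    "dipstick_proteinuria": rng.integers(0, 4, n),
})
logit = -14 + 0.5 * df["headache_visual"] + 0.07 * df["systolic_bp"] + 0.3 * df["dipstick_proteinuria"]
df["adverse_outcome_48h"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def backward_eliminate(data, outcome, predictors, p_threshold=0.05):
    """Drop the least significant predictor until all remaining p-values < threshold."""
    kept = list(predictors)
    while True:
        X = sm.add_constant(data[kept])
        model = sm.Logit(data[outcome], X).fit(disp=0)
        pvals = model.pvalues.drop("const")
        if pvals.empty or pvals.max() < p_threshold:
            return model, kept
        kept.remove(pvals.idxmax())

predictors = ["nulliparous", "gestational_age", "headache_visual",
              "systolic_bp", "dipstick_proteinuria"]
model, kept = backward_eliminate(df, "adverse_outcome_48h", predictors)
pred = model.predict(sm.add_constant(df[kept]))
print("Retained predictors:", kept)
print("Apparent AUC ROC:", round(roc_auc_score(df["adverse_outcome_48h"], pred), 3))
```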

Conclusions

The miniPIERS model shows reasonable ability to identify women at increased risk of adverse maternal outcomes associated with the hypertensive disorders of pregnancy. It could be used in LMICs to identify women who would benefit most from interventions such as magnesium sulphate, antihypertensives, or transportation to a higher level of care.

5.

Objective

Consultations occur frequently in the emergency department (ED) of tertiary care centres and pose a threat to patient safety because they contribute to ED length of stay (LOS) and overcrowding. The aim of this study was to investigate the reasons for and appropriateness of consultations, and the relative impact of specialty and patient characteristics on the probability of a consultation, as this could help improve the efficiency of ED patient care.

Methods

This prospective cohort study included ED patients presenting to a Dutch tertiary care centre in a setting where ED physicians mostly treat self-referred and undifferentiated patients and other specialists treat referred patients. Consultations were defined as appropriate if the reason for consultation corresponded with the final advice, conclusion or policy of the consulted specialty. Multivariable logistic regression analysis was used to assess the relative contribution of specialty and patient characteristics to the probability of consultation.
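As an illustration of such a multivariable logistic regression step, a brief sketch that fits the model and reports adjusted odds ratios with 95% CIs; the column names (consulted, ed_physician, triage_category, ...) and the simulated data are hypothetical, not the study's dataset.

```python
# Hedged sketch (illustrative only): multivariable logistic regression for the
# probability of a consultation, reported as adjusted odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1434
df = pd.DataFrame({
    "ed_physician": rng.integers(0, 2, n),      # 1 = treated by an ED physician
    "age": rng.normal(55, 20, n),
    "comorbidity": rng.integers(0, 2, n),
    "triage_category": rng.choice(["low", "intermediate", "high"], n),
})
logit = -2.5 + 1.7 * df["ed_physician"] + 0.02 * (df["age"] - 55) + 0.5 * df["comorbidity"]
df["consulted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("consulted ~ ed_physician + age + comorbidity + C(triage_category)",
                  data=df).fit(disp=0)
conf_int = model.conf_int()
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(conf_int[0]),
    "CI_high": np.exp(conf_int[1]),
}).drop("Intercept")
print(odds_ratios.round(2))
```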

Results

Another specialty was consulted in 344 (24%; 95% CI 22 to 26%) of the 1434 included patients, resulting in a 55% increase in ED LOS. ED physicians more often consulted another specialty, with a corrected odds ratio (OR) of 5.6 (4.0 to 7.8), mostly because consultations were mandatory in case of hospitalization or outpatient follow-up. Limited expertise of ED physicians was the reason for consultation in 7% (5 to 9%). The appropriateness of consultations was 84% (81 to 88%), similar between ED physicians and other specialists (P = 0.949). The patient characteristics of age, comorbidity, triage category, and presenting complaint predicted consultation.

Conclusion

In a Dutch tertiary care centre, another specialty was consulted in 24% of ED patients, mostly for an appropriate reason and rarely because of lack of expertise. The impact of consultations on ED LOS could be reduced if mandatory consultations were abolished and predictors of a consultation were used to facilitate timely consultation.

6.

Background

Early diagnostic and prognostic stratification of patients with suspected infection is a difficult clinical challenge. We studied plasma pentraxin 3 (PTX3) upon admission to the emergency department in patients with suspected infection.

Methods

The study comprised 537 emergency room patients with suspected infection: 59 with no systemic inflammatory response syndrome (SIRS) and without bacterial infection (group 1), 67 with bacterial infection without SIRS (group 2), 54 with SIRS without bacterial infection (group 3), 308 with sepsis (SIRS and bacterial infection) without organ failure (group 4) and 49 with severe sepsis (group 5). Plasma PTX3 was measured on admission using a commercial solid-phase enzyme-linked immunosorbent assay (ELISA).

Results

The median PTX3 levels in groups 1–5 were 2.6 ng/ml, 4.4 ng/ml, 5.0 ng/ml, 6.1 ng/ml and 16.7 ng/ml, respectively (p<0.001). The median PTX3 concentration was higher in severe sepsis patients compared to others (16.7 vs. 4.9 ng/ml, p<0.001) and in non-survivors (day 28 case fatality) compared to survivors (14.1 vs. 5.1 ng/ml, p<0.001). A high PTX3 level predicted the need for ICU stay (p<0.001) and hypotension (p<0.001). The AUC ROC was 0.73 (95% CI 0.66–0.81, p<0.001) for predicting severe sepsis and 0.69 (95% CI 0.58–0.79, p<0.001) for predicting case fatality. At a cut-off level of 14.1 ng/ml (the optimal cut-off for severe sepsis), PTX3 showed 63% sensitivity and 80% specificity. At a cut-off level of 7.7 ng/ml (the optimal cut-off for case fatality), PTX3 showed 70% sensitivity and 63% specificity in predicting case fatality on day 28. In multivariate models, high PTX3 remained an independent predictor of severe sepsis and case fatality after adjusting for potential confounders.
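One common way to obtain an AUC and an "optimal" cut-off with its sensitivity and specificity is the Youden index on the ROC curve. The sketch below uses simulated data and is only an illustration of that procedure, not the study's analysis.

```python
# Hedged sketch (illustrative, simulated data): ROC analysis with a cut-off
# chosen by the Youden index.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
severe_sepsis = rng.integers(0, 2, size=300)                      # hypothetical labels
ptx3 = rng.lognormal(mean=1.6 + 0.9 * severe_sepsis, sigma=0.7)   # hypothetical ng/ml levels

auc = roc_auc_score(severe_sepsis, ptx3)
fpr, tpr, thresholds = roc_curve(severe_sepsis, ptx3)
best = np.argmax(tpr - fpr)                                       # Youden's J = sens + spec - 1
print(f"AUC ROC: {auc:.2f}")
print(f"Cut-off: {thresholds[best]:.1f} ng/ml, "
      f"sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%}")
```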

Conclusions

A high PTX3 level on hospital admission predicts severe sepsis and case fatality in patients with suspected infection.

7.

Objectives

This multicenter study examines the performance of the Manchester Triage System (MTS) after changing discriminators, and with the additional use of abnormal vital signs, in patients presenting to pediatric emergency departments (EDs).

Design

International multicenter study

Settings

EDs of two hospitals in the Netherlands (2006–2009), one in Portugal (November–December 2010), and one in the UK (June–November 2010).

Patients

Children (<16 years) triaged with the MTS who presented to the ED.

Methods

Changes to discriminators (MTS 1) and the value of including abnormal vital signs (MTS 2) were studied to test whether they would decrease the number of incorrect assignments. Hospital admission under the modified MTS was compared with admission under the original MTS. Likelihood ratios, diagnostic odds ratios (DORs), and c-statistics were calculated as measures of performance and compared with the original MTS. To calculate likelihood ratios and DORs, the MTS had to be dichotomized into low-urgency and high-urgency categories.
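The dichotomized performance measures can be computed directly from a 2×2 table of urgency level against admission. The counts in the sketch below are invented purely to show the arithmetic; they are not the study's data.

```python
# Hedged sketch: likelihood ratios and the diagnostic odds ratio from a 2x2
# table of dichotomized triage urgency against hospital admission.
def diagnostic_measures(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    dor = lr_pos / lr_neg                      # equals (tp * tn) / (fp * fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "LR+": lr_pos, "LR-": lr_neg, "DOR": dor}

# tp: high urgency and admitted, fp: high urgency not admitted,
# fn: low urgency and admitted,  tn: low urgency not admitted
print(diagnostic_measures(tp=5000, fp=12000, fn=2800, tn=40575))
```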

Results

A total of 60,375 patients were included, of whom 13% were admitted. When MTS 1 was used, admission to hospital increased from 25% to 29% among MTS ‘very urgent’ patients and remained similar at lower MTS urgency levels. The diagnostic odds ratio improved from 4.8 (95% CI 4.5–5.1) to 6.2 (95% CI 5.9–6.6), and the c-statistic remained 0.74. MTS 2 did not improve the performance of the MTS.

Conclusions

MTS 1 performed slightly better than the original MTS. The use of vital signs (MTS 2) did not improve the MTS performance.

8.

Objectives

To determine which early modifiable factors are associated with younger stroke survivors' ability to return to paid work, in a cohort study with 12 months of follow-up conducted in 20 stroke units in the Stroke Services NSW clinical network.

Participants

Participants were aged >17 and <65 years, had a recent (within 28 days) stroke, were able to speak English sufficiently to respond to study questions, and were able to provide written informed consent. Participants with language or cognitive impairment were eligible to participate if a proxy provided consent and completed assessments on their behalf. The main outcome measure was return to paid work during the 12 months following stroke.

Results

Of 441 consented participants (average age 52 years, 68% male, 83% with ischemic stroke), 218 were in paid full-time and 53 in paid part-time work immediately before their stroke; of these, 202 (75%) returned to paid part- or full-time work within 12 months. Being male (or female without a prior activity-restricting illness), being younger, being independent in activities of daily living (ADL) at 28 days after stroke, and having private health insurance were associated with return to paid work, following adjustment for other illnesses and a history of depression before stroke (C statistic 0.81). Work stress and post-stroke depression showed no such independent association.

Conclusions

Given that independence in ADL is the strongest predictor of return to paid work within 12 months of stroke, these data reinforce the importance of reducing stroke-related disability and increasing independence for younger stroke survivors.

Trial Registration

Australian New Zealand Clinical Trials Registry ANZCTRN 12608000459325

9.

Background and Purpose

The risk of stroke after a transient ischemic attack (TIA) is much higher for patients with a positive diffusion-weighted image (DWI), i.e., transient symptoms with infarction (TSI), than for those with a negative DWI. The aim of this study was to validate the predictive value of a web-based recurrence risk estimator (RRE; http://www.nmr.mgh.harvard.edu/RRE/) in patients with TSI.

Methods

Data from the prospective hospital-based TIA database of the First Affiliated Hospital of Zhengzhou University were analyzed. The RRE and ABCD2 scores were calculated within 7 days of symptom onset. The predicted outcome was ischemic stroke occurrence at 90 days. Receiver operating characteristic curves were plotted, and the predictive value of the two models was assessed by computing their C statistics.
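The study compares the two C statistics with a Z-test. As an illustrative alternative built on the same idea (both scores evaluated on the same patients), a paired bootstrap of the AUC difference is sketched below on simulated data; this is not the authors' method.

```python
# Hedged sketch (simulated data): paired bootstrap of the difference in AUC
# between two risk scores measured on the same patients.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 221
stroke_90d = rng.binomial(1, 0.21, size=n)
rre_score = 1.0 * stroke_90d + rng.normal(0, 1.5, size=n)     # hypothetical scores
abcd2_score = 0.3 * stroke_90d + rng.normal(0, 1.5, size=n)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)          # resample patients with replacement
    if stroke_90d[idx].sum() in (0, n):
        continue                              # skip resamples with a single class
    diffs.append(roc_auc_score(stroke_90d[idx], rre_score[idx])
                 - roc_auc_score(stroke_90d[idx], abcd2_score[idx]))

low, high = np.percentile(diffs, [2.5, 97.5])
print(f"AUC(RRE) - AUC(ABCD2), 95% bootstrap CI: ({low:.3f}, {high:.3f})")
```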

Results

A total of 221 eligible patients were prospectively enrolled, of whom 46 (20.81%) experienced a stroke within 90 days. The 90-day stroke risk in high-risk TSI patients (RRE ≥4) was 3.406-fold greater than in those at low risk (P < 0.001). The C statistic of the RRE (0.681; 95% confidence interval [CI], 0.592–0.771) was significantly higher than that of the ABCD2 score (0.546; 95% CI, 0.454–0.638; Z = 2.115; P = 0.0344) at 90 days.

Conclusion

The RRE score had a higher predictive value than the ABCD2 score for assessing the 90-day risk of stroke after TSI.

10.

Objective

To explore the associations between waist-to-height ratio (WHtR), body mass index (BMI) and waist circumference (WC) and risk of ischemic stroke among Mongolian men in China.

Methods

A population-based prospective cohort study was conducted from June 2003 to July 2012 in Inner Mongolia, an autonomous region in north China. A total of 1034 men aged 20 years and older and free of cardiovascular disease were included in the cohort and followed up for an average of 9.2 years. The subjects were divided into four groups by WHtR level (WHtR<0.40, 0.40≤WHtR≤0.50, 0.50<WHtR≤0.60, WHtR>0.60). The cumulative survival rates of ischemic stroke among the four groups were estimated with Kaplan-Meier curves and compared by the log-rank test. Cox proportional hazards models and receiver operating characteristic (ROC) curves were employed to evaluate the associations between obesity indices and ischemic stroke.
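The survival analysis described above can be sketched with the lifelines library: a Kaplan-Meier curve for an elevated WHtR group and a log-rank test against the reference group. The simulated event rates and group sizes are arbitrary; this is not the study's code.

```python
# Hedged sketch (illustrative, simulated data): Kaplan-Meier estimation and a
# log-rank test comparing WHtR > 0.60 against the 0.40-0.50 reference group.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

def simulate_group(n, yearly_rate, max_follow_up=9.2):
    time_to_event = rng.exponential(1 / yearly_rate, size=n)
    time = np.minimum(time_to_event, max_follow_up)
    event = (time_to_event <= max_follow_up).astype(int)
    return time, event

t_high, e_high = simulate_group(n=150, yearly_rate=0.015)  # WHtR > 0.60
t_ref, e_ref = simulate_group(n=400, yearly_rate=0.004)    # 0.40 <= WHtR <= 0.50

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="WHtR>0.60")
print(kmf.survival_function_.tail())

result = logrank_test(t_high, t_ref, event_observed_A=e_high, event_observed_B=e_ref)
print("log-rank p-value:", result.p_value)
```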

Results

A total of 47 cases of ischemic stroke were observed during the follow-up period. The cumulative incidence and incidence density of ischemic stroke were 4.55% and 507.61/100 000 person-years, respectively. After adjustment for major risk factors, individuals with WHtR>0.60 had a 3.56-fold increased risk of ischemic stroke compared with those with 0.40≤WHtR≤0.50. The hazard ratio (HR) for ischemic stroke per 1-SD increase in WHtR was 1.34 (95% CI: 1.00–1.81). After adding BMI or WC to the models, higher WHtR remained significantly associated with increased risk of ischemic stroke. The Kaplan-Meier survival curves showed that the cumulative survival rate in the group with WHtR>0.60 was significantly lower than in the group with 0.40≤WHtR≤0.50 (log-rank test, P = 0.025). The areas under the curve were 0.586 for WHtR, 0.543 for WC, and 0.566 for BMI.

Conclusions

Higher WHtR is associated with risk of ischemic stroke in Mongolian males. WHtR may be useful in predicting ischemic stroke incidence in males.

11.
Measurement of both calprotectin and lactoferrin in faeces has successfully been used to discriminate between functional and inflammatory bowel conditions, but evidence is limited for Clostridium difficile infection (CDI). We prospectively recruited a cohort of 164 CDI cases and 52 controls with antibiotic-associated diarrhoea (AAD). Information on disease severity, duration of symptoms, 30-day mortality and 90-day recurrence as markers of complicated CDI was recorded. Specimens were subject to microbiological culture and PCR-ribotyping. Levels of faecal calprotectin (FC) and lactoferrin (FL) were measured by ELISA. Statistical analysis was conducted using percentile categorisation. ROC curve analysis was employed to determine optimal cut-off values. Both markers were highly correlated with each other (r2 = 0.74) and elevated in cases compared to controls (p<0.0001; ROC>0.85), although we observed a large amount of variability across both groups. The optimal case-control cut-off point was 148 mg/kg for FC and 8.1 ng/µl for FL. Median values for FL in CDI cases were significantly greater in patients suffering from severe disease compared to non-severe disease (104.6 vs. 40.1 ng/µl, p = 0.02), but the difference was not significant for FC (969.3 vs. 512.7 mg/kg, p = 0.09). Neither marker was associated with 90-day recurrence, prolonged CDI symptoms, positive culture results, or colonisation by ribotype 027. Both FC and FL distinguished between CDI cases and AAD controls. Although FL was associated with disease severity in CDI patients, this showed high inter-individual variability and was an isolated finding. Thus, FC and FL are unlikely to be useful as biomarkers of complicated CDI.

12.

Background

Geriatric assessment is an appropriate method for identifying older cancer patients at risk of life-threatening events during therapy. Yet it is underused in practice, mainly because it is time- and resource-consuming. This study aimed to identify the best screening tool for selecting older cancer patients requiring geriatric assessment, by comparing the performance of two short assessment tools: the G8 and the Vulnerable Elders Survey (VES-13).

Patients and Methods

The diagnostic accuracy of the G8 and the VES-13 was evaluated in a prospective cohort study of 1674 cancer patients accrued before treatment in 23 health care facilities, of whom 1435 were eligible and evaluable. Outcome measures were multidimensional geriatric assessment (MGA), sensitivity (primary), specificity, negative and positive predictive values and likelihood ratios of the G8 and VES-13, and predictive factors of the 1-year survival rate.

Results

Median patient age was 78.2 years (range 70–98), the majority were female (69.8%), various types of cancer were represented (including 53.9% breast cancer), and 75.8% had Performance Status 0–1. The proportions of patients with impaired MGA, G8, and VES-13 scores were 80.2%, 68.4%, and 60.2%, respectively. Mean time to complete the G8 or VES-13 was about five minutes. Reproducibility of the two questionnaires was good. The G8 appeared more sensitive (76.5% versus 68.7%, P = 0.0046), whereas the VES-13 was more specific (74.3% versus 64.4%, P<0.0001). An abnormal G8 score (HR = 2.72), advanced stage (HR = 3.30), male sex (HR = 2.69), and poor Performance Status (HR = 3.28) were independent prognostic factors of 1-year survival.

Conclusion

With good sensitivity and independent prognostic value for 1-year survival, the G8 questionnaire is currently one of the best screening tools available to identify older cancer patients requiring geriatric assessment, and we believe it should be implemented broadly in daily practice. Continuous research efforts should be pursued to refine the selection process of older cancer patients before potentially life-threatening therapy.

13.

Background

Prospective evidence on the association between secondhand-smoke exposure and tuberculosis is limited.

Methods

We included 23,827 never smokers from two rounds (2001 and 2005) of the Taiwan National Health Interview Survey. Information on exposure to secondhand smoke at home, as well as other sociodemographic and behavioral factors, was collected through in-person interviews. The participants were prospectively followed for incidence of tuberculosis by cross-matching the survey database to the national tuberculosis registry of Taiwan.

Results

A total of 85 cases of active tuberculosis were identified after a median follow-up of 7.0 years. The prevalence of exposure to secondhand smoke at home was 41.8% in the study population. In the multivariable Cox proportional hazards analysis, secondhand smoke was not associated with active tuberculosis (adjusted hazard ratio [HR], 1.03; 95% CI, 0.64 to 1.64). In the subgroup analysis, the association between secondhand smoke and tuberculosis decreased with increasing age; the adjusted HRs for those <18, ≥18 and <40, ≥40 and <60, and ≥60 years old were 8.48 (0.77 to 93.56), 2.29 (0.75 to 7.01), 1.33 (0.58 to 3.01), and 0.66 (0.35 to 1.23), respectively. Results from extensive sensitivity analyses suggested that potential misclassification of secondhand-smoke exposure would not substantially affect the observed associations.

Conclusions

The results from this prospective cohort study did not support an overall association between secondhand smoke and tuberculosis. However, the finding that adolescents might be particularly susceptible to secondhand smoke's effect warrants further investigation.

14.

Background

The effectiveness of prenatal treatment to prevent serious neurological sequelae (SNSD) of congenital toxoplasmosis is not known.

Methods and Findings

Congenital toxoplasmosis was prospectively identified by universal prenatal or neonatal screening in 14 European centres, and children were followed for a median of 4 years. We evaluated determinants of postnatal death or SNSD, defined by one or more of functional neurological abnormalities, severe bilateral visual impairment, or pregnancy termination for confirmed congenital toxoplasmosis. Two-thirds of the cohort received prenatal treatment (189/293; 65%). Twenty-three of 293 (8%) fetuses developed SNSD, of which nine were pregnancy terminations. Prenatal treatment reduced the risk of SNSD. The odds ratio for prenatal treatment, adjusted for gestational age at maternal seroconversion, was 0.24 (95% Bayesian credible intervals 0.07–0.71). This effect was robust to most sensitivity analyses. The number of infected fetuses needed to be treated to prevent one case of SNSD was three (95% Bayesian credible intervals 2–15) after maternal seroconversion at 10 weeks, and 18 (9–75) at 30 weeks of gestation. Pyrimethamine-sulphonamide treatment did not reduce SNSD compared with spiramycin alone (adjusted odds ratio 0.78, 0.21–2.95). The proportion of live-born infants with intracranial lesions detected postnatally who developed SNSD was 31.0% (17.0%–38.1%).

Conclusion

The finding that prenatal treatment reduced the risk of SNSD in infected fetuses should be interpreted with caution because of the low number of SNSD cases and uncertainty about the timing of maternal seroconversion. As these are observational data, policy decisions about screening require further evidence from a randomized trial of prenatal screening and from cost-effectiveness analyses that take into account the incidence and prevalence of maternal infection.

15.

Background and Purpose

The quality of after-hours emergency care for patients with acute ischemic stroke is debatable. We therefore sought to analyze performance measures, quality of care, and clinical outcomes in patients admitted during off-hours.

Methods

Our study included 4493 patients from a selected cohort of patients admitted to hospital with ischemic stroke in the China National Stroke Registry (CNSR) from September 2007 to August 2008. On-hour presentation was defined as arrival at the emergency department from the scene between 8 AM and 5 PM, Monday through Friday; off-hours comprised all remaining times and statutory holidays. The association between off-hour presentation and outcome was analyzed using multivariate logistic regression models.

Results

Off-hour presentation was identified in 2672 (59.5%) patients with ischemic stroke. Comparison of patients admitted during off-hours with those admitted during on-hours revealed an unadjusted odds ratio for in-hospital mortality of 1.38 (95% confidence interval, 1.04–1.85), which declined to 1.34 (95% confidence interval, 0.93–1.93) after adjusting for patient characteristics (especially pre-hospital delay). No difference between the two groups was observed in 30-day mortality or in total death or dependence at three, six, and 12 months. No association between off-hour admission and quality of care was found.

Conclusions

In the CNSR database, off-hour patients with acute ischemic stroke admitted to the emergency department from the scene had a higher incidence of in-hospital mortality than on-hour patients. However, the differences in mortality and quality of care between the groups disappeared after adjusting for pre-hospital delay and other variables.

16.
Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied to unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and arises in many practical applications, for example in network intrusion detection, fraud detection, and the life science and medical domains. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, in which 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to provide a new, well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings, and the global/local anomaly detection behavior are outlined. In conclusion, we give advice on algorithm selection for typical real-world tasks.
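A minimal sketch of this evaluation setup — unsupervised detectors scored against ground-truth labels that are used only for evaluation, never for training — might look as follows. The detectors (IsolationForest, LOF) and the toy dataset are illustrative choices, not necessarily those used in the paper.

```python
# Hedged sketch: unsupervised anomaly detection evaluated with ROC AUC, where
# labels enter only at evaluation time.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(950, 2))
anomalies = rng.normal(4, 1, size=(50, 2))
X = np.vstack([normal, anomalies])
y = np.r_[np.zeros(950), np.ones(50)]   # ground truth, used for evaluation only

iso = IsolationForest(random_state=0).fit(X)
iso_scores = -iso.score_samples(X)      # higher = more anomalous

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)
lof_scores = -lof.negative_outlier_factor_

print("IsolationForest AUC:", roc_auc_score(y, iso_scores))
print("LOF AUC:            ", roc_auc_score(y, lof_scores))
```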

17.

Background

Several risk stratification systems exist for predicting the mortality of emergency patients. However, some are complex to use clinically and others have been developed using suboptimal methodology. The objective was to evaluate the capability of the staff at a medical admission unit (MAU) to use clinical intuition to predict the in-hospital mortality of acutely admitted patients.

Methods

This is an observational prospective cohort study of adult patients (15 years or older) admitted to a MAU at a regional teaching hospital. The nursing staff and physicians predicted in-hospital mortality upon the patients' arrival. We calculated discriminatory power as the area under the receiver operating characteristic curve (AUROC) and accuracy of prediction (calibration) by the Hosmer-Lemeshow goodness-of-fit test.
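Both measures can be sketched briefly: AUROC for discrimination and a hand-rolled Hosmer-Lemeshow chi-square over deciles of predicted risk for calibration. The data below are simulated and the grouping choice (deciles) is an assumption, not a detail taken from the study.

```python
# Hedged sketch (simulated data): discrimination (AUROC) and Hosmer-Lemeshow
# calibration for predicted in-hospital mortality.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
predicted = rng.uniform(0, 0.3, size=2404)   # staff-predicted death probabilities
died = rng.binomial(1, predicted)            # simulated outcomes

def hosmer_lemeshow(y, p, groups=10):
    """Chi-square over deciles of predicted risk; small p suggests poor calibration."""
    order = np.argsort(p)
    y, p = y[order], p[order]
    chi_sq = 0.0
    for bin_y, bin_p in zip(np.array_split(y, groups), np.array_split(p, groups)):
        observed, expected, n = bin_y.sum(), bin_p.sum(), len(bin_y)
        chi_sq += (observed - expected) ** 2 / (expected * (1 - expected / n))
    return chi_sq, chi2.sf(chi_sq, df=groups - 2)

print("AUROC:", roc_auc_score(died, predicted))
print("Hosmer-Lemeshow chi-square, p-value:", hosmer_lemeshow(died, predicted))
```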

Results

We had a total of 2,848 admissions (2,463 patients). 89 (3.1%) died while admitted. The nursing staff assessed 2,404 admissions and predicted mortality in 1,820 (63.9%). AUROC was 0.823 (95% CI: 0.762–0.884) and calibration poor. Physicians assessed 738 admissions and predicted mortality in 734 (25.8% of all admissions). AUROC was 0.761 (95% CI: 0.657–0.864) and calibration poor. AUROC and calibration increased with experience. When nursing staff and physicians were in agreement (±5%), discriminatory power was very high, 0.898 (95% CI: 0.773–1.000), and calibration almost perfect. Combining an objective risk prediction score with staff predictions added very little.

Conclusions

Using only clinical intuition, staff in a medical admission unit has a good ability to identify patients at increased risk of dying while admitted. When nursing staff and physicians agreed on their prediction, discriminatory power and calibration were excellent.

18.

Background

Self-reported data are often used to estimate healthcare utilization in cost-effectiveness studies.

Objective

To analyze older adults’ self-report of healthcare utilization compared to data obtained from the general practitioners’ (GP) electronic medical record (EMR) and to study the differences in healthcare utilization between those who completed the study, those who did not respond, and those lost to follow-up.

Methods

A prospective cohort study was conducted among community-dwelling persons aged 70 years and above, without dementia and not living in a nursing home. Self-reporting questionnaires were compared to healthcare utilization data extracted from the EMR at the GP-office.

Results

Overall, 790 persons completed questionnaires at baseline (median age 75 years, IQR 72–80); 55.8% had no disabilities in (instrumental) activities of daily living. Correlations between self-report data and EMR data on healthcare utilization were substantial for ‘hospitalizations’ and ‘GP home visits’ at 12 months (intraclass correlation coefficient 0.63; 95% CI 0.58–0.68). Compared to the EMR, self-reported healthcare utilization was generally slightly over-reported. Non-respondents received more GP home visits (p<0.05). Of the participants who died or were institutionalized, 62.2% received 2 or more home visits (p<0.001) and 18.9% had 2 or more hospital admissions (p<0.001), versus 18.6% and 3.9%, respectively, of the participants who completed the study. Of the participants lost to follow-up for other reasons, 33.0% received 2 or more home visits (p<0.01) versus 18.6% of the participants who completed the study.

Conclusions

Self-report of hospitalizations and GP home visits in a broadly ‘healthy’ community-dwelling older population seems adequate and efficient. However, as people become older and more functionally impaired, collecting healthcare utilization data from the EMR should be considered to avoid measurement bias, particularly if the data will be used to support economic evaluation.

19.

Background

The “fitness” of an infectious pathogen is defined as the ability of the pathogen to survive, reproduce, be transmitted, and cause disease. The fitness of multidrug-resistant tuberculosis (MDRTB) relative to drug-susceptible tuberculosis is cited as one of the most important determinants of MDRTB spread and epidemic size. To estimate the relative fitness of drug-resistant tuberculosis cases, we compared the incidence of tuberculosis disease among the household contacts of MDRTB index patients to that among the contacts of drug-susceptible index patients.

Methods and Findings

This 3-y (2010–2013) prospective cohort household follow-up study in South Lima and Callao, Peru, measured the incidence of tuberculosis disease among 1,055 household contacts of 213 MDRTB index cases and 2,362 household contacts of 487 drug-susceptible index cases. A total of 35/1,055 (3.3%) household contacts of MDRTB index cases developed tuberculosis disease, while 114/2,362 (4.8%) household contacts of drug-susceptible index patients developed tuberculosis disease. The total follow-up time for drug-susceptible tuberculosis contacts was 2,620 person-years, while the total follow-up time for MDRTB contacts was 1,425 person-years. Using multivariate Cox regression to adjust for confounding variables including contact HIV status, contact age, socio-economic status, and index case sputum smear grade, the hazard ratio for tuberculosis disease among MDRTB household contacts was found to be half that for drug-susceptible contacts (hazard ratio 0.56, 95% CI 0.34–0.90, p = 0.017). The inference of transmission in this study was limited by the lack of genotyping data for household contacts. Capturing incident disease only among household contacts may also limit the extrapolation of these findings to the community setting.

Conclusions

The low relative fitness of MDRTB estimated by this study improves the chances of controlling drug-resistant tuberculosis. However, fitter multidrug-resistant strains that emerge over time may make this increasingly difficult.

20.
Objectives

Although tonsillectomy is one of the most frequent and painful surgeries, the associations between baseline and process parameters and postoperative pain are not fully understood.

Methods

A multicentre prospective cohort study using a web-based registry enrolled 1,527 women and 1,008 men aged 4 to 85 years from 52 German hospitals between 2006 and 2015. The main outcome parameter was the maximal pain (MP) score on the first day after surgery, on a numeric rating scale (NRS) from 0 (no pain) to 10 (maximal pain).

Results

The mean maximal pain score was 5.8±2.2 (median 6). Multivariable analysis revealed that female gender (odds ratio [OR] = 1.33; 95% confidence interval [CI] = 1.12 to 1.56; p = 0.001), age <20 years (OR = 1.56; CI = 1.27 to 1.91; p<0.0001), no pain counselling (OR = 1.78; CI = 1.370 to 2.316; p<0.001), chronic pain (OR = 1.34; CI = 1.107 to 1.64; p = 0.004), and receiving opioids in the recovery room (OR = 1.89; CI = 1.55 to 2.325; p<0.001) or on the ward (OR = 1.79; CI = 1.42 to 2.27; p<0.001) were independently associated with higher maximal postoperative pain (greater than the median of 6). The effect of age on pain was not linear: maximal pain increased in underage patients to a peak at the age of 18 to 20 years, and from 20 years onwards it decreased continuously. Even after adjustment for all statistically important baseline and process parameters, there was substantial variability in maximal pain between hospitals, with a heterogeneity variance of 0.31.

Conclusion

Many patients seem to receive insufficient or ineffective analgesia after tonsillectomy. Further research should address whether populations at risk of higher postoperative pain, such as females, younger patients, or those with preexisting pain, might benefit from a special pain management protocol. Beyond classical demographic and process parameters, the large variability between hospitals is striking and indicates the existence of other, unknown factors influencing postoperative pain after tonsillectomy.
