Similar Articles (20 results)
1.

Background

Delirium is one of the main causes of increased length of intensive care unit (ICU) stay among patients who have undergone living donor liver transplantation (LDLT). We aimed to evaluate risk factors for delirium after LDLT as well as to investigate whether delirium impacts the length of ICU and hospital stay.

Methods

Seventy-eight patients who underwent LDLT during the period January 2010 to December 2012 at a single medical center were enrolled. The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) scale was used to diagnose delirium. Preoperative, postoperative, and hematologic factors were included as potential risk factors for developing delirium.

Results

During the study period, delirium was diagnosed in 37 (47.4%) patients after LDLT. Symptoms began a mean of 7.0±5.5 days after surgery and lasted a mean of 5.0±2.6 days. The length of ICU stay for patients with delirium (39.8±28.1 days) was significantly longer than that for patients without delirium (29.3±19.0 days) (p<0.05). Risk factors associated with delirium included history of alcohol abuse [odds ratio (OR) = 6.40, 95% confidence interval (CI): 1.85–22.06], preoperative hepatic encephalopathy (OR = 4.45, 95% CI: 1.36–14.51), APACHE II score ≥16 (OR = 1.73, 95% CI: 1.71–2.56), and duration of endotracheal intubation ≥5 days (OR = 1.81, 95% CI: 1.52–2.23).
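As an aside on the method: each reported OR with its Wald CI can be derived from a 2×2 exposure-by-outcome table. A minimal Python sketch, using hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for alcohol abuse vs delirium (illustration only)
print(odds_ratio_ci(16, 5, 21, 36))  # -> OR ~5.5 with its 95% CI
```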

Conclusions

A history of alcohol abuse, preoperative hepatic encephalopathy, APACHE II score ≥16, and duration of endotracheal intubation ≥5 days were predictive of developing delirium in the ICU following liver transplantation and were associated with increased length of ICU and hospital stay.

2.

Background

No current validated survey instrument allows a comprehensive assessment of both physical activity and travel behaviours for use in interdisciplinary research on walking and cycling. This study reports on the test-retest reliability and validity of the physical activity measures in the Transport and Physical Activity Questionnaire (TPAQ).

Methods

The TPAQ assesses time spent in different domains of physical activity and using different modes of transport for five journey purposes. Test-retest reliability of eight physical activity summary variables was assessed using intra-class correlation coefficients (ICC) and kappa scores for continuous and categorical variables, respectively. In a separate study, the validity of three survey-reported physical activity summary variables was assessed by computing Spearman correlation coefficients against accelerometer-derived reference measures. The Bland-Altman technique was used to determine the absolute validity of survey-reported time spent in moderate-to-vigorous physical activity (MVPA).

Results

In the reliability study, ICC for time spent in different domains of physical activity ranged from fair to substantial for walking for transport (ICC = 0.59), cycling for transport (ICC = 0.61), walking for recreation (ICC = 0.48), cycling for recreation (ICC = 0.35), moderate leisure-time physical activity (ICC = 0.47), vigorous leisure-time physical activity (ICC = 0.63), and total physical activity (ICC = 0.56). The proportion of participants estimated to meet physical activity guidelines showed acceptable reliability (k = 0.60). In the validity study, comparison of survey-reported and accelerometer-derived time spent in physical activity showed strong agreement for vigorous physical activity (r = 0.72, p<0.001), fair but non-significant agreement for moderate physical activity (r = 0.24, p = 0.09) and fair agreement for MVPA (r = 0.27, p = 0.05). Bland-Altman analysis showed a mean overestimation of MVPA of 87.6 min/week (p = 0.02) (95% limits of agreement −447.1 to +622.3 min/week).
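The reported limits of agreement follow the standard Bland-Altman construction (mean difference ± 1.96 SD; the −447.1 to +622.3 min/week interval implies an SD of differences of roughly 273 min/week). A minimal sketch with synthetic paired data:

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Mean bias and 95% limits of agreement between paired measures."""
    diff = np.asarray(measure_a) - np.asarray(measure_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic survey vs accelerometer MVPA (min/week), sized like the study's bias
rng = np.random.default_rng(0)
accel = rng.uniform(100, 600, 50)
survey = accel + rng.normal(87.6, 273.0, 50)
print(bland_altman(survey, accel))  # bias near +88, wide limits either side
```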

Conclusion

The TPAQ provides a more comprehensive assessment of physical activity and travel behaviours than existing instruments and may be suitable for wider use. Its physical activity summary measures have reliability and validity comparable to those of similar existing questionnaires.

3.
4.

Background

Risk adjusted mortality for intensive care units (ICU) is usually estimated via logistic regression. Random effects (RE) or hierarchical models have been advocated to estimate provider risk-adjusted mortality on the basis that standard estimators increase false outlier classification. The utility of fixed effects (FE) estimators (separate ICU-specific intercepts) has not been fully explored.

Methods

Using a cohort from the Australian and New Zealand Intensive Care Society Adult Patient Database, 2009–2010, the model fit of different logistic estimators (FE, random-intercept and random-coefficient) was characterised using the Bayesian Information Criterion (BIC; lower values better), the area under the receiver operating characteristic curve (AUC) and the Hosmer-Lemeshow (H-L) statistic. ICU standardised hospital mortality ratios (SMR) and 95% CI were compared between models. ICU site performance (FE) relative to the grand observation-weighted mean (GO-WM) was assessed on odds ratio (OR), risk ratio (RR) and probability scales using model-based average marginal effects (AME).
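For reference, an ICU's SMR is its observed deaths divided by the deaths expected under the fitted risk model, and a site is flagged as an outlier only when the 95% CI excludes 1. A minimal sketch using Byar's approximation for the Poisson CI (hypothetical counts):

```python
import math

def smr_ci(observed, expected, z=1.96):
    """SMR = O/E with Byar's approximate 95% CI for the Poisson count O."""
    o, e = observed, expected
    lower = o * (1 - 1/(9*o) - z/(3*math.sqrt(o)))**3 / e
    upper = (o + 1) * (1 - 1/(9*(o + 1)) + z/(3*math.sqrt(o + 1)))**3 / e
    return o/e, lower, upper

# Hypothetical ICU: 42 observed deaths, 36.5 expected from the risk model
print(smr_ci(42, 36.5))  # CI spans 1, so this site is not an outlier
```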

Results

The data set consisted of 145,355 patients in 128 ICUs, years 2009 (47.5%) and 2010 (52.5%), with mean (SD) age 60.9 (18.8) years, 56% male, and ICU and hospital mortalities of 7.0% and 10.9%, respectively. The FE model had a BIC = 64058, AUC = 0.90 and an H-L statistic P-value = 0.22. The best-fitting random-intercept model had a BIC = 64457, AUC = 0.90 and H-L statistic P-value = 0.32; the random-coefficient model had a BIC = 64556, AUC = 0.90 and H-L statistic P-value = 0.28. Across ICUs and over years, no outliers (SMR 95% CI excluding the null value of 1) were identified, and no model difference in SMR spread or 95% CI span was demonstrated. Using AME (OR and RR scales), ICU site-specific estimates diverged from the GO-WM, and the effect spread decreased over calendar years. On the probability scale, a majority of ICUs demonstrated a calendar-year decrease, but in the for-profit sector this trend was reversed.

Conclusions

The FE estimator showed a model-fit advantage over conventional RE models. Using AME, between-site and over-year ICU effects were readily characterised.

5.

Background

We sought to examine whether type 2 diabetes increases the risk of acute organ dysfunction and of hospital mortality following severe sepsis that requires admission to an intensive care unit (ICU).

Methods

We conducted a nationwide population-based retrospective cohort study of 16,497 subjects with severe sepsis who had been admitted for the first time to an ICU during the period 1998–2008. A diabetic cohort (n = 4,573) and a non-diabetic cohort (n = 11,924) were then created. Relative risk (RR) of organ dysfunctions, length of hospital stay (LOS), 90-day hospital mortality, ICU resource utilization, and hazard ratio (HR) of mortality adjusted for age, gender, Charlson-Deyo comorbidity index score, surgical condition and number of acute organ dysfunctions were compared between patients with severe sepsis with and without diabetes.

Results

Diabetic patients with sepsis had a higher risk of developing acute kidney injury (RR, 1.54; 95% confidence interval (CI), 1.44–1.63) and were more likely to undergo hemodialysis (15.55% vs. 7.24%) in the ICU. However, the diabetic cohort had a lower risk of developing acute respiratory dysfunction (RR = 0.96, 0.94–0.97), hematological dysfunction (RR = 0.70, 0.56–0.89), and hepatic dysfunction (RR = 0.77, 0.63–0.93). In terms of adjusted HR for 90-day hospital mortality, diabetic patients with severe sepsis did not fare significantly worse across cardiovascular, respiratory, hepatic, renal and/or neurologic organ dysfunction, regardless of the number of organ dysfunctions. There was no statistically significant difference in LOS between the two cohorts (median 17 vs. 16 days, interquartile range (IQR) 8–30 days, p = 0.11). Multiple logistic regression analysis showed that diabetes was not a predictor of mortality (odds ratio 0.972, 95% CI 0.890–1.061, p = 0.5203).
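As context for the RR estimates: a cohort relative risk and its CI come directly from the event counts in each group. A minimal sketch with hypothetical counts chosen only to be of similar magnitude to the reported acute kidney injury result:

```python
import math

def relative_risk_ci(a, n1, c, n0, z=1.96):
    """RR and 95% CI: a events among n1 exposed, c events among n0 unexposed."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt((1/a - 1/n1) + (1/c - 1/n0))  # standard error of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: AKI in 1,600/4,573 diabetic vs 2,700/11,924 non-diabetic patients
print(relative_risk_ci(1600, 4573, 2700, 11924))  # RR ~1.5
```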

Interpretation

This large nationwide population-based cohort study suggests that diabetic patients do not fare worse than non-diabetic patients when suffering from severe sepsis that requires ICU admission.

6.

Introduction

The growing number of renal transplant recipients in a sustained immunosuppressive state is a factor that can contribute to increased incidence of sepsis. However, relatively little is known about sepsis in this population. The aim of this single-center study was to evaluate the factors associated with hospital mortality in renal transplant patients admitted to the intensive care unit (ICU) with severe sepsis and septic shock.

Methods

Patient demographics and transplant-related and ICU stay data were retrospectively collected. Multiple logistic regression was conducted to identify the independent risk factors associated with hospital mortality.

Results

A total of 190 patients were enrolled, 64.2% of whom received kidneys from deceased donors. The mean patient age was 51±13 years (males, 115 [60.5%]), and the median APACHE II score was 20 (16–23). The majority of patients developed sepsis late after renal transplantation (2.1 [0.6–2.3] years). The lung was the most common infection site (59.5%). Upon ICU admission, 16.4% of the patients met ≤1 systemic inflammatory response syndrome criterion. Among the patients, 61.5% presented with ≥2 organ failures at admission, and 27.9% experienced septic shock within the first 24 hours of ICU admission. The overall hospital mortality rate was 38.4%. In the multivariate analysis, the independent determinants of hospital mortality were male gender (OR = 5.9; 95% CI, 1.7–19.6; p = 0.004), delta SOFA 24 h (OR = 1.7; 95% CI, 1.2–2.3; p = 0.001), mechanical ventilation (OR = 30; 95% CI, 8.8–102.2; p<0.0001), hematologic dysfunction (OR = 6.8; 95% CI, 2.0–22.6; p = 0.002), admission from the ward (OR = 3.4; 95% CI, 1.2–9.7; p = 0.02) and acute kidney injury stage 3 (OR = 5.7; 95% CI, 1.9–16.6; p = 0.002).

Conclusions

Hospital mortality in renal transplant patients with severe sepsis and septic shock was associated with male gender, admission from the ward, worse SOFA scores on the first day, and the presence of hematologic dysfunction, mechanical ventilation or advanced graft dysfunction.

7.

Objectives

Hypertension is one of the major cardiovascular diseases, affecting nearly 1.56 billion people worldwide. The present study examined the A1166C polymorphism (SNP ID: rs5186) of the angiotensin II type 1 receptor (AT1R) gene, together with its gene and protein expression, and their association with essential hypertension in a Northern Indian population.

Methods

We analyzed the A1166C polymorphism and the expression of the AT1R gene in 250 patients with essential hypertension and 250 normal healthy controls.

Results

A significant association was found between the AT1R genotypes (AC+CC) and essential hypertension (χ2 = 22.48, p = 0.0001). Individuals with the CC genotype had 2.4-fold higher odds (p = 0.0001) of developing essential hypertension than individuals with the AC and AA genotypes. Systolic blood pressure was significantly higher in patients with the CC genotype (169.4±36.3 mmHg) than in those with the AA (143.5±28.1 mmHg) or AC (153.9±30.5 mmHg) genotypes (p = 0.0001). We found a significant difference in the average delta-CT value (p = 0.0001), corresponding to an approximately 16-fold upregulation of gene expression in patients compared with controls. Furthermore, AT1R expression was higher in patients with the CC genotype than in those with the AC or AA genotypes. A significant difference (p = 0.0001) in the protein expression of the angiotensin II type 1 receptor was also observed in the plasma of patients (1.49±0.27) compared with controls (0.80±0.24).
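The ~16-fold difference is what the standard 2^−ΔΔCt (Livak) model yields when the patient-control difference in delta-CT is about −4 cycles. A minimal sketch with hypothetical Ct values:

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method."""
    d_case = ct_target_case - ct_ref_case  # delta-Ct in patients
    d_ctrl = ct_target_ctrl - ct_ref_ctrl  # delta-Ct in controls
    return 2 ** -(d_case - d_ctrl)

# Hypothetical mean Ct values: AT1R vs a housekeeping gene
print(fold_change(24.0, 18.0, 28.0, 18.0))  # ddCt = -4 -> 16-fold upregulation
```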

Conclusion

Our findings suggest that the C allele of the A1166C polymorphism in the angiotensin II type 1 receptor gene is associated with essential hypertension and that upregulation of AT1R could play an important role in essential hypertension.

8.

Background

Accurate hospital costs are required for policy-makers, hospital managers and clinicians to improve efficiency and transparency. However, different methods are used to allocate direct costs, and their agreement is poorly understood. The aim of this study was to assess the agreement between bottom-up and top-down unit costs of a large sample of surgical operations in a French tertiary centre.

Methods

Two thousand one hundred and thirty consecutive procedures performed between January and October 2010 were analysed. Top-down costs were based on pre-determined weights, while bottom-up costs were calculated through an activity-based costing (ABC) model. The agreement was assessed using correlation coefficients and the Bland and Altman method. Variables associated with the difference between methods were identified with bivariate and multivariate linear regressions.
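To make the two costing approaches concrete: top-down allocation spreads a cost pool over procedures by pre-determined weights, while bottom-up (ABC) costing sums traced resource quantities times unit prices. A minimal sketch; all names, weights and prices are hypothetical:

```python
def top_down(total_pool, weights, volumes, proc):
    """Allocate a cost pool to one procedure by its share of weighted volume."""
    weighted_total = sum(weights[p] * volumes[p] for p in weights)
    return total_pool * weights[proc] / weighted_total

def bottom_up(usage, unit_price):
    """Activity-based cost: sum of traced resource quantities x unit prices."""
    return sum(qty * unit_price[res] for res, qty in usage.items())

weights = {"appendectomy": 1.0, "colectomy": 2.6}   # hypothetical weights
volumes = {"appendectomy": 900, "colectomy": 300}   # annual case volumes
print(top_down(3_600_000, weights, volumes, "colectomy"))
print(bottom_up({"OR_minutes": 180, "nurse_hours": 6, "implant": 1},
                {"OR_minutes": 15.0, "nurse_hours": 55.0, "implant": 420.0}))
```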

Results

The correlation coefficient amounted to 0.73 (95% CI: 0.72–0.76). The overall agreement between methods was poor. In a multivariate analysis, the cost difference was independently associated with age (Beta = −2.4; p = 0.02), ASA score (Beta = 76.3; p<0.001), RCI (Beta = 5.5; p<0.001), staffing level (Beta = 437.0; p<0.001) and intervention duration (Beta = −10.5; p<0.001).

Conclusions

The ability of the current method to provide relevant information to managers, clinicians and payers is questionable. As in other European countries, a shift towards time-driven activity-based costing should be advocated.

9.

Background and Aim

The genotype-phenotype interaction in drug-induced liver injury (DILI) is a subject of growing interest. Previous studies have linked amoxicillin-clavulanate (AC) hepatotoxicity susceptibility to specific HLA alleles. In this study we aimed to examine potential associations between HLA class I and II alleles and AC DILI with regard to phenotypic characteristics, severity and time to onset in Spanish AC hepatotoxicity cases.

Methods

High resolution genotyping of HLA loci A, B, C, DRB1 and DQB1 was performed in 75 AC DILI cases and 885 controls.

Results

Class I alleles A*3002 (P/Pc = 2.6E-6/5E-5, OR 6.7) and B*1801 (P/Pc = 0.008/0.22, OR 2.9) were more frequent in hepatocellular injury cases than in controls. In addition, the class II allele combination DRB1*1501-DQB1*0602 (P/Pc = 5.1E-4/0.014, OR 3.0) was significantly more frequent in cholestatic/mixed cases. The A*3002 and/or B*1801 carriers were younger (54 vs 65 years, P = 0.019) and more frequently hospitalized than the DRB1*1501-DQB1*0602 carriers. No additional alleles outside those associated with liver injury patterns were found to affect potential severity as measured by Hy's Law criteria. The phenotype frequencies of B*1801 (P/Pc = 0.015/0.42, OR 5.2) and DRB1*0301-DQB1*0201 (P/Pc = 0.0026/0.07, OR 15) were increased in AC DILI cases with delayed onset compared with cases without delayed onset, while the opposite applied to DRB1*1302-DQB1*0604 (P/Pc = 0.005/0.13, OR 0.07).

Conclusions

HLA class I and II alleles influence the AC DILI signature with regard to phenotypic expression, latency of presentation and severity in Spanish patients.

10.

Introduction

Information about sepsis in mainland China remains scarce and incomplete. The purpose of this study was to describe the epidemiology and outcome of severe sepsis and septic shock in mixed ICUs in mainland China, as well as the independent predictors of mortality.

Methods

We performed a 2-month prospective, observational cohort study in 22 closed multi-disciplinary intensive care units (ICUs). All admissions into those ICUs during the study period were screened and patients with severe sepsis or septic shock were included.

Results

A total of 484 patients (37.3 per 100 ICU admissions) were diagnosed with severe sepsis (n = 365) or septic shock (n = 119) according to clinical criteria and were included in this study. The most frequent sites of infection were the lung and abdomen. The overall ICU and hospital mortality rates were 28.7% (n = 139) and 33.5% (n = 162), respectively. In multivariate analyses, APACHE II score (odds ratio [OR], 1.068; 95% confidence interval [CI], 1.027–1.109), presence of ARDS (OR, 2.676; 95% CI, 1.691–4.235), bloodstream infection (OR, 2.520; 95% CI, 1.142–5.564) and comorbidity of cancer (OR, 2.246; 95% CI, 1.141–4.420) were significantly associated with mortality.

Conclusions

Severe sepsis and septic shock are common among ICU patients in mainland China and carry high mortality. These findings improve understanding of severe sepsis and septic shock in China and may help improve characterization and risk stratification of these patients.

11.

Objective

Higher values of red blood cell distribution width (RDW) have been found in non-surviving than in surviving septic patients. However, it is unknown whether RDW during the first week of sepsis evolution is associated with sepsis severity, early mortality, and markers of oxidative stress and inflammation; these were the aims of the study.

Methods

We performed a prospective, observational, multicenter study in six Spanish Intensive Care Units with 297 severe septic patients. We measured RDW, serum levels of malondialdehyde (MDA) to assess oxidative stress, and tumour necrosis factor (TNF)-α to assess inflammation at days 1, 4, and 8. The end-point was 30-day mortality.

Results

We found higher RDW in non-surviving (n = 104) than in surviving (n = 193) septic patients at day 1 (p = 0.001), day 4 (p = 0.001), and day 8 (p = 0.002) of ICU admission. Cox regression analyses showed that RDW at day 1 (p<0.001), 4 (p = 0.005) and 8 (p = 0.03) were associated with 30-day mortality. Receiver operating characteristic curves showed that RDW at day 1 (p<0.001), 4 (p<0.001), and 8 (p<0.001) could be used to predict 30-day mortality. RDW showed a positive correlation with serum MDA levels at day 1 and day 4, with serum TNF-α levels at days 4 and 8, and with SOFA score at days 1, 4 and 8.
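The ROC analysis reduces to scoring each patient by RDW and asking how well that single number separates survivors from non-survivors. A minimal sketch on synthetic data shifted in the study's direction (hypothetical means and SDs):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
# Synthetic day-1 RDW (%): 193 survivors, 104 non-survivors, upward shift in the latter
rdw = np.concatenate([rng.normal(15.2, 2.0, 193), rng.normal(16.9, 2.4, 104)])
died = np.concatenate([np.zeros(193), np.ones(104)])

print("AUC:", roc_auc_score(died, rdw))
fpr, tpr, thresholds = roc_curve(died, rdw)
print("Youden-optimal cut-off:", thresholds[np.argmax(tpr - fpr)])
```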

Conclusions

Non-surviving septic patients showed persistently higher RDW during the first week of ICU stay than survivors. RDW during the first week was associated with sepsis severity and mortality and could be used as a biomarker of outcome in septic patients, and it was associated with serum MDA and TNF-α levels during the first week.

12.

Background

Thyroid Imaging Reporting and Data System (TIRADS) was developed to improve patient management and cost-effectiveness by avoiding unnecessary fine needle aspiration biopsy (FNAB) in patients with thyroid nodules. However, its clinical use is still very limited. Strain elastography (SE) enables the determination of tissue elasticity and has shown promising results for the differentiation of thyroid nodules.

Methods

The aim of the present study was to evaluate the interobserver agreement (IA) of the TIRADS developed by Horvath et al. and of SE. Three blinded observers independently scored stored images from 114 thyroid nodules (114 patients) using TIRADS and SE. Cytology and/or histology was available for all benign nodules (n = 99) and histology for all malignant nodules (n = 15).

Results

The IA between the 3 observers was only fair for TIRADS categories 2–5 (Cohen's kappa = 0.27, p = 0.000001) and for TIRADS categories 2/3 versus 4/5 (ck = 0.25, p = 0.0020). The IA was substantial for SE scores 1–4 (ck = 0.66, p<0.000001) and very good for SE scores 1/2 versus 3/4 (ck = 0.81, p<0.000001). 92–100% of patients with TIRADS-2 had benign lesions, while 28–42% of those with TIRADS-5 had malignant cytology/histology. The negative predictive value (NPV) for the diagnosis of malignancy was 92–100% for TIRADS (categories 4 and 5 test-positive) and 96–98% for SE (scores ES-3 and ES-4 test-positive). However, only 11–42% of nodules fell into TIRADS categories 2 and 3, compared with 58–60% with ES-1 and ES-2.
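For reference, Cohen's kappa is the observed agreement corrected for the agreement expected by chance, kappa = (po − pe)/(1 − pe). A minimal sketch with hypothetical category assignments, checked against scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

def kappa(r1, r2):
    """Cohen's kappa between two raters' categorical scores."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n                   # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical TIRADS categories assigned to 10 nodules by two observers
obs1 = [2, 3, 4, 4, 5, 2, 3, 4, 2, 5]
obs2 = [2, 4, 3, 4, 5, 3, 2, 4, 2, 4]
print(kappa(obs1, obs2), cohen_kappa_score(obs1, obs2))  # both ~0.32
```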

Conclusions

The IA of the TIRADS developed by Horvath et al. is only fair, whereas that of SE is substantial. Both TIRADS and SE have a high NPV for excluding malignancy in the diagnostic work-up of thyroid nodules.

13.

Introduction

Residual inflammation at ICU discharge may affect long-term mortality, but its significance is poorly described. C-reactive protein (CRP) and albumin are measured frequently in the ICU and exhibit opposing patterns during inflammation. Since infection is a potent trigger of inflammation, we hypothesized that CRP levels at discharge would correlate with long-term mortality in septic patients and that the CRP/albumin ratio would be a better marker of prognosis than CRP alone.

Methods

We evaluated 334 patients admitted to the ICU as a result of severe sepsis or septic shock who were discharged alive after a minimum of 72 hours in the ICU. We evaluated the performance of both CRP and CRP/albumin to predict mortality at 90 days after ICU discharge. Two multivariate logistic models were generated based on measurements at discharge: one model included CRP (Model-CRP), and the other included the CRP/albumin ratio (Model-CRP/albumin).

Results

There were 229 (67%) patients with severe sepsis and 111 (33%) with septic shock. During the 90 days of follow-up, 73 (22%) patients died. CRP/albumin ratios at admission and at discharge were associated with a poor outcome and showed greater accuracy than CRP alone at these time points (p = 0.0455 and p = 0.0438, respectively). CRP levels and the CRP/albumin ratio were independent predictors of mortality at 90 days (Model-CRP: adjusted OR 2.34, 95% CI 1.14–4.83, p = 0.021; Model-CRP/albumin: adjusted OR 2.18, 95% CI 1.10–4.67, p = 0.035). Both models showed similar accuracy (p = 0.2483). However, Model-CRP was not well calibrated.

Conclusions

Residual inflammation at ICU discharge assessed using the CRP/albumin ratio is an independent risk factor for mortality at 90 days in septic patients. The use of the CRP/albumin ratio as a long-term marker of prognosis provides more consistent results than standard CRP values alone.

14.

Rationale

Natural killer (NK) cells, as a major source of interferon-γ, contribute to the amplification of the inflammatory response as well as to mortality during severe sepsis in animal models.

Objective

We studied the phenotype and functions of circulating NK cells in critically-ill septic patients.

Methods

Blood samples were taken <48 hours after admission from 42 ICU patients: those with severe sepsis (n = 15) or septic shock (n = 14) (Sepsis group) and those with non-septic systemic inflammatory response syndrome (n = 13) (SIRS group), as well as from 21 healthy controls. The immuno-phenotype and functions of NK cells were studied by flow cytometry.

Results

The absolute number of peripheral blood CD3–CD56+ NK cells was similarly reduced in all groups of ICU patients, with a normal percentage of NK cells. When NK cell cytotoxicity was evaluated with degranulation assays (CD107 expression), no difference was observed between Sepsis patients and healthy controls. Under antibody-dependent cell cytotoxicity (ADCC) conditions, SIRS patients exhibited increased CD107 surface expression on NK cells (62.9 [61.3–70]%) compared to healthy controls (43.5 [32.1–53.1]%) or Sepsis patients (49.2 [37.3–62.9]%) (p = 0.002). Compared to healthy controls (10.2 [6.3–13.1]%), interferon-γ production by NK cells after K562 stimulation was reduced in the Sepsis group (6.2 [2.2–9.9]%, p<0.01), especially in patients with septic shock. Conversely, under ADCC conditions SIRS patients exhibited increased interferon-γ production (42.9 [30.1–54.7]%) compared to Sepsis patients (18.4 [11.7–35.7]%, p<0.01) or healthy controls (26.8 [19.3–44.9]%, p = 0.09).

Conclusions

Extensive monitoring of the NK-cell phenotype and function in critically-ill septic patients revealed early decreased NK-cell function with impaired interferon-γ production. These results may aid future NK-based immuno-interventions.

Trial Registration

NCT00699868.

15.

Purpose

We assessed the accuracy of communication between doctors and patients by evaluating the consistency between patients' perception of their cancer stage and the medical records, and analyzed the factors most strongly associated with incongruence among cancer patients at 10 cancer centers across Korea.

Methods

Information was gathered from cancer patients at the National Cancer Center and nine regional cancer centers located in every province of Korea between 1 July 2008 and 31 August 2008. Data were analyzed using Pearson's χ² test and multivariate logistic regression analysis.

Results

The stages of cancer reported by the 1,854 patients showed a low degree of congruence with the stages given in medical records (k = 0.35, P<0.001). Only 57.1% of the patients had accurate knowledge of their cancer stage. In total, 18.5% underestimated their stage of disease, and the more advanced the cancer stage, the more likely they were to underestimate it, in order of local (14.2%), regional (23.7%), and distant (51.6%). Logistic regression analysis showed that congruence was lower in patients with cervical cancer (odds ratio [OR] = 0.51, 95% confidence interval [CI] = 0.30–0.87), recurrence (OR = 0.64, 95% CI = 0.50–0.83), and treatment at the National Cancer Center (OR = 0.53, 95% CI = 0.39–0.72).

Conclusion

There are knowledge gaps between patients' perceived and actual stage of cancer. Patients with cervical cancer, those with recurrence, and those treated at a regional cancer center showed less understanding of their cancer stage.

16.

Objectives

To study the determinants of health-related quality of life (HRQoL) in Irish patients with diabetes using the Centers for Disease Control and Prevention's (CDC's) ‘Unhealthy Days’ summary measure, and to assess the agreement between this generic HRQoL measure and the disease-specific Audit of Diabetes-Dependent Quality of Life (ADDQoL) measure.

Research Design and Methods

Data from the Diabetes Quality of Life Study, a cross-sectional study of 1,456 people with diabetes in Ireland (71% response rate), were analysed. Unhealthy days were assessed using the CDC's ‘Unhealthy Days’ summary measure. Quality of life (QoL) was also assessed using the ADDQoL measure. Analyses were conducted primarily using logistic regression. The agreement between the two QoL instruments was measured using the kappa coefficient.

Results

Participants reported a median of 2 unhealthy days per month. In multivariate analyses, female gender (P = 0.001), insulin use (P = 0.030) and diabetes complications (P<0.001) were significantly associated with more unhealthy days. Older patients had fewer unhealthy days per month (P = 0.003). Agreement between the two measures of QoL (the ‘Unhealthy Days’ measure and the ADDQoL) was poor (kappa = 0.234).

Conclusions

The findings highlight the determinants of HRQoL in patients with diabetes using a generic HRQoL summary measure. The ‘Unhealthy Days’ measure and the ADDQoL have poor agreement; the ‘Unhealthy Days’ summary measure may therefore be assessing a different construct. Nonetheless, this study demonstrates that the generic ‘Unhealthy Days’ summary measure can be used to detect determinants of HRQoL in patients with diabetes.

17.

Background

During 2007 and 2008 it is likely that millions of patients in the US received heparin contaminated with oversulfated chondroitin sulfate (CH), which was associated with anaphylactoid reactions. We tested the hypothesis that CH was associated with serious morbidity, mortality, intensive care unit (ICU) stay and heparin-induced thrombocytopenia following adult cardiac surgery.

Methods and Findings

We conducted a single center, retrospective, propensity-matched cohort study during the period of CH and the equivalent time frame in the three preceding or the two following years. Perioperative data were obtained from the institutional record of the Society of Thoracic Surgeons National Database, for which the data collection is prospective, standardized and performed by independent investigators. After matching, logistic regression was performed to evaluate the independent effect of CH on the composite adverse outcome (myocardial infarction, stroke, pneumonia, dialysis, cardiac arrest) and on mortality. Cox regression was used to determine the association between CH and ICU length of stay. The 1:5 matched groups included 220 patients potentially exposed to CH and 918 controls. There were more adverse outcomes in the exposed cohort (20.9% versus 12.0%; difference = 8.9%; 95% CI 3.6% to 15.1%, P<0.001) with an odds ratio for CH of 2.0 (95% CI, 1.4 to 3.0, P<0.001). In the exposed group there was a non-significant increase in mortality (5.9% versus 3.5%, difference = 2.4%; 95% CI, −0.4 to 3.5%, P = 0.1); the median ICU stay was longer by 14.1 hours (interquartile range −26.6 to 79.8, S = 3299, P = 0.0004) with an estimated hazard ratio for CH of 1.2 (95% CI, 1.0 to 1.4, P = 0.04). There was no difference in nadir platelet counts between cohorts.
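The 1:5 matching step can be sketched as nearest-neighbour selection on the logit of a fitted propensity score; the real study matched on prospectively recorded clinical covariates, so the variables below are placeholders. A simplified sketch (with replacement, no caliper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1_to_k(X, exposed, k=5):
    """1:k nearest-neighbour matching on the logit of the propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]
    logit = np.log(ps / (1 - ps)).reshape(-1, 1)
    controls = np.flatnonzero(exposed == 0)
    nn = NearestNeighbors(n_neighbors=k).fit(logit[controls])
    _, idx = nn.kneighbors(logit[exposed == 1])
    return {i: controls[idx[j]] for j, i in enumerate(np.flatnonzero(exposed == 1))}

# Hypothetical covariates and exposure indicator
rng = np.random.default_rng(2)
X = rng.normal(size=(1138, 4))
exposed = (rng.uniform(size=1138) < 0.19).astype(int)
matches = match_1_to_k(X, exposed)  # maps each exposed index to 5 control indices
```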

Conclusions

The results from this single center study suggest the possibility that contaminated heparin might have contributed to serious morbidity following cardiac surgery.  相似文献   

18.

Background

Mitochondrial DNA (mtDNA) is a critical activator of inflammation and the innate immune system. However, mtDNA level has not been tested for its role as a biomarker in the intensive care unit (ICU). We hypothesized that circulating cell-free mtDNA levels would be associated with mortality and improve risk prediction in ICU patients.

Methods and Findings

Analyses of mtDNA levels were performed on blood samples obtained from two prospective observational cohort studies of ICU patients (the Brigham and Women's Hospital Registry of Critical Illness [BWH RoCI, n = 200] and Molecular Epidemiology of Acute Respiratory Distress Syndrome [ME ARDS, n = 243]). mtDNA levels in plasma were assessed by measuring the copy number of the NADH dehydrogenase 1 gene using quantitative real-time PCR. Medical ICU patients with an elevated mtDNA level (≥3,200 copies/µl plasma) had increased odds of dying within 28 d of ICU admission in both the BWH RoCI (odds ratio [OR] 7.5, 95% CI 3.6–15.8, p = 1×10−7) and ME ARDS (OR 8.4, 95% CI 2.9–24.2, p = 9×10−5) cohorts, while no evidence for association was noted in non-medical ICU patients. The addition of an elevated mtDNA level improved the net reclassification index (NRI) of 28-d mortality among medical ICU patients when added to clinical models in both the BWH RoCI (NRI 79%, standard error 14%, p<1×10−4) and ME ARDS (NRI 55%, standard error 20%, p = 0.007) cohorts. In the BWH RoCI cohort, those with an elevated mtDNA level had an increased risk of death, even in analyses limited to patients with sepsis or acute respiratory distress syndrome. Study limitations include the lack of data elucidating the precise pathological roles of mtDNA in these patients, and the limited numbers of measurements for some of the biomarkers.
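The NRI quantifies whether adding mtDNA moves predicted risks in the right direction: up for patients who died, down for those who did not. A minimal sketch of the category-free variant on synthetic risks (the abstract does not specify which NRI variant was used, so this is illustrative):

```python
import numpy as np

def category_free_nri(risk_old, risk_new, event):
    """Continuous NRI: net upward movement in events plus net downward in non-events."""
    risk_old, risk_new, event = map(np.asarray, (risk_old, risk_new, event))
    up, down = risk_new > risk_old, risk_new < risk_old
    ev, ne = event == 1, event == 0
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

# Synthetic 28-d mortality risks before/after adding an mtDNA term
rng = np.random.default_rng(3)
event = rng.integers(0, 2, 200)
old = rng.uniform(0.05, 0.6, 200)
new = np.clip(old + np.where(event == 1, 0.08, -0.08) + rng.normal(0, 0.05, 200), 0, 1)
print(category_free_nri(old, new, event))
```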

Conclusions

Increased mtDNA levels are associated with ICU mortality, and inclusion of mtDNA level improves risk prediction in medical ICU patients. Our data suggest that mtDNA could serve as a viable plasma biomarker in medical ICU patients.

19.

Background

Traditional electronic medical record (EMR) interfaces mark laboratory tests as abnormal based on standard reference ranges derived from healthy, middle-aged adults. This yields many false positive alerts with subsequent alert-fatigue when applied to complex populations like hospitalized, critically ill patients. Novel EMR interfaces using adjusted reference ranges customized for specific patient populations may ameliorate this problem.

Objective

To compare accuracy of abnormal laboratory value indicators in a novel vs traditional EMR interface.

Methods

Laboratory data from intensive care unit (ICU) patients consecutively admitted during a two-day period were recorded. For each patient, available laboratory results and the problem list were sent to two mutually blinded critical care experts, who marked the values about which they would like to be alerted. All disagreements were resolved by an independent super-reviewer. Based on this gold standard, we calculated and compared the sensitivity, specificity, positive and negative predictive values (PPV, NPV) of customized vs traditional abnormal value indicators.
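Against the expert gold standard, each indicator's performance reduces to a 2×2 confusion table. A minimal sketch; the counts are illustrative, chosen to land near the custom indicator's reported operating characteristics:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Illustrative counts over 1341 results: indicator flag vs expert gold standard
print(diagnostic_stats(tp=105, fp=270, fn=31, tn=935))
# -> sensitivity ~0.77, specificity ~0.78, PPV ~0.28, NPV ~0.97
```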

Results

Thirty-seven patients with a total of 1341 laboratory results were included. Experts' agreement was fair (kappa = 0.39). Compared to the traditional EMR, custom abnormal laboratory value indicators had similar sensitivity (77% vs 85%, P = 0.22) and NPV (97.1% vs 98.6%, P = 0.06) but higher specificity (79% vs 61%, P<0.001) and PPV (28% vs 11%, P<0.001).

Conclusions

Reference ranges for laboratory values customized for an ICU population decrease false positive alerts. Disagreement among clinicians about which laboratory values should be indicated as abnormal limits the development of customized reference ranges.

20.

Objective

We elected to analyze the correlation between the pre-treatment apparent diffusion coefficient (ADC) and the clinical, histological, and immunohistochemical status of rectal cancers.

Materials and Methods

Forty-nine rectal cancer patients who underwent primary MRI with diffusion-weighted imaging (DWI) and received surgical resection without neoadjuvant therapy were selected. Tumor ADC values were determined and analyzed to identify any correlations between these values and pre-treatment CEA or CA19-9 levels, and/or the histological and immunohistochemical properties of the tumor.

Results

Inter-observer agreement between the two observers was suitable for ADC measurement (k = 0.775). The pre-treatment ADC values of different T stage tumors were not equal (p = 0.003); the overall trend was that higher T stages correlated with lower ADC values. ADC values were also significantly lower for tumors with extranodal tumor deposits (p = 0.006) and tumors with CA19-9 levels ≥35 U/ml (p = 0.006). There was a negative correlation between Ki-67 LI and the ADC value (r = −0.318, p = 0.026) and between the AgNOR count and the ADC value (r = −0.310, p = 0.030).
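The Ki-67 relationship is a rank correlation: Spearman's rho on paired ADC and Ki-67 values. A minimal sketch on synthetic data with a negative trend of roughly the reported size (hypothetical value ranges):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
ki67 = rng.uniform(10, 90, 49)                        # hypothetical Ki-67 LI (%)
adc = 1.30 - 0.0020 * ki67 + rng.normal(0, 0.12, 49)  # ADC in 10^-3 mm^2/s
rho, p = spearmanr(ki67, adc)
print(rho, p)  # weak negative correlation, as in the study's direction
```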

Conclusion

Significant correlations were found between the pre-treatment ADC values and T stage, extranodal tumor deposits, CA19-9 levels, Ki-67 LI, and AgNOR counts. Lower ADC values were associated with more aggressive tumor behavior. Therefore, the ADC value may serve as a useful biomarker for assessing the biological features of rectal cancers.
