Similar Articles
20 similar articles found.
1.

Background

Diagnosing pediatric pneumonia is challenging in low-resource settings. The World Health Organization (WHO) has defined primary end-point radiological pneumonia for use in epidemiological and vaccine studies. However, radiography requires expertise and is often inaccessible. We hypothesized that plasma biomarkers of inflammation and endothelial activation may be useful surrogates for end-point pneumonia, and may provide insight into its biological significance.

Methods

We studied children with WHO-defined clinical pneumonia (n = 155) within a prospective cohort of 1,005 consecutive febrile children presenting to Tanzanian outpatient clinics. Based on x-ray findings, participants were categorized as primary end-point pneumonia (n = 30), other infiltrates (n = 31), or normal chest x-ray (n = 94). Plasma levels of 7 host response biomarkers at presentation were measured by ELISA. Associations between biomarker levels and radiological findings were assessed by Kruskal-Wallis test and multivariable logistic regression. Biomarker ability to predict radiological findings was evaluated using receiver operating characteristic curve analysis and Classification and Regression Tree analysis.

Results

Compared to children with normal x-ray, children with end-point pneumonia had significantly higher C-reactive protein, procalcitonin and Chitinase 3-like-1, while those with other infiltrates had elevated procalcitonin and von Willebrand Factor and decreased soluble Tie-2 and endoglin. Clinical variables were not predictive of radiological findings. Classification and Regression Tree analysis generated multi-marker models with improved performance over single markers for discriminating between groups. A model based on C-reactive protein and Chitinase 3-like-1 discriminated between end-point pneumonia and non-end-point pneumonia with 93.3% sensitivity (95% confidence interval 76.5–98.8), 80.8% specificity (72.6–87.1), positive likelihood ratio 4.9 (3.4–7.1), negative likelihood ratio 0.083 (0.022–0.32), and misclassification rate 0.20 (standard error 0.038).
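
The reported likelihood ratios can be recovered directly from the sensitivity and specificity above. A minimal arithmetic check in Python, using only the figures quoted in this abstract:

```python
# Likelihood ratios from the reported sensitivity and specificity.
sensitivity = 0.933   # 93.3%
specificity = 0.808   # 80.8%

positive_lr = sensitivity / (1 - specificity)   # ≈ 4.9, matching the abstract
negative_lr = (1 - sensitivity) / specificity   # ≈ 0.083, matching the abstract

print(f"LR+ = {positive_lr:.1f}, LR- = {negative_lr:.3f}")
```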

Conclusions

In Tanzanian children with WHO-defined clinical pneumonia, combinations of host biomarkers distinguished between end-point pneumonia, other infiltrates, and normal chest x-ray, whereas clinical variables did not. These findings generate pathophysiological hypotheses and may have potential research and clinical utility.

2.

Background and Purpose

The risk of stroke after a transient ischemic attack (TIA) is much higher for patients with a positive diffusion-weighted image (DWI), i.e., transient symptoms with infarction (TSI), than for those with a negative DWI. The aim of this study was to validate the predictive value of a web-based recurrence risk estimator (RRE; http://www.nmr.mgh.harvard.edu/RRE/) in patients with TSI.

Methods

Data from the prospective hospital-based TIA database of the First Affiliated Hospital of Zhengzhou University were analyzed. The RRE and ABCD2 scores were calculated within 7 days of symptom onset. The predictive outcome was ischemic stroke occurrence at 90 days. Receiver operating characteristic curves were plotted, and the predictive value of the two models was assessed by computing C statistics.

Results

A total of 221 eligible patients were prospectively enrolled, of whom 46 (20.81%) experienced a stroke within 90 days. The 90-day stroke risk in high-risk TSI patients (RRE ≥4) was 3.406-fold greater than in those at low risk (P <0.001). The C statistic of RRE (0.681; 95% confidence interval [CI], 0.592–0.771) was statistically higher than that of ABCD2 score (0.546; 95% CI, 0.454–0.638; Z = 2.115; P = 0.0344) at 90 days.

Conclusion

The RRE score had a higher predictive value than the ABCD2 score for assessing the 90-day risk of stroke after TSI.

3.
Some global models to predict the risk of diabetes may not be applicable to local populations. We aimed to develop and validate a score to predict type 2 diabetes mellitus (T2DM) in a rural adult Chinese population. Data for a cohort of 12,849 participants were randomly divided into derivation (n = 11,564) and validation (n = 1,285) datasets. A questionnaire interview and physical and blood biochemical examinations were performed at baseline (July to August 2007 and July to August 2008) and follow-up (July to August 2013 and July to October 2014). A Cox regression model was used to weight each variable in the derivation dataset. For each significant variable, a score was calculated by multiplying β by 100 and rounding to the nearest integer. Age, body mass index, triglycerides and fasting plasma glucose (scores 3, 12, 24 and 76, respectively) were predictors of incident T2DM. Model accuracy was assessed by the area under the receiver operating characteristic curve (AUC), with an optimal cut-off value of 936. With the derivation dataset, the sensitivity, specificity and AUC of the model were 66.7%, 74.0% and 0.768 (95% CI 0.760–0.776), respectively. With the validation dataset, the performance of the model was superior to that of the Chinese (simple), FINDRISC, Oman and IDRS models of T2DM risk but equivalent to the Framingham model, which is widely applicable in a variety of populations. Our model for predicting 6-year risk of T2DM could be used in a rural adult Chinese population.
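
The scoring rule described above (each point value is the Cox coefficient β multiplied by 100 and rounded) can be sketched as follows. The β values and the per-unit coding of each variable are assumptions chosen only to reproduce the published points (3, 12, 24 and 76); the original model specification is not given in the abstract.

```python
# Hypothetical sketch of the point-assignment rule: points = round(beta * 100).
# The beta values below are invented so that they reproduce the published points.
cox_betas = {"age": 0.03, "bmi": 0.12, "triglycerides": 0.24, "fpg": 0.76}
points = {var: round(beta * 100) for var, beta in cox_betas.items()}
# -> {'age': 3, 'bmi': 12, 'triglycerides': 24, 'fpg': 76}

def total_score(values: dict) -> float:
    """Sum the points weighted by an individual's (assumed unit-coded) values."""
    return sum(points[var] * values[var] for var in points)

# The total would then be compared with the reported optimal cut-off of 936
# to flag elevated 6-year T2DM risk (illustrative person below).
example = {"age": 40, "bmi": 24.5, "triglycerides": 1.4, "fpg": 5.6}
print(total_score(example), total_score(example) >= 936)
```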

4.

Background

Accurately predicting the probability of a live birth after in vitro fertilisation (IVF) is important for patients, healthcare providers and policy makers. Two prediction models (Templeton and IVFpredict) have been previously developed from UK data and are widely used internationally. The more recent of these, IVFpredict, was shown to have greater predictive power in the development dataset. The aim of this study was external validation of the two models and comparison of their predictive ability.

Methods and Findings

130,960 IVF cycles undertaken in the UK in 2008–2010 were used to validate and compare the Templeton and IVFpredict models. Discriminatory power was calculated using the area under the receiver operating characteristic curve, and calibration was assessed using a calibration plot and the Hosmer-Lemeshow statistic. The scaled modified Brier score, with measures of reliability and resolution, was calculated to assess overall accuracy. Both models were compared after updating for current live birth rates to ensure that the average observed and predicted live birth rates were equal. The discriminative power of both methods was comparable: the area under the receiver operating characteristic curve was 0.628 (95% confidence interval (CI): 0.625–0.631) for IVFpredict and 0.616 (95% CI: 0.613–0.620) for the Templeton model. IVFpredict had markedly better calibration and higher diagnostic accuracy, with a calibration plot intercept of 0.040 (95% CI: 0.017–0.063) and slope of 0.932 (95% CI: 0.839–1.025), compared with 0.080 (95% CI: 0.044–0.117) and 1.419 (95% CI: 1.149–1.690) for the Templeton model. Both models underestimated the live birth rate, but this was particularly marked in the Templeton model. Updating the models to reflect improvements in live birth rates since the models were developed enhanced their performance, but IVFpredict remained superior.
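
The calibration intercept and slope reported for the two models are usually obtained by regressing the observed outcome on the logit of the predicted probability. A minimal sketch of that recalibration step, using statsmodels on synthetic data (the data and variable names are illustrative, not the study's):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-ins: predicted live-birth probabilities and observed outcomes.
p_pred = rng.uniform(0.05, 0.6, size=5000)
y_obs = rng.binomial(1, p_pred)          # well calibrated by construction

# Calibration model: logit(P(y = 1)) = intercept + slope * logit(p_pred).
logit_pred = np.log(p_pred / (1 - p_pred))
X = sm.add_constant(logit_pred)
fit = sm.Logit(y_obs, X).fit(disp=0)

intercept, slope = fit.params            # ~0 and ~1 for a well-calibrated model
print(f"calibration intercept = {intercept:.3f}, slope = {slope:.3f}")
```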

Conclusion

External validation in a large population cohort confirms that IVFpredict has superior discrimination and calibration for informing patients, clinicians and healthcare policy makers of the probability of live birth following IVF.

5.

Background and Purpose

Stroke-associated pneumonia (SAP) is a common complication and an important cause of death during hospitalization. The A2DS2 (Age, Atrial fibrillation, Dysphagia, Sex, Stroke Severity) score was developed from the Berlin Stroke Registry and showed good performance for predicting SAP. We sought to examine the association between the A2DS2 score and SAP, and to determine whether the A2DS2 score predicts in-hospital death after acute ischemic stroke in a Chinese population.

Methods

This was a retrospective study. A total of 1,239 acute ischemic stroke patients were classified into a low A2DS2 score group (0–4) and a high A2DS2 score group (5–10). The primary outcome was in-hospital SAP. Logistic regression analyses were performed to assess the association between the A2DS2 score and SAP, and between the A2DS2 score and in-hospital death.

Results

The overall incidence rates of SAP and in-hospital mortality after acute ischemic stroke were 7.3% and 2.4%, respectively. The incidence rate of SAP was 3.3% in the low A2DS2 score group and 24.7% in the high score group (P<0.001). During hospitalization, 1.2% of patients in the low score group and 7.8% of patients in the high score group died (P<0.001). Multivariate regression demonstrated that patients in the high score group had a higher risk of SAP (OR = 8.888, 95% CI: 5.552–14.229) and of death (OR = 7.833, 95% CI: 3.580–17.137) than patients in the low score group.

Conclusions

The A2DS2 score was a strong predictor of SAP and in-hospital death in Chinese acute ischemic stroke patients. It might be a useful tool for identifying patients at high risk of SAP and death during hospitalization.

6.

Background

Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals, associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aims of this study were to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice.

Methods and Findings

A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with ≥3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR > 60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir, or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR ≤ 60 ml/min/1.73 m2. Poisson regression was used to develop a risk score, which was externally validated on two independent cohorts.

In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7–6.7; median follow-up 6.1 y, range 0.3–9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was −2 (interquartile range −4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group (risk score < 0, 33 events), rising to 1:47 and 1:6 in the medium (risk score 0–4, 103 events) and high risk groups (risk score ≥ 5, 505 events), respectively. The number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166–3,367); NNTH was 202 (95% CI 159–278) and 21 (95% CI 19–23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506–1,462), 88 (95% CI 69–121), and 9 (95% CI 8–10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor.

The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 (3.7%) developed CKD during 18,376 PYFU (median follow-up 7.4 y, range 0.3–12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 (1.6%) developed CKD during 8,452 PYFU (median follow-up 4.1 y, range 0.6–8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria.
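
The published risk strata and their 5-year CKD risks translate into a small lookup, and the number needed to harm is simply the reciprocal of an absolute risk increase. The sketch below reproduces only the strata and risks quoted above; the per-variable points and any drug-specific risk increase are not reproduced, so the NNTH call uses made-up risks for illustration.

```python
# Risk strata and 5-year CKD risks as reported in the D:A:D abstract.
FIVE_YEAR_RISK = {"low": 1 / 393, "medium": 1 / 47, "high": 1 / 6}

def stratum(total_score: int) -> str:
    """Map a total risk score to the published strata (<0, 0-4, >=5)."""
    if total_score < 0:
        return "low"
    if total_score <= 4:
        return "medium"
    return "high"

def nnth(risk_without_drug: float, risk_with_drug: float) -> float:
    """Number needed to harm = 1 / absolute risk increase (illustrative only)."""
    return 1.0 / (risk_with_drug - risk_without_drug)

print(stratum(-2), FIVE_YEAR_RISK[stratum(-2)])   # median baseline score -> low risk
print(round(nnth(0.010, 0.012)))                  # hypothetical 0.2%-point increase -> 500
```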

Conclusions

Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance, allowing patients and clinicians to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk of CKD.

7.
8.

Introduction

Pneumonia is the most frequent type of infection in cancer patients and a frequent cause of ICU admission. The primary aims of this study were to describe the clinical and microbiological characteristics and outcomes in critically ill cancer patients with severe pneumonia.

Methods

Prospective cohort study in 325 adult cancer patients admitted to three ICUs with severe pneumonia not acquired in the hospital setting. Demographic, clinical and microbiological data were collected.

Results

There were 229 (71%) patients with solid tumors and 96 (29%) with hematological malignancies. Overall, 75% of patients were in septic shock and 81% needed invasive mechanical ventilation. ICU and hospital mortality rates were 45.8% and 64.9%, respectively. Microbiological confirmation was obtained in 169 (52%) patients, with a predominance of Gram-negative bacteria [99 (58.6%)]. The most frequent pathogens were methicillin-sensitive S. aureus [42 (24.9%)], P. aeruginosa [41 (24.3%)] and S. pneumoniae [21 (12.4%)]. A relatively low incidence of MR pathogens [23 (13.6%)] was observed. Adequate antibiotics were prescribed for most patients [136 (80.5%)]. In multivariate analysis, septic shock at ICU admission [OR 5.52 (1.92–15.84)], the use of invasive mechanical ventilation [OR 12.74 (3.60–45.07)] and poor performance status [OR 3.00 (1.07–8.42)] were associated with increased hospital mortality.

Conclusions

Severe pneumonia is associated with high mortality rates in cancer patients. A relatively low rate of MR pathogens was observed, and severity of illness and organ dysfunction seem to be the best predictors of outcome in this population.

9.

Background

Soluble tumor necrosis factor receptors 1 (sTNFR1) and 2 (sTNFR2) have been associated with progression of renal failure, end-stage renal disease and mortality in early stages of chronic kidney disease (CKD), mostly in the context of diabetic nephropathy. The predictive value of these markers in advanced stages of CKD, irrespective of the specific cause of kidney disease, has not yet been defined. In this study, the relationship between sTNFR1 and sTNFR2 and the risk of adverse cardiovascular events (CVE) and all-cause mortality was investigated in a population with CKD stage 4-5, not yet on dialysis, to minimize confounding by renal function.

Patients and methods

In 131 patients with CKD stage 4-5, sTNFR1 and sTNFR2 were analysed for their association with a composite endpoint of all-cause mortality or first non-fatal CVE using univariate and multivariate Cox proportional hazards models. In the multivariate models, age, gender, CRP, eGFR and significant comorbidities were included as covariates.

Results

During a median follow-up of 33 months, 40 events (30.5%) occurred, of which 29 (22.1%) were deaths and 11 (8.4%) were first non-fatal CVEs. In univariate analysis, the hazard ratios (HR) of sTNFR1 and sTNFR2 for negative outcome were 1.49 (95% confidence interval (CI): 1.28-1.75) and 1.13 (95% CI: 1.06-1.20), respectively. After adjustment for clinical covariables (age, CRP, diabetes and a history of cardiovascular disease), both sTNFRs remained independently associated with outcome (HR: sTNFR1: 1.51, 95% CI: 1.30-1.77; sTNFR2: 1.13, 95% CI: 1.06-1.20). A subanalysis of the non-diabetic patients in the study population confirmed these findings, especially for sTNFR1.

Conclusion

sTNFR1 and sTNFR2 are independently associated with all-cause mortality or an increased risk of cardiovascular events in advanced CKD, irrespective of the cause of kidney disease.

10.

Objectives

To explore the relationship between non-alcoholic fatty liver disease (NAFLD) and the metabolic syndrome (MetS), and to evaluate the value of NAFLD as a marker for predicting the risk of MetS in a large-scale prospective cohort of a northern urban Han Chinese population.

Materials and Methods

A total of 17,920 cohort members who were MetS-free at baseline were included in the current study between 2005 and 2011. The baseline characteristics of the cohort were compared by NAFLD status at baseline and by MetS status after follow-up. Cox proportional hazards models were used to estimate the unadjusted and adjusted hazard ratios (HRs) for NAFLD at baseline predicting the risk of MetS.

Results

In total, 2,183 (12.18%) new cases of MetS occurred between 2005 and 2011. In the unadjusted model, the HR (95% CI) for NAFLD predicting MetS was 3.65 (3.35, 3.97). After adjusting for the confounding factors of age, gender, metabolic factors, smoking and exercise, the HR (95% CI) was 1.70 (1.55, 1.87). A gender difference was observed: the adjusted HRs (95% CIs) of NAFLD for predicting MetS were 2.06 (1.72, 2.46) in women and 1.55 (1.39, 1.72) in men. Moreover, 163 participants without any MetS component at baseline developed MetS, and the adjusted HR remained significant, at 1.87 (1.12, 3.13).

Conclusion

The present study indicates that NAFLD is an independent risk factor for MetS in a northern urban Han Chinese population, and that people with NAFLD should initiate weight and dietary control to prevent the occurrence of MetS.

11.

Background

The increasing burden of pneumonia in adults is an emerging health issue in the era of global population aging. This study was conducted to elucidate the burden of community-onset pneumonia (COP) and its etiologic fractions in Japan, the world’s most aged society.

Methods

A multicenter prospective surveillance for COP was conducted from September 2011 to January 2013 in Japan. All pneumonia patients aged ≥15 years, including those with community-acquired pneumonia (CAP) and health care-associated pneumonia (HCAP), were enrolled at four community hospitals on four major islands. The COP burden was estimated based on the surveillance data and national statistics.

Results

A total of 1,772 COP episodes out of 932,080 hospital visits were enrolled during the surveillance. The estimated overall incidence rates of adult COP, hospitalization, and in-hospital death were 16.9 (95% confidence interval, 13.6 to 20.9), 5.3 (4.5 to 6.2), and 0.7 (0.6 to 0.8) per 1,000 person-years (PY), respectively. The incidence rates sharply increased with age; the incidence in people aged ≥85 years was 10-fold higher than that in people aged 15-64 years. The estimated annual number of adult COP cases in the entire Japanese population was 1,880,000, and 69.4% were aged ≥65 years. Aspiration-associated pneumonia (630,000) was the leading etiologic category, followed by Streptococcus pneumoniae-associated pneumonia (530,000), Haemophilus influenzae-associated pneumonia (420,000), and respiratory virus-associated pneumonia (420,000), including influenza-associated pneumonia (30,000).

Conclusions

A substantial portion of the COP burden occurs among elderly members of the Japanese adult population. In addition to the introduction of effective vaccines for S. pneumoniae and influenza, multidimensional approaches are needed to reduce the pneumonia burden in an aging society.

12.

Background

There exist several risk stratification systems for predicting mortality of emergency patients. However, some are complex in clinical use and others have been developed using suboptimal methodology. The objective was to evaluate the capability of the staff at a medical admission unit (MAU) to use clinical intuition to predict in-hospital mortality of acutely admitted patients.

Methods

This is an observational prospective cohort study of adult patients (15 years or older) admitted to a MAU at a regional teaching hospital. The nursing staff and physicians predicted in-hospital mortality upon the patients' arrival. We calculated discriminatory power as the area under the receiver-operating-characteristic curve (AUROC) and accuracy of prediction (calibration) by Hosmer-Lemeshow goodness-of-fit test.

Results

We had a total of 2,848 admissions (2,463 patients); 89 (3.1%) died while admitted. The nursing staff assessed 2,404 admissions and predicted mortality in 1,820 (63.9%); the AUROC was 0.823 (95% CI: 0.762–0.884) and calibration was poor. Physicians assessed 738 admissions and predicted mortality in 734 (25.8% of all admissions); the AUROC was 0.761 (95% CI: 0.657–0.864) and calibration was poor. AUROC and calibration increased with experience. When nursing staff and physicians were in agreement (±5%), discriminatory power was very high, 0.898 (95% CI: 0.773–1.000), and calibration almost perfect. Combining an objective risk prediction score with staff predictions added very little.
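
For readers less familiar with the two measures used above, the sketch below shows one common way to compute the AUROC and a Hosmer-Lemeshow-type goodness-of-fit statistic from predicted probabilities. The data are synthetic and the decile-based grouping is a conventional choice, not necessarily the exact procedure the authors used.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
p_pred = rng.uniform(0.0, 0.3, size=2000)   # synthetic staff-predicted death probabilities
y_obs = rng.binomial(1, p_pred)             # synthetic observed in-hospital deaths

print("AUROC:", roc_auc_score(y_obs, p_pred))

def hosmer_lemeshow(y, p, groups=10):
    """Chi-square over risk deciles; a high p-value suggests adequate calibration."""
    order = np.argsort(p)
    chi2 = 0.0
    for idx in np.array_split(order, groups):
        observed, expected, n = y[idx].sum(), p[idx].sum(), len(idx)
        chi2 += (observed - expected) ** 2 / (expected * (1 - expected / n) + 1e-12)
    return chi2, stats.chi2.sf(chi2, groups - 2)

print("Hosmer-Lemeshow chi2, p:", hosmer_lemeshow(y_obs, p_pred))
```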

Conclusions

Using only clinical intuition, staff in a medical admission unit has a good ability to identify patients at increased risk of dying while admitted. When nursing staff and physicians agreed on their prediction, discriminatory power and calibration were excellent.

13.
Vitamin C may reduce risk of hypertension, either in itself or by marking a healthy diet pattern. We assessed whether plasma ascorbic acid and the a priori diet quality score relate to incident hypertension and whether they explain each other's predictive abilities. Data were from 2884 black and white adults (43% black, mean age 35 years) initially hypertension-free in the Coronary Artery Risk Development in Young Adults Study (study year 10, 1995–1996). Plasma ascorbic acid was assessed at year 10 and the diet quality score at year 7. Eight hundred and forty cases of hypertension were documented between years 10 and 25. After multiple adjustments, each 12-point (1 SD) higher diet quality score at year 7 related to a mean 3.7 μmol/L (95% CI 2.9 to 4.6) higher plasma ascorbic acid at year 10. In separate multiple-adjusted Cox regression models, the hazard ratio of hypertension per 19.6-μmol/L (1 SD) higher ascorbic acid was 0.85 (95% CI 0.79–0.92) and per 12-point higher diet score 0.86 (95% CI 0.79–0.94). These hazard ratios changed little with mutual adjustment of ascorbic acid and diet quality score for each other, or when adjusted for anthropometric variables, diabetes, and systolic blood pressure at year 10. Intake of dietary vitamin C and several food groups high in vitamin C content were inversely related to hypertension, whereas supplemental vitamin C was not. In conclusion, plasma ascorbic acid and the a priori diet quality score independently predict hypertension. This suggests that hypertension risk is reduced by improving overall diet quality and/or vitamin C status. The inverse association seen for dietary but not supplemental vitamin C suggests that vitamin C status is best improved by eating foods rich in vitamin C, alongside not smoking and other dietary habits that prevent ascorbic acid depletion.

14.

Introduction

Nomograms are statistical predictive models that can provide the probability of a clinical event. Nomograms have better performance for the estimation of individual risks because of their increased accuracy and objectivity relative to physicians’ personal experiences. Recently, a nomogram for predicting the likelihood that a thyroid nodule is malignant was introduced by Nixon. The aim of this study was to determine whether Nixon’s nomogram can be validated in a Chinese population.

Materials and Methods

All consecutive patients with thyroid nodules who underwent surgery between January and June 2012 in our hospital were enrolled to validate Nixon’s nomogram. Univariate and multivariate analyses were used to identify the risk factors for thyroid carcinoma. Discrimination and calibration were employed to evaluate the performance of Nixon’s model in our population.

Results

A total of 348 consecutive patients with 409 thyroid nodules were enrolled. Thyroid ultrasonographic characteristics, including shape, echo texture, calcification, margins, vascularity and number (solitary vs. multiple nodules), were associated with malignancy in the multivariate analysis. The discrimination of Nixon's nomogram was satisfactory in the all-nodules group, the group with a low risk of malignancy (predictive proportion <50%) and the group with a high risk of malignancy (predictive proportion ≥50%); the areas under the receiver operating characteristic curve for the three groups were 0.87, 0.75 and 0.72, respectively. However, calibration was adequate (p = 0.55) only in the high-risk group.

Conclusion

Nixon’s nomogram is a valuable predictive model for the Chinese population and has been externally validated. It has good performance for patients with a high risk of malignancy and may be more suitable for use with these patients in China.

15.

Background/Objectives

Evidence on the association between physical activity and lung function in children is sparse. The aim of this study was to evaluate lung function growth in relation to physical activity level among Chinese children.

Methods

A total of 1,713 school children aged 9.89±0.86 years who were asthma-free at baseline were followed up for 18 months from 2006 to 2008 in Guangzhou, China. Information on physical activity and other socioeconomic characteristics was obtained from self-administered questionnaires. Lung function tests were performed with a standard procedure.

Results

At the baseline survey, physically active girls had significantly higher forced vital capacity (FVC) than inactive girls (1.79 l vs. 1.75 l, p<0.05). During the follow-up period, growth rates of lung function indices were significantly higher for girls who were physically active at either or both follow-up surveys than for those inactive at both surveys: forced expiratory flow at 25% (FEF25) difference per year (dpy), 0.20 l/s vs. 0.15 l/s; forced expiratory flow at 75% (FEF75) dpy, 0.57 l/s vs. 0.45 l/s; and forced expiratory flow between 25% and 75% (FEF25-75) dpy, 0.36 l/s vs. 0.28 l/s (all p<0.05).

Conclusions

Physical activity is positively associated with lung function growth among Chinese school-aged girls. Promotion of physical activity among children is of great importance.

16.

Objectives

This study aimed to update and validate a prediction rule for respiratory syncytial virus (RSV) hospitalization in preterm infants 33–35 weeks gestational age (WGA).

Study Design

The RISK study consisted of 2 multicenter prospective birth cohorts in 41 hospitals. Risk factors were assessed at birth among healthy preterm infants 33–35 WGA. All hospitalizations for respiratory tract infection were screened for proven RSV infection by immunofluorescence or polymerase chain reaction. Multivariate logistic regression analysis was used to update an existing prediction model in the derivation cohort (n = 1,227). In the validation cohort (n = 1,194), predicted versus actual RSV hospitalization rates were compared to determine validity of the model.

Results

RSV hospitalization risk in both cohorts was comparable (5.7% versus 4.9%). In the derivation cohort, a prediction rule to determine probability of RSV hospitalization was developed using 4 predictors: family atopy (OR 1.9; 95%CI, 1.1–3.2), birth period (OR 2.6; 1.6–4.2), breastfeeding (OR 1.7; 1.0–2.7) and siblings or daycare attendance (OR 4.7; 1.7–13.1). The model showed good discrimination (c-statistic 0.703; 0.64–0.76, 0.702 after bootstrapping). External validation showed good discrimination and calibration (c-statistic 0.678; 0.61–0.74).
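
Applying a logistic prediction rule of this kind amounts to summing the log-odds contributions of the predictors present and converting the result to a probability. The sketch below uses the published odds ratios, but the intercept is a placeholder (the abstract does not report it), so the output is illustrative only.

```python
import math

# Published odds ratios for the four predictors (from the abstract).
ODDS_RATIOS = {
    "family_atopy": 1.9,
    "birth_period": 2.6,
    "breastfeeding": 1.7,
    "siblings_or_daycare": 4.7,
}
INTERCEPT = -4.5   # placeholder; the true model intercept is not given in the abstract

def rsv_hospitalisation_probability(flags: dict) -> float:
    """Logistic rule: logit = intercept + sum of log(OR) for predictors present."""
    logit = INTERCEPT + sum(
        math.log(oratio) for name, oratio in ODDS_RATIOS.items() if flags.get(name)
    )
    return 1.0 / (1.0 + math.exp(-logit))

print(rsv_hospitalisation_probability({"birth_period": True, "siblings_or_daycare": True}))
```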

Conclusions

Our prospectively validated prediction rule identifies infants at increased RSV hospitalization risk, who may benefit from targeted preventive interventions. This prediction rule can facilitate country-specific, cost-effective use of RSV prophylaxis in late preterm infants.

17.

Background

We investigated predictors of 90-day mortality in a large cohort of patients with non-specific cancer of unknown primary (CUP).

Methods

Predictors were identified by univariate and then logistic regression analysis in a single-center cohort comprising 429 patients (development cohort). We identified four predictors, which were combined into a predictive score that was then applied to an independent multi-institutional cohort of 409 patients (validation cohort). The score was the number of predictors present for each patient (0 to 4).

Results

The 90-day mortality rate was 33% in the development cohort and 26% in the validation cohort. Multivariate analysis identified 4 predictors of 90-day mortality: performance status >1 (OR = 3.03, p = 0.001), at least one comorbidity requiring treatment (OR = 2.68, p = 0.004), LDH >1.5× the upper limit of normal (OR = 2.88, p = 0.007) and low albumin or protein levels (OR = 3.05, p = 0.007). In the development cohort, 90-day mortality rates were 12.5%, 32% and 64% when the score was 0–1, 2 and 3–4, respectively. In the validation cohort, the corresponding risks were 13%, 25% and 62%.
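
Because the score is simply the count of adverse predictors present, bedside use reduces to a small lookup. A sketch under that reading of the abstract (predictor names are paraphrased; the mortality figures are the development-cohort rates reported above):

```python
# Count-based score (0-4) from the four predictors reported in the abstract.
def cup_score(ps_over_1, comorbidity_on_treatment, ldh_over_1_5_uln, low_albumin_or_protein):
    return sum([ps_over_1, comorbidity_on_treatment, ldh_over_1_5_uln, low_albumin_or_protein])

# Development-cohort 90-day mortality by score band (from the abstract).
def mortality_band(score: int) -> str:
    if score <= 1:
        return "12.5%"
    if score == 2:
        return "32%"
    return "64%"   # score 3-4

print(mortality_band(cup_score(True, False, True, True)))   # -> "64%"
```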

Conclusions

We validated a score, easily calculated at the bedside, that estimates the 90-day mortality rate in non-specific CUP patients. It could help identify patients who would be better served by palliative care rather than aggressive chemotherapy.

18.
Evidence from cohort studies on the effect of dietary patterns on blood cholesterol is scarce. This study aimed to identify the association of dietary patterns with lipid profile, especially cholesterol, in a cohort in north China. Using a 1-year food frequency questionnaire, we assessed the dietary intake of 4,515 adults aged 20-74 years from the Harbin People's Health Study in 2008. Principal component analysis was used to identify dietary patterns. The follow-up was completed in 2012. Fasting blood samples were collected for the determination of blood lipid concentrations. Logistic regression models were used to evaluate the association of dietary patterns with the incidence of hypercholesterolemia, hypertriglyceridemia, and low-HDL cholesterolemia. Five dietary patterns were identified ("staple food", "vegetable, fruit and milk", "potato, soybean and egg", "snack", and "meat"). The relative risk (RR) between the extreme tertiles of the snack dietary pattern scores was 1.72 (95% CI = 1.14, 2.59, P = 0.004) for hypercholesterolemia and 1.39 (1.13, 1.75, P = 0.036) for hypertriglyceridemia, after adjustment for age, sex, education, body mass index, smoking, alcohol consumption, energy intake, exercise and baseline lipid concentrations. There was a significant positive association between the snack dietary pattern scores and fasting serum total cholesterol (SRC (standardized regression coefficient) = 0.262, P = 0.025), LDL-c (SRC = 0.324, P = 0.002) and triglycerides (SRC = 0.253, P = 0.035), after adjustment for the multiple variables above. Moreover, the adjusted RR of hypertriglyceridemia between the extreme tertiles was 0.73 (0.56, 0.94, P = 0.025) for the vegetable, fruit and milk dietary pattern and 1.86 (1.33, 2.41, P = 0.005) for the meat dietary pattern. The snack dietary pattern was a newly emerged dietary pattern in northern Chinese adults. It appears conceivable that the risk of hypercholesterolemia can be reduced by changing the snack dietary pattern.
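
Dietary patterns of this kind are typically derived by running principal component analysis on the food-frequency matrix and treating each component's scores as a pattern score. A minimal scikit-learn sketch on synthetic data (the food groups and loadings are invented; only the general technique is taken from the abstract):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Synthetic food-frequency data: rows = participants, columns = food groups.
ffq = rng.poisson(lam=3.0, size=(500, 20)).astype(float)

# Standardise, then extract five components, mirroring the five reported patterns.
scores = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(ffq))

# Tertiles of a pattern score, as used for the extreme-tertile risk comparisons.
first_pattern = scores[:, 0]
tertile = np.digitize(first_pattern, np.quantile(first_pattern, [1 / 3, 2 / 3]))
print(scores.shape, np.bincount(tertile))
```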

19.

Background

Most existing risk stratification systems predicting mortality in emergency departments or admission units are complex in clinical use or have not been validated to a level where use is considered appropriate. We aimed to develop and validate a simple system that predicts seven-day mortality of acutely admitted medical patients using routinely collected variables obtained within the first minutes after arrival.

Methods and Findings

This observational prospective cohort study used three independent cohorts at the medical admission units at a regional teaching hospital and a tertiary university hospital and included all adult (≥15 years) patients. Multivariable logistic regression analysis was used to identify the clinical variables that best predicted the endpoint. From this, we developed a simplified model that can be calculated without specialized tools or loss of predictive ability. The outcome was defined as seven-day all-cause mortality. 76 patients (2.5%) met the endpoint in the development cohort, 57 (2.0%) in the first validation cohort, and 111 (4.3%) in the second. Systolic blood Pressure, Age, Respiratory rate, loss of Independence, and peripheral oxygen Saturation were associated with the endpoint (full model). Based on this, we developed a simple score (range 0–5), i.e., the PARIS score, by dichotomizing the variables. The ability to identify patients at increased risk (discriminatory power and calibration) was excellent for all three cohorts using both models. For patients with a PARIS score ≥3, sensitivity was 62.5–74.0%, specificity 85.9–91.1%, positive predictive value 11.2–17.5%, and negative predictive value 98.3–99.3%. Patients with a score ≤1 had a low mortality (≤1%); with 2, intermediate mortality (2–5%); and ≥3, high mortality (≥10%).
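
The operating characteristics quoted for a PARIS score ≥3 are the standard 2×2-table quantities. As a reminder of how they are computed (the counts below are synthetic, not taken from the study):

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 table (synthetic counts).
tp, fp, fn, tn = 50, 300, 25, 2500   # score >= 3 vs. seven-day mortality, illustrative only

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sensitivity:.1%} spec={specificity:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```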

Conclusions

Seven-day mortality can be predicted upon admission with high sensitivity and specificity and excellent negative predictive values.

20.

Background

Prediction of disease-specific survival (DSS) for individual patients with gastric cancer after R0 resection remains a clinical concern. Since the clinicopathologic characteristics of gastric cancer vary widely between China and western countries, this study evaluated a nomogram from Memorial Sloan-Kettering Cancer Center (MSKCC) for predicting the probability of DSS in patients with gastric cancer from a Chinese cohort.

Methods

From 1998 to 2007, clinical data of 979 patients with gastric cancer who underwent R0 resection were retrospectively collected from Peking University Cancer Hospital & Institute and used for external validation. The performance of the MSKCC nomogram in our population was assessed using concordance index (C-index) and calibration plot.

Results

The C-index for the MSKCC predictive nomogram was 0.74 in the Chinese cohort, compared with 0.69 for the American Joint Committee on Cancer (AJCC) staging system (P<0.0001). This suggests that the discriminating value of the MSKCC nomogram is superior to that of the AJCC staging system for prognostic prediction in the Chinese population. Calibration plots showed that the actual survival of Chinese patients corresponded closely to the MSKCC nomogram-predicted survival probabilities. Moreover, MSKCC nomogram predictions demonstrated the heterogeneity of survival in stage IIA/IIB/IIIA/IIIB disease in the Chinese patients.

Conclusion

In this study, we externally validated the MSKCC nomogram for predicting the probability of 5- and 9-year DSS after R0 resection for gastric cancer in a Chinese population. The MSKCC nomogram performed well, with good discrimination and calibration. It improved individualized predictions of survival and may assist Chinese clinicians and patients in individual follow-up scheduling and decision making with regard to various treatment options.
