Similar Literature
20 similar records found (search time: 31 ms)
1.

Background

Natural language processing (NLP) has been shown to be effective for analyzing the content of radiology reports and identifying diagnoses or patient characteristics. We evaluate the combination of NLP and machine learning to detect thromboembolic diagnoses and incidental clinically relevant findings in angiography and venography reports written in French. We model thromboembolic diagnoses and incidental findings as a set of concepts, modalities and relations between concepts that can be used as features by a supervised machine learning algorithm. A corpus of 573 radiology reports was de-identified and manually annotated, with the support of NLP tools, by a physician for relevant concepts, modalities and relations. A machine learning classifier was trained on the physician-annotated dataset to identify deep-vein thrombosis, pulmonary embolism and clinically relevant incidental findings. Decision models accounted for the imbalanced nature of the data and exploited the structure of the reports.

Results

The best model achieved an F measure of 0.98 for pulmonary embolism identification, 1.00 for deep-vein thrombosis, and 0.80 for incidental clinically relevant findings. The use of concepts, modalities and relations improved performance in all cases.

Conclusions

This study demonstrates the benefits of developing an automated method to identify medical concepts, modalities and relations in radiology reports in French. An end-to-end automatic system for annotation and classification, applicable to other radiology report databases, would be valuable for epidemiological surveillance, performance monitoring, and accreditation in French hospitals.
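The abstract above reports classifier quality as an F measure, the harmonic mean of precision and recall. As a quick reference, a minimal sketch in Python; the counts below are illustrative only and are not taken from the study:

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 49 true positives, 1 false positive, 1 false negative
print(round(f_measure(49, 1, 1), 2))  # 0.98
```

With balanced errors as above, precision and recall are both 0.98, so the F measure equals them; the harmonic mean penalizes a classifier whose precision and recall diverge.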

2.

Background

This study aimed to assess whether Chinese men who have sex with men (MSM) had a significantly elevated prevalence of psychiatric disorders compared to urban males in China.

Methods

A total of 807 MSM were recruited using respondent-driven sampling (RDS) in an urban area of northeast China. Psychiatric disorders were assessed using the Composite International Diagnostic Interview (CIDI, version 1.0) according to DSM-III-R criteria.

Results

Chinese MSM had significantly elevated standardized prevalence ratios (SPRs) for the lifetime prevalence of any disorder (SPR = 2.8; 95%CI: 2.5–3.2), mood disorders (SPR = 3.0; 95%CI: 2.3–3.7), anxiety disorders (SPR = 5.5; 95%CI: 4.6–6.5), alcohol use disorders (SPR = 2.4; 95%CI: 2.0–2.8), and comorbid disorders (SPR = 4.2; 95%CI: 3.4–5.1).

Conclusions

Chinese MSM had significantly elevated prevalence and comorbidity of psychiatric disorders. RDS is a suitable sampling method for psychiatric epidemiological surveys in the MSM population.
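A standardized prevalence ratio compares the observed number of cases in the study group with the number expected from reference-population rates. A hedged sketch of the computation, using the log-normal approximation for the confidence interval (the abstract does not state which CI method the authors used, and the counts below are invented for illustration):

```python
import math

def spr_with_ci(observed: int, expected: float, z: float = 1.96):
    """Standardized prevalence ratio (observed / expected) with an
    approximate 95% CI from the log-normal approximation,
    where SE[ln(SPR)] ~= 1 / sqrt(observed)."""
    spr = observed / expected
    se = 1.0 / math.sqrt(observed)
    return spr, spr * math.exp(-z * se), spr * math.exp(z * se)

# Invented counts: 280 observed cases vs. 100 expected
spr, lo, hi = spr_with_ci(observed=280, expected=100.0)
print(round(spr, 1), round(lo, 1), round(hi, 1))
```

The interval is asymmetric around the point estimate because it is computed on the log scale and then exponentiated.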

3.

Background

Only a few longitudinal studies on the course of asthma among adults have been carried out.

Objective

The aim of the present prospective study, carried out between 2000 and 2009 in Italy, was to assess asthma remission and control in adults with asthma, as well as their determinants.

Methods

All subjects with current asthma (21–47 years) identified in 2000 in the Italian Study on Asthma in Young Adults in 6 Italian centres were followed up. Asthma remission was assessed at follow-up in 2008–2009 (n = 214), and asthma control at both baseline and follow-up. Asthma remission and control were related to potential determinants using binomial and multinomial logistic models, respectively. Separate remission models were fitted for men and women.

Results

The estimated proportion of subjects in remission was 29.7% (95%CI: 14.4–44.9%). Among men, uncontrolled asthma at baseline was associated with a very low probability of remission at follow-up (OR = 0.06; 95%CI: 0.01–0.33), compared with OR = 0.40 (95%CI: 0.17–0.94) among women. The estimated proportions of subjects with controlled, partly controlled and uncontrolled asthma were 26.3% (95%CI: 21.2–31.3%), 51.6% (95%CI: 44.6–58.7%) and 22.1% (95%CI: 16.6–27.6%), respectively. Female gender, increasing age, the presence of chronic cough and phlegm, and partial or absent asthma control at baseline increased the risk of uncontrolled asthma at follow-up.

Conclusion

Asthma remission was achieved in nearly one-third of the subjects with active asthma in the Italian adult population, whereas the proportion of subjects with controlled asthma among the remaining subjects was still low.

4.

Background

The reoperation rate remains high after liver transplantation, and the impact of reoperation on graft and recipient outcomes is unclear. The aim of our study was to evaluate the impact of early reoperation following living-donor liver transplantation (LDLT) on graft and recipient survival.

Methods

Recipients who underwent LDLT (n = 111) at the University of Tokyo Hospital between January 2007 and December 2012 were divided into two groups, a reoperation group (n = 27) and a non-reoperation group (n = 84), and a case-control study was conducted.

Results

Early reoperation was performed in 27 recipients (24.3%). The mean time [standard deviation] from LDLT to reoperation was 10 [9.4] days. Female sex, Child-Pugh class C, non-HCV etiology, fulminant hepatitis, and the amount of intraoperative fresh frozen plasma (FFP) administered were identified as potentially predictive variables, among which female sex and the amount of FFP were identified as independent risk factors for early reoperation by multivariable analysis. The 3- and 6-month graft survival rates were 88.9% (95% confidence interval [CI], 70.7–96.4) and 85.2% (95%CI, 66.5–94.3), respectively, in the reoperation group (n = 27), and 95.2% (95%CI, 88.0–98.2) and 92.9% (95%CI, 85.0–96.8), respectively, in the non-reoperation group (n = 84) (log-rank test, p = 0.31). The 12- and 36-month overall survival rates were 96.3% (95%CI, 77.9–99.5) and 88.3% (95%CI, 69.3–96.2), respectively, in the reoperation group, and 89.3% (95%CI, 80.7–94.3) and 88.0% (95%CI, 79.2–93.4), respectively, in the non-reoperation group (log-rank test, p = 0.59).

Conclusions

Observed graft survival was lower for recipients who underwent reoperation than for those who did not, although the difference was not statistically significant. Recipient overall survival with reoperation was comparable to that without reoperation. These findings underscore the importance of vigilant surveillance for postoperative complications and early surgical rescue in the LDLT setting.

5.

Background

With the widespread use of anti-retroviral therapy (ART), individuals infected with human immunodeficiency virus (HIV) increasingly experience morbidity and mortality from respiratory disorders. However, the prevalence of, and the risk factors associated with, emphysema and bronchiolitis are largely unknown.

Methods

Thoracic computed tomography (CT) scans were performed in 1,446 HIV-infected patients who were on ART and attended a tertiary care metabolic clinic (mean age 48 years; 29% female). A detailed history and physical examination, including anthropometric measurements, were performed. Complete pulmonary function tests were performed in a subset of these patients (n = 364). No subjects were acutely ill with a respiratory condition at the time of CT scanning.

Findings

Nearly 50% of the subjects had CT evidence of emphysema, bronchiolitis or both, with 13% (n = 195) showing bronchiolitis, 19% (n = 274) emphysema and 16% (n = 238) both. These phenotypes were synergistically associated with reduced regular physical activity (p for interaction <.0001). The most significant risk factors for both phenotypes were cigarette smoking, intravenous drug use and peripheral leucocytosis. Together, the area under the curve statistic was 0.713 (p = 0.0037) for discriminating between those with and without these phenotypes. There were no significant changes in lung volumes or flow rates related to these phenotypes, though carbon monoxide diffusion capacity was reduced in the emphysema phenotype.

Interpretation

Emphysema and bronchiolitis are extremely common in HIV-infected patients treated with ART and can be identified by thoracic CT scanning.

6.
T Cheung, S Oberoi. PLoS ONE. 2012;7(8):e43405

Introduction

Children with cleft lip and palate (CLP) are known to have airway problems. Previous studies have shown that individuals with CLP have a 30% reduction in nasal airway size compared to non-cleft controls. No reports have been found on the cross-sectional area and volume of the pharyngeal airway in clefts. The introduction of cone-beam CT (CBCT) and imaging software has facilitated the generation of 3D images for assessment of the cross-sectional area and volume of the airway.

Objective

To assess the pharyngeal airway in individuals with CLP using CBCT by measuring volume and smallest cross-sectional area, and to compare the results with 19 age- and sex-matched non-cleft controls.

Methods

Retrospective study of CBCT data of pre-adolescent individuals (N = 19; mean age = 10.6 years; 7 females, 12 males; UCLP = 6, BCLP = 3) from the Center for Craniofacial Anomalies. Volumetric analysis was performed using image segmentation features in CB Works 3.0. Volume and smallest cross-sectional area were measured in both groups. Seven measurements were repeated to verify reliability using the Pearson correlation coefficient. Volume and cross-sectional area differences were analyzed using paired t-tests.

Results

The method was found to be reliable. Individuals with CLP did not exhibit smaller total airway volume or cross-sectional area than non-CLP controls.

Conclusion

3D imaging using CBCT and CB Works is reliable for assessing airway volume. Previous studies have shown that the nasal airway is restricted in individuals with CLP. In our study, we found that the pharyngeal airway is not compromised in these individuals.

7.

Objectives

In clinical models of frailty, the effects of disability and comorbidity on mortality are widely perceived as additive.

Design

National data were retrospectively extracted from the medical records of community hospitals.

Data Sources

A total of 12,804 acutely disabled patients admitted for inpatient rehabilitation in community rehabilitation hospitals in Singapore from 1996 through 2005 were followed up for death until 31 December 2011.

Outcome Measure

Cox proportional-hazards regression was used to assess the interaction of comorbidity and disability at discharge on all-cause mortality.

Results

During a median follow-up of 10.9 years, there were 8,565 deaths (66.9%). The mean age was 73.0 (standard deviation: 11.5) years. Independent risk factors for mortality were higher comorbidity (p<0.001), severity of disability at discharge (p<0.001), being widowed (adjusted hazard ratio [aHR]: 1.38, 95% confidence interval [CI]: 1.25–1.53), low socioeconomic status (aHR: 1.40, 95%CI: 1.29–1.53), discharge to a nursing home (aHR: 1.14, 95%CI: 1.05–1.22) and re-admission into acute care (aHR: 1.54, 95%CI: 1.45–1.65). In the main-effects model, those with high comorbidity had an aHR of 2.41 (95%CI: 2.13–2.72), whereas those with total disability had an aHR of 2.28 (95%CI: 2.12–2.46). In the interaction model, a synergistic interaction existed between comorbidity and disability (p<0.001): those with both high comorbidity and total disability had a much higher aHR of 6.57 (95%CI: 5.15–8.37).

Conclusions

Patients with greater comorbidity and disability at discharge, discharge to a nursing home or re-admission into acute care, lower socioeconomic status, and widowhood had higher mortality risk. Our results identified predictive variables of mortality that map well onto the frailty cascade model. Increasing comorbidity and disability interacted synergistically to increase mortality risk.
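The synergy claim in this abstract can be checked arithmetically: under a purely multiplicative model, the joint hazard ratio would equal the product of the two main-effect aHRs, and the reported joint aHR exceeds that product. A rough worked check using only the abstract's point estimates (confidence intervals are ignored here):

```python
# Point estimates taken from the abstract above
ahr_high_comorbidity = 2.41
ahr_total_disability = 2.28
ahr_joint = 6.57

# Expected joint aHR if the effects combined multiplicatively
expected_multiplicative = ahr_high_comorbidity * ahr_total_disability
print(round(expected_multiplicative, 2))  # 5.49

# The observed joint aHR exceeds this product, consistent with
# synergy on the multiplicative scale.
print(ahr_joint > expected_multiplicative)  # True
```

A formal test of this comparison is the interaction term in the Cox model, which the authors report as significant (p<0.001).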

8.

Background

The deleterious health effects of sedentary behaviors, independent of physical activity, are increasingly being recognized. However, associations with cognitive performance are not known.

Purpose

To estimate the associations between different sedentary behaviors and cognitive performance in healthy older adults.

Methods

Computer use, time spent watching television (TV), time spent reading and habitual physical activity levels were self-reported twice (in 2001 and 2007) by participants in the SUpplémentation en Vitamines et MinérauX (SU.VI.MAX and SU.VI.MAX2) study. Cognitive performance was assessed at follow-up (in 2007–2009) via a battery of 6 neuropsychological tests used to derive verbal memory and executive functioning scores. Analyses (ANCOVA) were performed among 1425 men and 1154 women aged 65.6±4.5 years at the time of the neuropsychological evaluation. We estimated mean differences with 95% confidence intervals (95%CI) in cognitive performance across categories of each type of sedentary behavior.

Results

In multivariable cross-sectional models, compared to non-users, participants using the computer for >1 h/day displayed better verbal memory (mean difference = 1.86; 95%CI: 0.95, 2.77) and executive functioning (mean difference = 2.15; 95%CI: 1.22, 3.08). A negative association was also observed between TV viewing and executive functioning. Additionally, participants who increased their computer use by more than 30 min between 2001 and 2007 showed better performance on both verbal memory (mean difference = 1.41; 95%CI: 0.55, 2.27) and executive functioning (mean difference = 1.41; 95%CI: 0.53, 2.28) compared to those who decreased their computer use during that period.

Conclusion

Specific sedentary behaviors are differentially associated with cognitive performance. In contrast to TV viewing, regular computer use may help maintain cognitive function during the aging process.

Clinical Trial Registration

clinicaltrial.gov (number NCT00272428).

9.

Background and Aims

Individuals with Lynch syndrome have a high lifetime risk of developing colorectal tumors. In this prospective cohort study of individuals with Lynch syndrome, we examined associations between use of dietary supplements and occurrence of colorectal adenomas.

Materials and Methods

Using data from 470 individuals with Lynch syndrome in a prospective cohort study, associations between dietary supplement use and colorectal adenoma risk were evaluated by calculating hazard ratios (HR) and 95% confidence intervals (CI) using Cox regression models adjusted for age, sex, and number of colonoscopies during person-time. Robust sandwich covariance estimation was used to account for dependency within families.

Results

Of the 470 mismatch repair gene mutation carriers, 122 (26.0%) developed a colorectal adenoma during an overall median person-time of 39.1 months. Forty percent of the study population used a dietary supplement. Use of any dietary supplement was not statistically significantly associated with colorectal adenoma risk (HR = 1.18; 95%CI: 0.80–1.73). Multivitamin use (HR = 1.15; 95%CI: 0.72–1.84), vitamin C use (HR = 1.57; 95%CI: 0.93–2.63), calcium use (HR = 0.69; 95%CI: 0.25–1.92), and fish oil supplements (HR = 1.60; 95%CI: 0.79–3.23) were also not associated with the occurrence of colorectal adenomas.

Conclusion

This prospective cohort study does not show inverse associations between dietary supplement use and the occurrence of colorectal adenomas among individuals with Lynch syndrome. Further research is warranted to determine whether dietary supplement use is associated with colorectal adenoma and colorectal cancer risk in MMR gene mutation carriers.

10.

Purpose

To evaluate the usefulness of 2-[18F] fluoro-2-deoxy-D-glucose-positron emission tomography/computed tomography (FDG-PET/CT) in the early detection of breast cancer tumor recurrences and its role in post-therapy surveillance.

Methods

FDG-PET/CT was performed on patients with increased serum CA 15-3 levels and/or clinical/radiologic suspicion of recurrence. A group of asymptomatic patients who underwent FDG-PET/CT in the post-therapy surveillance of breast cancer served as controls. The results were analyzed based on the patients' histological data, other imaging modalities and/or clinical follow-up. Recurrence was defined as evidence of recurrent lesions within 12 months of the FDG-PET/CT scan.

Results

Based on elevated serum CA 15-3 levels (n = 31) and clinical/radiologic suspicion (n = 40), 71 scans were performed for suspected recurrence, whereas 69 scans were performed for asymptomatic follow-up. The sensitivity and specificity of FDG-PET/CT were 87.5% and 87.1% in the patients with suspected recurrence and 77.8% and 91.7% in the asymptomatic patients. The positive predictive value in the patients with suspected recurrence (mainly due to elevated serum CA 15-3 levels) was higher than that in the asymptomatic patients (P = 0.013). Recurrence was proven in 56.3% (40/71) of the patients with suspected recurrence and in 13% (9/69) of the asymptomatic patients (P<0.001). FDG-PET/CT changed the planned management in 49.3% (35/71) of the patients with suspected recurrence and 10.1% (7/69) of the asymptomatic patients (P<0.001). After follow-up, 77.5% (55/71) of the patients with suspected recurrence and 97.1% (67/69) of the asymptomatic patients were alive at the end of the study (P<0.001).

Conclusions

FDG-PET/CT was able to detect recurrence, and its results altered the intended patient management in the post-therapy surveillance of breast cancer. FDG-PET/CT should be used preferentially in patients with increased serum CA 15-3 levels or with clinical/radiologic suspicion of recurrence, and may also be useful in asymptomatic patients.
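The sensitivity, specificity, and positive predictive value reported in this abstract follow directly from a 2×2 confusion table. A minimal sketch of the arithmetic; the counts below are invented for illustration and are not reconstructed from the study:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity and PPV from 2x2 confusion-table counts."""
    sensitivity = tp / (tp + fn)  # true recurrences correctly flagged
    specificity = tn / (tn + fp)  # non-recurrences correctly cleared
    ppv = tp / (tp + fp)          # positive scans that are true recurrences
    return sensitivity, specificity, ppv

# Hypothetical counts chosen only to illustrate the arithmetic
sens, spec, ppv = diagnostic_metrics(tp=35, fp=4, tn=27, fn=5)
print(round(sens, 3), round(spec, 3))  # 0.875 0.871
```

Note that PPV, unlike sensitivity and specificity, depends on the prevalence of recurrence in the scanned group, which is why the abstract reports a higher PPV in the suspected-recurrence group than in asymptomatic patients.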

11.

Objectives

To investigate the frequency of aortic calcifications at the outer edge of the false lumen and the frequency of fully circular aortic calcifications in a consecutive series of patients with aortic dissection who underwent contrast-enhanced CT.

Methods

The study population comprised 69 consecutive subjects aged 60 years and older with a contrast-enhanced CT scan demonstrating an aortic dissection. All CT scans were evaluated by two experienced observers for the frequency of aortic calcifications at the outer edge of the false lumen and the frequency of fully circular aortic calcifications. Interobserver reliability was evaluated using Cohen's kappa. Differences between groups were tested using unpaired t-tests and chi-square tests.

Results

Presumed media calcifications were observed in 22 (32%) of the patients aged 60 years and older and were found more frequently in chronic aortic dissection (N = 12/23, 52%) than in acute aortic dissection (N = 10/46, 22%).

Conclusion

Because the intima has been torn away by the aortic dissection, it is highly likely that the calcifications visualized on CT lie in the tunica media of the aorta.
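Cohen's kappa, used for interobserver reliability in this study, corrects raw agreement for the agreement expected by chance. A self-contained sketch; the two rating vectors below are invented for illustration:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    categories = set(rater_a) | set(rater_b)
    expected = sum((list(rater_a).count(c) / n) * (list(rater_b).count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Two observers rating 8 scans for presence (1) / absence (0) of calcification
a = [1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 1, 0, 0, 1, 0, 0, 0]
print(round(cohens_kappa(a, b), 2))  # 0.75
```

Here the raters agree on 7 of 8 scans (87.5% raw agreement), but after subtracting the 50% agreement expected by chance the kappa is 0.75.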

12.

Background

Timely information about disease severity can be central to the detection and management of outbreaks of acute respiratory infections (ARI), including influenza. We asked if two resources: 1) free text, and 2) structured data from an electronic medical record (EMR) could complement each other to identify patients with pneumonia, an ARI severity landmark.

Methods

A manual EMR review of 2,747 outpatient ARI visits with associated chest imaging identified x-ray reports that could support the diagnosis of pneumonia (kappa = 0.88; 95% CI 0.82–0.93), along with attendant cases with Possible Pneumonia (adds either cough, sputum, fever/chills/night sweats, dyspnea or pleuritic chest pain) or with Pneumonia-in-Plan (adds pneumonia stated as a likely diagnosis by the provider). The x-ray reports served as a reference to develop a text classifier using machine-learning software that did not require custom coding. To identify pneumonia cases, the classifier was combined with EMR-based structured data and with text analyses aimed at ARI symptoms in clinical notes.

Results

A total of 370 reference cases with Possible Pneumonia and 250 with Pneumonia-in-Plan were identified. The x-ray report text classifier increased the positive predictive value of otherwise identical EMR-based case-detection algorithms by 20–70%, while retaining sensitivities of 58–75%. These performance gains were independent of the case definitions and of whether patients were admitted to the hospital or sent home. Text analyses seeking ARI symptoms in clinical notes did not add further value.

Conclusion

Specialized software development is not required for automated text analyses to help identify pneumonia patients. These results begin to map an efficient, replicable strategy through which EMR data can be used to stratify ARI severity.

13.

Background

Intra-abdominal pressure (IAP) is an important parameter in the surveillance of intensive care unit patients. Standard values of IAP during pregnancy have not been well defined. The aim of this study was to assess IAP values in pregnant women before and after cesarean delivery.

Methods

This prospective study, carried out from January to December 2011 in a French tertiary care centre, included women with an uneventful pregnancy undergoing elective cesarean delivery at term. IAP was measured through a Foley catheter inserted in the bladder under spinal anaesthesia before cesarean delivery, and every 30 minutes during the first two hours in the immediate postoperative period.

Results

The study included 70 women. Mean IAP before cesarean delivery was 14.2 mmHg (95%CI: 6.3–23). This value was significantly higher than in the postoperative period: 11.5 mmHg (95%CI: 5–19.7) for the first measurement (p = 0.002). IAP did not change significantly during the following two postoperative hours (p = 0.2). Obese patients (n = 25) had a significantly higher preoperative IAP than non-obese patients: 15.7 vs. 12.4 mmHg; p = 0.02.

Conclusion

In term pregnancies, IAP values are significantly higher before delivery than in the post-partum period, during which they nonetheless remain, for at least two hours, at levels comparable to those seen after conventional abdominal surgery. Knowledge of these physiological changes in IAP may help prevent organ dysfunction/failure when abdominal compartment syndrome occurs after cesarean delivery.

14.

Objective

To evaluate how the country of origin affects the probability of being delivered by cesarean section when giving birth at public Portuguese hospitals.

Study Design

Women delivered of a singleton birth (n = 8228), recruited from five public level III maternity units (April 2005–August 2006) during assembly of a birth cohort, were classified according to country of origin and migration status as Portuguese (n = 7908), non-Portuguese European (n = 84), African (n = 77) or Brazilian (n = 159). A Poisson model was used to evaluate the association between country of birth and cesarean section, measured as adjusted prevalence ratios (PR) with 95% confidence intervals (95%CI).

Results

The cesarean section rate varied from 32.1% in non-Portuguese European to 48.4% in Brazilian women (p = 0.008). After adjustment for potential confounders, and with Portuguese women as the reference, Brazilian women presented a significantly higher prevalence of cesarean section (PR = 1.26; 95%CI: 1.08–1.47). The effect was more evident among multiparous women (PR = 1.39; 95%CI: 1.12–1.73) and was observed whether cesarean section was performed before labor (PR = 1.43; 95%CI: 0.99–2.06) or during labor (PR = 1.30; 95%CI: 1.07–1.58).

Conclusions

The rate of cesarean section was significantly higher among Brazilian women, independent of known risk factors or usual clinical indications, suggesting that cultural background influences the mode of delivery, overriding the expected standard of care and outcomes in public health services.

15.

Background

Incident reporting systems (IRS) are used to identify medical errors in order to learn from mistakes and improve patient safety in hospitals. However, IRS contain only a small fraction of occurring incidents. A more comprehensive overview of medical error in hospitals may be obtained by combining information from multiple sources. The WHO has developed the International Classification for Patient Safety (ICPS) in order to enable comparison of incident reports from different sources and institutions.

Methods

The aim of this paper was to provide a more comprehensive overview of medical error in hospitals using a combination of different information sources. Incident reports collected from IRS, patient complaints and retrospective chart review in an academic acute care hospital were classified using the ICPS. The main outcome measures were distribution of incidents over the thirteen categories of the ICPS classifier “Incident type”, described as odds ratios (OR) and proportional similarity indices (PSI).

Results

A total of 1012 incidents resulted in 1282 classified items. Large differences between data from IRS and patient complaints (PSI = 0.32) and from IRS and retrospective chart review (PSI = 0.31) were mainly attributable to behaviour (OR = 6.08), clinical administration (OR = 5.14), clinical process (OR = 6.73) and resources (OR = 2.06).

Conclusions

IRS do not capture all incidents in hospitals and should be combined with complementary information about diagnostic error and delayed treatment from patient complaints and retrospective chart review. Since incidents that are not recorded in IRS do not lead to remedial and preventive action in response to IRS reports, healthcare centres that have access to different incident detection methods should harness information from all sources to improve patient safety.

16.
17.

Background

Reduced estimated glomerular filtration rate (eGFR) using the cystatin-C derived equations might be a better predictor of cardiovascular disease (CVD) mortality compared with the creatinine-derived equations, but this association remains unclear in elderly individuals.

Aim

The aim of this study was to compare the predictive value of the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)-creatinine, CKD-EPI-cystatin C and CKD-EPI-creatinine-cystatin C eGFR equations for all-cause mortality and CVD events (hospitalizations and/or mortality).

Methods

Prospective cohort study of 1,165 elderly women aged >70 years. Associations between eGFR and outcomes were examined using Cox regression analysis. The accuracy of the eGFR equations for predicting outcomes was examined using receiver operating characteristic (ROC) analysis and the net reclassification improvement (NRI).

Results

The risk of all-cause mortality for every incremental reduction in eGFR was similar whether eGFR was determined using the CKD-EPI-creatinine, CKD-EPI-cystatin C or CKD-EPI-creatinine-cystatin C equation. Areas under the ROC curves of the CKD-EPI-creatinine, CKD-EPI-cystatin C and CKD-EPI-creatinine-cystatin C equations for all-cause mortality were 0.604 (95%CI 0.561–0.647), 0.606 (95%CI 0.563–0.649; p = 0.963) and 0.606 (95%CI 0.563–0.649; p = 0.894), respectively. For all-cause mortality, there was no improvement in the reclassification of eGFR categories using the CKD-EPI-cystatin C (NRI -4.1%; p = 0.401) or CKD-EPI-creatinine-cystatin C (NRI -1.2%; p = 0.748) equation compared with the CKD-EPI-creatinine equation. Similar findings were observed for CVD events.

Conclusion

eGFR derived from the CKD-EPI-cystatin C and CKD-EPI-creatinine-cystatin C equations did not improve accuracy or predictive ability for clinical events compared with the CKD-EPI-creatinine equation in this cohort of elderly women.
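The area under the ROC curve compared in this study has a useful interpretation: it equals the Mann-Whitney probability that a randomly chosen case (here, a death) receives a higher risk score than a randomly chosen non-case, with ties counting one-half. A minimal sketch of that equivalence; the scores below are invented:

```python
def auc_mann_whitney(scores_cases, scores_controls):
    """AUC as P(case score > control score), ties counted as 0.5."""
    wins = 0.0
    for c in scores_cases:
        for k in scores_controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(scores_cases) * len(scores_controls))

# Hypothetical risk scores: higher should mean higher risk of the event
cases = [0.9, 0.7, 0.6]
controls = [0.8, 0.4, 0.3, 0.2]
print(round(auc_mann_whitney(cases, controls), 3))  # 0.833
```

An AUC near 0.5, as with the values around 0.60 reported above, indicates only modest discrimination between those who did and did not experience the outcome.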

18.

Background

The serum Lens culinaris agglutinin-reactive fraction of α-fetoprotein (AFP-L3%) has been widely used as a serologic tumor marker for HCC diagnosis and follow-up surveillance. However, the prognostic value of high pre-treatment serum AFP-L3% in patients with hepatocellular carcinoma (HCC) remains controversial. We therefore conducted a meta-analysis to assess the relationship between high pre-treatment serum AFP-L3% and the clinical outcome of HCC.

Methods

Eligible studies were identified through systematic literature searches. A meta-analysis of fifteen studies (4,465 patients) was carried out to evaluate the association between high pre-treatment serum AFP-L3% and overall survival (OS) and disease-free survival (DFS) in HCC patients. Sensitivity and subgroup analyses were also conducted in this meta-analysis.

Results

Our analysis showed that high pre-treatment serum AFP-L3% implied poor OS (HR: 1.65, 95%CI: 1.45–1.89, p<0.00001) and DFS (HR: 1.80, 95%CI: 1.49–2.17, p<0.00001) in HCC. Subgroup analysis revealed an association between pre-treatment serum AFP-L3% and both endpoints (OS and DFS) in HCC patients with low AFP concentrations (HR: 1.96, 95%CI: 1.24–3.10, p = 0.004; HR: 2.53, 95%CI: 1.09–5.89, p = 0.03, respectively).

Conclusion

The current evidence suggests that high pre-treatment serum AFP-L3% levels indicate a poor prognosis for patients with HCC, and that AFP-L3% may have significant prognostic value in HCC patients with low AFP concentrations.
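Pooled hazard ratios like those reported here typically come from inverse-variance meta-analysis: each study's log-HR is weighted by the inverse of its variance, with the standard error recoverable from the study's 95% CI. A hedged fixed-effect sketch (the authors may well have used a random-effects model instead, and the study tuples below are invented):

```python
import math

def pool_hazard_ratios(studies):
    """Fixed-effect (inverse-variance) pooling of hazard ratios.
    Each study is (hr, ci_low, ci_high); SE[ln(HR)] is recovered
    from the 95% CI as (ln(hi) - ln(lo)) / (2 * 1.96)."""
    weighted_sum = total_weight = 0.0
    for hr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weight = 1.0 / se ** 2
        weighted_sum += weight * math.log(hr)
        total_weight += weight
    return math.exp(weighted_sum / total_weight)

# Invented check: two identical studies pool to their common HR
print(round(pool_hazard_ratios([(1.65, 1.45, 1.89), (1.65, 1.45, 1.89)]), 2))  # 1.65
```

Pooling is done on the log scale because log-HRs are approximately normally distributed, and the result is exponentiated back to a hazard ratio.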

19.

Background

Interval cancers are primary breast cancers diagnosed in women after a negative screening test and before the next screening invitation. Our aim was to evaluate risk factors for interval cancer and their subtypes and to compare the risk factors identified with those associated with incident screen-detected cancers.

Methods

We analyzed data from 645,764 women participating in the Spanish breast cancer screening program from 2000–2006, followed up until 2009. A total of 5,309 screen-detected and 1,653 interval cancers were diagnosed. Among the latter, 1,012 could be classified on the basis of findings in screening and diagnostic mammograms: 489 true interval cancers (48.2%), 235 false negatives (23.2%), 172 minimal-signs cancers (17.2%) and 114 occult tumors (11.3%). Information on the screening protocol and women's characteristics was obtained from the screening program registry. Cause-specific Cox regression models were used to estimate the hazard ratios (HR) of risk factors for interval cancer and incident screen-detected cancer. A multinomial regression model, with screen-detected tumors as the reference group, was used to assess the effect of breast density and other factors on the occurrence of interval cancer subtypes.

Results

A previous false-positive result was the main risk factor for interval cancer (HR = 2.71, 95%CI: 2.28–3.23); this risk was higher for false negatives (HR = 8.79, 95%CI: 6.24–12.40) than for true interval cancers (HR = 2.26, 95%CI: 1.59–3.21). A family history of breast cancer was associated with true interval cancers (HR = 2.11, 95%CI: 1.60–2.78), and a previous benign biopsy with false negatives (HR = 1.83, 95%CI: 1.23–2.71). High breast density was mainly associated with occult tumors (RRR = 4.92, 95%CI: 2.58–9.38), followed by true interval cancers (RRR = 1.67, 95%CI: 1.18–2.36) and false negatives (RRR = 1.58, 95%CI: 1.00–2.49).

Conclusion

The role of women's characteristics differs among interval cancer subtypes. This information could be useful for improving the effectiveness of breast cancer screening programmes and for better classifying subgroups of women with different risks of developing cancer.

20.

Objectives

To determine the prevalence of vitamin D deficiency (VDD) in adult medical, non-tuberculous (non-TB) patients. To investigate associations with VDD. To compare the results with a similar study in TB patients at the same hospital.

Design

Cross-sectional sample.

Setting

Central hospital in Malawi.

Participants

Adult non-TB patients (n = 157), inpatients and outpatients.

Outcome Measures

The primary outcome was the prevalence of VDD. Potentially causal associations sought included nutritional status, in/outpatient status, HIV status, anti-retroviral therapy (ART) and, by comparison with a previous study, a diagnosis of tuberculosis (TB).

Results

Hypovitaminosis D (≤75 nmol/L) occurred in 47.8% (75/157) of patients, and 16.6% (26/157) had VDD (≤50 nmol/L). None had severe VDD (≤25 nmol/L). VDD was found in 22.8% (23/101) of in-patients and 5.4% (3/56) of out-patients. In univariable analysis, in-patient status, ART use and low dietary vitamin D were significant predictors of VDD. VDD was less prevalent than in previously studied TB patients in the same hospital (68/161 = 42%). In multivariate analysis of the combined dataset from both studies, having TB (OR 3.61, 95%CI 2.02–6.43) and being an in-patient (OR 2.70, 95%CI 1.46–5.01) were significant independent predictors of VDD.

Conclusions

About half of the adult medical patients without TB had suboptimal vitamin D status, which was more common in in-patients. VDD is much more common in TB patients than in non-TB patients, even when other variables are controlled for, suggesting that vitamin D deficiency is associated with TB.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号