Similar Articles

20 similar articles found.
1.

Objective

Although surgical-site infection (SSI) rates are advocated as a major evaluation criterion, the reproducibility of SSI diagnosis is unknown. We assessed agreement in diagnosing SSI among specialists involved in SSI surveillance in Europe.

Methods

Twelve case-vignettes based on suspected SSI were submitted to 100 infection-control physicians (ICPs) and 86 surgeons in 10 European countries. Each participant scored eight randomly-assigned case-vignettes on a secure online relational database. The intra-class correlation coefficient (ICC) was used to assess agreement for SSI diagnosis on a 7-point Likert scale and the kappa coefficient to assess agreement for SSI depth on a three-point scale.
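For readers who want to reproduce these agreement statistics, the following minimal Python sketch computes a one-way random-effects ICC and Cohen's kappa on hypothetical rater data. The study's actual estimation (many raters, randomly assigned vignettes, confidence intervals) is more involved; all data and function names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories):
    """Cohen's kappa for two raters on a categorical scale."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()
    p_obs = np.trace(conf)             # observed agreement
    p_exp = conf.sum(0) @ conf.sum(1)  # chance agreement from the marginals
    return (p_obs - p_exp) / (1 - p_exp)

def icc_oneway(scores):
    """One-way random-effects ICC(1,1); rows = targets (vignettes), cols = raters."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ms_between = k * ((scores.mean(1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical data: 6 vignettes scored by 4 raters on a 7-point Likert scale
rng = np.random.default_rng(0)
likert = rng.integers(1, 8, size=(6, 4))
print("ICC(1,1):", round(icc_oneway(likert), 3))

# Hypothetical SSI-depth calls (0=superficial, 1=deep, 2=organ/space) by two raters
depth1 = [0, 1, 2, 0, 1, 2, 0, 1]
depth2 = [0, 1, 1, 0, 2, 2, 0, 1]
print("kappa:", round(cohens_kappa(depth1, depth2, 3), 3))
```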

Results

Intra-specialty agreement for SSI diagnosis ranged across countries and specialties from 0.00 (95% CI, 0.00–0.35) to 0.65 (0.45–0.82). Inter-specialty agreement varied across countries from 0.04 (0.00–0.62) to 0.55 (0.37–0.74; Germany). For all countries pooled, intra-specialty agreement was poor for surgeons (0.24, 0.14–0.42) and good for ICPs (0.41, 0.28–0.61). Reading the SSI definitions improved agreement among ICPs (0.57) but not among surgeons (0.09). Intra-specialty agreement for SSI depth ranged across countries and specialties from 0.05 (0.00–0.10) to 0.50 (0.45–0.55) and was not improved by reading the SSI definitions.

Conclusion

Among ICPs and surgeons evaluating case-vignettes of suspected SSI, considerable disagreement occurred regarding the diagnosis, with variations across specialties and countries.

2.

Objectives

The Nurses Work Functioning Questionnaire (NWFQ) is a 50-item self-report questionnaire developed specifically for nurses and allied health professionals. Its seven subscales measure impairments in work functioning due to common mental disorders. The aim of this study was to evaluate the psychometric properties of the NWFQ by assessing its reproducibility and construct validity.

Methods

The questionnaire was administered to 314 nurses and allied health professionals, with a re-test in 112 subjects. Reproducibility was assessed with intraclass correlation coefficients (ICC) and the standard error of measurement (SEM). For construct validity, correlations were calculated with a general work functioning scale, the Endicott Work Productivity Scale (EWPS) (convergent validity), and with a physical functioning scale (divergent validity). For discriminative validity, a Mann-Whitney U test was performed to test for significant differences between subjects with and without mental health complaints.
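The reported SEM can be derived from the ICC via the standard relation SEM = SD·√(1 − ICC). Below is a brief sketch on assumed test-retest data; Pearson r is used here as a simple stand-in for the ICC, and the group data for the Mann-Whitney U test are simulated, so none of the numbers reflect the study itself.

```python
import numpy as np
from scipy import stats

# Hypothetical test-retest subscale scores for the same respondents
rng = np.random.default_rng(1)
true = rng.normal(50, 10, 112)
test = true + rng.normal(0, 4, 112)
retest = true + rng.normal(0, 4, 112)

# Pearson r as a shortcut reliability estimate under equal variances;
# a full two-way mixed-effects ICC would be the more rigorous choice.
icc = stats.pearsonr(test, retest)[0]
sd_pooled = np.sqrt((test.var(ddof=1) + retest.var(ddof=1)) / 2)
sem = sd_pooled * np.sqrt(1 - icc)  # standard error of measurement
print(f"ICC ~ {icc:.2f}, SEM ~ {sem:.2f}")

# Discriminative validity: Mann-Whitney U between hypothetical groups
group_a = rng.normal(55, 10, 60)  # with mental health complaints
group_b = rng.normal(48, 10, 60)  # without
u, p = stats.mannwhitneyu(group_a, group_b)
print(f"U={u:.0f}, p={p:.4f}")
```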

Results

All subscales showed good reliability (ICC: 0.72–0.86), except for one (ICC = 0.16). Convergent validity was good in six subscales, with correlations ranging from 0.38 to 0.62; in one subscale, however, the correlation with the EWPS was too low (0.22). Divergent validity was good in all subscales, with correlations ranging from −0.06 to −0.23. Discriminative validity was good in all subscales, based on significant differences between subjects with and without mental health complaints (p<0.001 to p = 0.003).

Conclusion

The NWFQ demonstrates good psychometric properties for six of the seven subscales. The subscale “impaired decision making” needs improvement before further use.

3.

Background

It remains unclear whether patients infected with the human immunodeficiency virus (HIV) who require implant orthopaedic surgery are at increased risk of post-operative surgical site infection (SSI). We conducted a systematic review to determine the effect of HIV on the risk of post-operative SSI and sought to determine whether this risk is altered by antibiotic use beyond 24 hours.

Methods

We searched electronic databases, manually searched citations from relevant articles, and reviewed conference proceedings. The risk of postoperative SSI was pooled using the Mantel-Haenszel method.
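As a rough illustration of the pooling step, a Mantel-Haenszel pooled risk ratio can be computed from per-study 2×2 tables as below. The study counts shown are hypothetical, not taken from the review.

```python
def mh_pooled_rr(tables):
    """Mantel-Haenszel pooled risk ratio.
    Each table: (events_exposed, n_exposed, events_unexposed, n_unexposed)."""
    num = den = 0.0
    for a, n1, c, n0 in tables:
        N = n1 + n0
        num += a * n0 / N
        den += c * n1 / N
    return num / den

# Hypothetical per-study data: (SSI in HIV+, HIV+ total, SSI in HIV-, HIV- total)
studies = [
    (8, 40, 10, 120),
    (5, 25, 12, 200),
    (3, 30, 4, 90),
]
print("Pooled RR:", round(mh_pooled_rr(studies), 2))
```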

Results

We identified 18 cohort studies, 16 of them small, addressing the subject. The pooled risk ratio of infection in HIV-infected patients compared with non-HIV patients was 1.8 (95% confidence interval [CI] 1.3–2.4); in studies from Africa it was 2.3 (95% CI 1.5–3.5). In a sensitivity analysis the risk ratio was reduced to 1.4 (95% CI 0.5–3.8). The risk ratio of infection in patients receiving prolonged antibiotics compared with patients receiving antibiotics for up to 24 hours was 0.7 (95% CI 0.1–4.2).

Conclusions

The results may indicate an increased risk in HIV-infected patients, but they are not robust and become inconclusive after a sensitivity analysis removing poor-quality studies. Larger, high-quality studies are needed to provide conclusive evidence. To better inform surgical protocols, further studies should determine the effect of reduced CD4 counts, viral load suppression and prolonged antibiotics on the risk of infection.

4.
Honeybul S, Ho K, O'Hanlon S. PLoS ONE. 2012;7(2):e32375.

Background

Decompressive craniectomy has traditionally been used as a lifesaving rescue treatment in severe traumatic brain injury (TBI). This study assessed whether objective information on long-term prognosis would influence healthcare workers' opinion about using decompressive craniectomy as a lifesaving procedure for patients with severe TBI.

Method

A two-part structured interview was used to assess participants' willingness to perform decompressive craniectomy for three patients who had very severe TBI. Their opinion was assessed before and after learning the predicted and observed risks of an unfavourable long-term neurological outcome in various scenarios.

Results

Five hundred healthcare workers with a wide variety of clinical backgrounds participated. The participants were significantly more likely to recommend decompressive craniectomy for their patients than for themselves (mean difference in visual analogue scale [VAS] score −1.5, 95% confidence interval −1.6 to −1.3), especially when the patients' next of kin requested intervention. Preferences expressed for themselves were more similar to those for patients who had advance directives. The participants' preferences to perform the procedure for themselves and for their patients were both significantly reduced after learning the predicted risks of unfavourable outcomes, and the changes in attitude were consistent across specialties, amount of experience in caring for similar patients, religious backgrounds, and positions within the specialty.

Conclusions

Access to objective information on the risk of an unfavourable long-term outcome influenced healthcare workers' decisions to recommend decompressive craniectomy, considered a lifesaving procedure, for patients with very severe TBI.

5.

Background

Interferon-γ release assays (IGRAs) offer a new method for diagnosing latent tuberculosis infection (LTBI). Because there is no gold standard for the diagnosis of LTBI, the IGRA is compared with the Mantoux tuberculin skin test (TST), which yields discordant results at varying rates. We therefore assessed the extent to which discordant results can be explained by potential risk factors such as age, BCG vaccination and migration.

Methods and Findings

In this pooled analysis, two German studies evaluating the QuantiFERON-TB Gold In-Tube test (QFT) against the TST (RT23, Statens Serum Institut) were combined, and logistic regressions were calculated for potential risk factors for TST+/QFT− and TST−/QFT+ discordance. The analysis comprises 1,033 participants. Discordant results were observed in 15.4%, most of them TST+/QFT− combinations. BCG vaccination or migration explained 85.1% of all TST+/QFT− discordance; age explained 49.1% of all TST−/QFT+ discordance. Agreement between the two tests was 95.6% in German-born persons younger than 40 years who were not BCG-vaccinated.
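The risk-factor analysis can be sketched as a standard logistic regression on a discordance indicator. The snippet below simulates hypothetical participant data and fits such a model with statsmodels; variable names and effect sizes are assumptions for illustration only, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical participant-level data mirroring the analysis design
rng = np.random.default_rng(2)
n = 1033
df = pd.DataFrame({
    "bcg": rng.integers(0, 2, n),      # BCG-vaccinated
    "migrant": rng.integers(0, 2, n),  # migration background
    "age": rng.integers(18, 70, n),
})
# Simulated outcome: TST+/QFT- discordance driven by BCG and migration
logit = -3 + 1.5 * df.bcg + 1.0 * df.migrant
df["tst_pos_qft_neg"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["bcg", "migrant", "age"]].astype(float))
fit = sm.Logit(df["tst_pos_qft_neg"].astype(float), X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios for each risk factor
```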

Conclusions

After adjustment for potential risk factors for positive or negative TST results, agreement between QFT and TST is excellent, with little indication that the TST is more likely than the QFT to detect old infections. In LTBI surveillance programs in high-income, low-incidence countries like Germany, the QFT is especially suited to BCG-vaccinated persons and migrants because of its better specificity, and to older persons because of its superior sensitivity.

6.
7.

Introduction

The utility of T-cell based interferon-gamma release assays for the diagnosis of latent tuberculosis infection remains unclear in settings with a high burden of tuberculosis.

Objectives

To determine risk factors associated with positive QuantiFERON-TB Gold In-Tube (QFT-GIT) and tuberculin skin test (TST) results and the level of agreement between the tests; to explore the hypotheses that positivity in QFT-GIT is more related to recent infection and less affected by HIV than the TST.

Methods

Adult household contacts of tuberculosis patients were invited to participate in a cross-sectional study across 24 communities in Zambia and South Africa. HIV, QFT-GIT and TST tests were done. A questionnaire was used to assess risk factors.

Results

A total of 2,220 contacts were seen, of whom 1,803 had interpretable results for both tests; 1,147 (63.6%) were QFT-GIT positive while 725 (40.2%) were TST positive. Agreement between the tests was low (kappa = 0.24). Both QFT-GIT and TST positivity were associated with increasing age (adjusted OR [aOR] per 10-year increase: 1.15 [95% CI 1.06–1.25] for QFT-GIT and 1.10 [95% CI 1.01–1.20] for TST). HIV positivity was less common among those with positive results on QFT-GIT (aOR: 0.51; 95% CI: 0.39–0.67) and TST (aOR: 0.61; 95% CI: 0.46–0.82). Smear positivity of the index case was associated with QFT-GIT (aOR: 1.25; 95% CI: 0.90–1.74) and TST (aOR: 1.39; 95% CI: 0.98–1.98) results. We found little evidence in our data to support our hypotheses.

Conclusion

QFT-GIT may not be more sensitive than the TST to detect risk factors associated with tuberculous infection. We found little evidence to support the hypotheses that positivity in QFT-GIT is more related to recent infection and less affected by HIV than the TST.

8.

Background

Surgical site infection (SSI) surveillance is a key factor in developing strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rules).

Aim

To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure.

Patients and Methods

Data were collected anonymously by the French SSI active surveillance system in 2011. SSI diagnoses were made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62,280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were fitted, with parameters estimated by a simulation-based Markov chain Monte Carlo procedure.
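The between-ward heterogeneity reported in the Results as a median odds ratio follows the Larsen-Merlo formula MOR = exp(√(2σ²)·Φ⁻¹(0.75)), where σ² is the cluster-level variance of a random-intercept logistic model. A small sketch follows; the variance value is back-solved here to roughly match the reported MOR of 3.59 and is not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def median_odds_ratio(var_between):
    """Median odds ratio from the between-cluster variance of a
    random-intercept logistic model (Larsen & Merlo)."""
    return np.exp(np.sqrt(2 * var_between) * norm.ppf(0.75))

# Hypothetical ward-level variance from a two-level logistic model
sigma2_ward = 1.79
print("MOR:", round(median_odds_ratio(sigma2_ward), 2))  # ~3.58
```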

Results

A total of 623 SSIs were diagnosed (1.0%). The hospital level was discarded from the analysis as it did not contribute to the variability of SSI occurrence (p = 0.32). Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. Significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effect of follow-up duration varied between wards (p<10⁻⁹), with increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07). The final two-level model significantly improved discriminative accuracy compared with the single-level reference model (p<10⁻⁹), with an area under the ROC curve of 0.84.

Conclusion

This study sheds new light on the respective contributions of the patient, ward and hospital levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at the patient level (i.e., independently of patient case-mix).

9.

Background

Previous studies suggest that over-nutrition in early infancy may programme long-term susceptibility to insulin resistance.

Objective

To assess the association of breastfeeding, and of the quantity of infant formula and cows' milk consumed during infancy, with measures of insulin resistance in early adulthood.

Design

Long-term follow-up of the Barry Caerphilly Growth cohort, in which mothers and their offspring had originally been randomly assigned, between 1972 and 1974, to receive milk supplementation or not. Participants were the offspring, aged 23–27 years at follow-up (n = 679). Breastfeeding and formula/cows' milk intake was recorded prospectively by nurses. The main outcomes were insulin sensitivity (ISI0) and insulin secretion (CIR30).

Results

In total, 573 (84%) individuals had valid glucose and insulin results and complete covariate information. There was little evidence of an association of breastfeeding versus any formula/cows' milk feeding, or of increasing quartiles of formula/cows' milk consumption during infancy (<3 months), with any outcome measure in young adulthood. In fully adjusted models, the differences in outcomes between breastfeeding and formula/cows' milk feeding at 3 months were: fasting glucose, −0.07 mmol/l (95% CI: −0.19, 0.05); fasting insulin, 8.0% (−8.7, 27.6); ISI0, −6.1% (−11.3, 12.1); and CIR30, 3.8% (−19.0, 32.8). There was also little evidence that increasing intake of formula/cows' milk at 3 months was associated with fasting glucose (increase per quartile of intake: 0.00 mmol/l; −0.03, 0.03), fasting insulin (0.8%; −3.2, 5.1), ISI0 (−0.9%; −5.1, 3.5) or CIR30 (−2.6%; −8.4, 3.6).

Conclusions

We found no evidence that increasing consumption of formula/cows' milk in early infancy was associated with insulin resistance in young adulthood.

10.

Background

Human visceral leishmaniasis (VL), a potentially fatal disease, has emerged as an important opportunistic condition in HIV-infected patients. In immunocompromised patients, serological investigation is not considered an accurate diagnostic method for VL, and molecular techniques seem especially promising.

Objective

This work is a comprehensive systematic review and meta-analysis to evaluate the accuracy of serologic and molecular tests for VL diagnosis specifically in HIV-infected patients.

Methods

Two independent reviewers searched the PubMed and LILACS databases. The quality of studies was assessed using the QUADAS score. Sensitivity and specificity were pooled separately and compared with overall accuracy measures: the diagnostic odds ratio (DOR) and the symmetric summary receiver operating characteristic (sROC) curve.
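For a single study, the DOR is (TP×TN)/(FP×FN), with a confidence interval obtained on the log scale. A minimal sketch with hypothetical 2×2 counts follows; the meta-analytic pooling across studies and the sROC fit are more involved and not shown.

```python
import numpy as np

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR with a 95% CI from the log-odds standard error
    (0.5 added to each cell if any is zero)."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    dor = (tp * tn) / (fp * fn)
    se = np.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = np.exp(np.log(dor) - 1.96 * se)
    hi = np.exp(np.log(dor) + 1.96 * se)
    return dor, lo, hi

# Hypothetical 2x2 counts for a serological test in HIV/VL-coinfected patients
dor, lo, hi = diagnostic_odds_ratio(tp=45, fp=8, fn=15, tn=80)
print(f"DOR {dor:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```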

Results

Thirty-three studies recruiting 1,489 patients were included. The following tests were evaluated: the immunofluorescence antibody test (IFAT), enzyme-linked immunosorbent assay (ELISA), immunoblotting (Blot), the direct agglutination test (DAT), and polymerase chain reaction (PCR) in whole blood and bone marrow. Most studies were carried out in Europe. Serological tests varied widely in performance, but overall sensitivity was limited; IFAT sensitivity ranged from 11% to 82%. The DOR (95% confidence interval) was higher for DAT, 36.01 (9.95–130.29), and Blot, 27.51 (9.27–81.66), than for IFAT, 7.43 (3.08–17.91), and ELISA, 3.06 (0.71–13.10). PCR in whole blood had the highest DOR: 400.35 (58.47–2741.42). The accuracy of PCR based on the Q-point was 0.95 (95% CI 0.92–0.97), indicating good overall performance.

Conclusion

Based mainly on evidence from infection with Leishmania infantum chagasi, serological tests should not be used to rule out a diagnosis of VL among HIV-infected patients, but a positive test, even at low titers, has diagnostic value when combined with the clinical case definition. On the available evidence, tests based on DNA detection are highly sensitive and may contribute to a diagnostic workup.

11.

Background and Aims

Healthcare professionals are required to conduct quality control of endoscopy procedures, yet there is no standardised method for assessing quality. The aim of the present study was to validate the applicability of such a procedure in daily practice, giving physicians the ability to define areas for continuous quality improvement.

Methods

In ten endoscopy units in France, 200 patients per centre undergoing colonoscopy were enrolled in the study. An evaluation was carried out based on a prospectively developed checklist of 10 quality-control indicators, five dependent on and five independent of the colonoscopy procedure.

Results

Of the 2,000 procedures, 30% were performed at general hospitals, 20% at university hospitals, and 50% in private practices. Colonoscopies were carried out for a valid indication in 95.9% of cases (range 92.5–100). Colon preparation was insufficient in 3.7% (range 1–10.5). Colonoscopies were successful in 95.3% (range 81–99). The adenoma detection rate was 0.31 (range 0.17–0.45) in successful colonoscopies.

Conclusion

This tool for evaluating the quality of colonoscopy procedures in healthcare units is based on standard endoscopy and patient criteria. It is an easy and feasible procedure that can detect suboptimal practice and differences between endoscopy units, and it will enable individual units to assess the quality of their colonoscopy techniques.

12.

Background

The combination of transaminases (ALT), biopsy, HBeAg and viral load has classically defined the inactive carrier status in chronic hepatitis B. The use of FibroTest (FT) and ActiTest (AT), biomarkers of fibrosis and necroinflammatory activity, has previously been validated as an alternative to biopsy. We compared the 4-year prognostic value of combining FT-AT and viral load for a better definition of the inactive carrier status.

Methods and Findings

1,300 consecutive CHB patients who had been prospectively followed since 2001 were pre-included. The main endpoint was the absence of liver-related complications, transplantation or death. We used the manufacturers' definitions of normal FT (≤0.27) and normal AT (≤0.29) and three standard classes for viral load. The adjustment factors were age, sex, HBeAg, ethnic origin, alcohol consumption, HIV-Delta-HCV co-infections and treatment.

Results

1,074 patients with baseline FT-AT and viral load were included (41 years old; 47% African, 27% Asian, 26% Caucasian). At 4 years of follow-up, 50 complications occurred (survival without complications 93.4%) and 36 deaths occurred (survival 95.0%), including 27 related to HBV (survival 96.1%). The prognostic value of FT was higher than that of viral load or ALT when compared using areas under the ROC curves [0.89 (95% CI 0.84–0.93) vs 0.64 (0.55–0.71) vs 0.53 (0.46–0.60), all P<0.001], survival curves and a multivariate Cox model [regression coefficients 5.2 (3.5–6.9; P<0.001) vs 0.53 (0.15–0.92; P = 0.007) vs −0.001 (−0.003 to 0.000; P = 0.052)], respectively. A new definition of inactive carriers was proposed, based on an algorithm combining “zero” scores for FT-AT (F0 and A0) and viral load classes. This algorithm provides a 100% negative predictive value for the prediction of liver-related complications or death. Among the 275 patients meeting the classic definition of inactive carrier, 62 (23%) had fibrosis presumed by FT, and 3 died or had complications at 4 years.
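The ROC comparison above amounts to computing the area under the curve for each baseline marker against the 4-year outcome. A toy sketch with simulated data follows, using sklearn's roc_auc_score; a formal AUC comparison such as DeLong's test is beyond this snippet, and all numbers are invented.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical: 4-year liver-related complication indicator and two baseline markers
rng = np.random.default_rng(3)
n = 1000
event = rng.random(n) < 0.05
fibrotest = np.clip(rng.normal(0.3, 0.2, n) + 0.35 * event, 0, 1)  # prognostic marker
alt = rng.normal(40, 20, n) + 2 * event                            # weakly prognostic marker

print("AUC FibroTest:", round(roc_auc_score(event, fibrotest), 2))
print("AUC ALT:      ", round(roc_auc_score(event, alt), 2))
```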

Conclusion

In patients with chronic hepatitis B, a combination of FibroTest-ActiTest and viral load testing accurately defined the prognosis and the inactive carrier status.

13.

Background

Numerous observational studies suggest that preventable adverse drug reactions are a significant burden in healthcare, but no meta-analysis using a standardised definition for adverse drug reactions exists. The aim of the study was to estimate the percentage of patients with preventable adverse drug reactions and the preventability of adverse drug reactions in adult outpatients and inpatients.

Methods

Studies were identified by searching Cochrane, CINAHL, EMBASE, IPA, Medline, PsycINFO and Web of Science in September 2010, and by hand-searching the reference lists of identified papers. Original peer-reviewed research articles in English that defined adverse drug reactions according to the WHO's or a similar definition and assessed preventability were included. Disease- or treatment-specific studies were excluded. A meta-analysis of the percentage of patients with preventable adverse drug reactions and of the preventability of adverse drug reactions was conducted.
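Pooling percentages across heterogeneous studies is typically done with a random-effects model. The sketch below implements DerSimonian-Laird pooling of proportions on the logit scale under hypothetical study counts; it is one plausible reading of the meta-analytic step, not necessarily the authors' exact procedure.

```python
import numpy as np

def dl_pooled_proportion(events, totals):
    """Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                 # logit-transformed proportions
    v = 1 / events + 1 / (totals - events)  # approximate logit variances
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()      # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1 / (v + tau2)                   # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    back = lambda x: 1 / (1 + np.exp(-x))   # back-transform to a proportion
    return back(y_re), back(y_re - 1.96 * se), back(y_re + 1.96 * se)

# Hypothetical per-study counts: patients with preventable ADRs / admissions
p, lo, hi = dl_pooled_proportion(events=[40, 12, 90, 25],
                                 totals=[2000, 800, 4500, 1200])
print(f"pooled {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```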

Results

Data were analysed from 16 original studies on outpatients, covering 48,797 emergency visits or hospital admissions, and from 8 studies involving 24,128 inpatients. No studies in primary care were identified. Among adult outpatients, 2.0% (95% confidence interval (CI): 1.2–3.2%) had preventable adverse drug reactions, and 52% (95% CI: 42–62%) of adverse drug reactions were preventable. Among inpatients, 1.6% (95% CI: 0.1–51%) had preventable adverse drug reactions, and 45% (95% CI: 33–58%) of adverse drug reactions were preventable.

Conclusions

This meta-analysis corroborates that preventable adverse drug reactions are a significant burden to healthcare among adult outpatients. Among both outpatients and inpatients, approximately half of adverse drug reactions are preventable, demonstrating that further evidence on prevention strategies is required. The percentage of patients with preventable adverse drug reactions among inpatients and in primary care is largely unknown and should be investigated in future research.

14.

Background

Accurate, inexpensive point-of-care CD4+ T cell testing technologies are needed that can deliver CD4+ T cell results at lower-level health centers or community outreach voluntary counseling and testing sites. We evaluated a point-of-care CD4+ T cell counter, the Pima CD4 Test System, a portable, battery-operated bench-top instrument designed to use finger-stick blood samples, making it suitable for field use in conjunction with rapid HIV testing.

Methods

Duplicate measurements were performed on both capillary and venous samples using Pima CD4 analyzers and compared with the BD FACSCalibur (reference method). Mean bias was estimated by paired Student's t-test, and Bland-Altman plots were used to assess agreement.
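A Bland-Altman analysis reduces to the mean difference (bias) and its 95% limits of agreement, bias ± 1.96·SD of the paired differences. Here is a compact sketch on simulated paired CD4 counts; all values are hypothetical and the simulated bias pattern merely mimics the kind of count-dependent bias described in the Results.

```python
import numpy as np
from scipy import stats

# Hypothetical paired CD4 counts: point-of-care device vs flow-cytometry reference
rng = np.random.default_rng(4)
reference = rng.uniform(50, 1200, 206)
device = reference - (0.05 * reference + rng.normal(30, 60, 206))  # bias grows with count

diff = device - reference
bias = diff.mean()
loa_lo = bias - 1.96 * diff.std(ddof=1)  # lower 95% limit of agreement
loa_hi = bias + 1.96 * diff.std(ddof=1)  # upper 95% limit of agreement
t, p = stats.ttest_rel(device, reference)  # paired Student's t-test on the bias
print(f"mean bias {bias:.1f} cells/uL, LoA {loa_lo:.0f} to {loa_hi:.0f}, p={p:.3g}")
```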

Results

206 participants were enrolled, with a median CD4 count of 396 cells/µL (range 18–1500). The finger-stick Pima had a mean bias of −66.3 cells/µL (95% CI −83.4 to −49.2, P<0.001) compared with the FACSCalibur; the bias was smaller at lower CD4 counts (0–250 cells/µL), with a mean bias of −10.8 (95% CI −27.3 to +5.6, P = 0.198), and much greater at higher CD4 counts (>500 cells/µL), with a mean bias of −120.6 (95% CI −162.8 to −78.4, P<0.001). The sensitivity (95% CI) of the Pima CD4 analyzer was 96.3% (79.1–99.8%) at a <250 cells/µL cut-off, with a negative predictive value of 99.2% (95.1–99.9%).

Conclusions

The Pima CD4 finger-stick test is an easy-to-use, portable, relatively fast device for testing CD4+ T cell counts in the field. Negatively biased CD4 counts, especially at higher absolute numbers, will limit its utility for monitoring longitudinal immunologic response to ART. The high sensitivity and negative predictive value of the test make it an attractive option for field use to identify patients eligible for ART, potentially reducing delays in linkage to care and ART initiation.

15.

Background

While many North American healthcare institutions are switching from the tuberculin skin test (TST) to interferon-gamma release assays (IGRAs), there are relatively limited data on the association between occupational tuberculosis (TB) risk factors and test positivity and/or patterns of test discordance.

Methods

We recruited a cohort of Canadian health care workers (HCWs) in Montreal, performed both TST and QuantiFERON-TB Gold In-Tube (QFT) tests, and assessed risk factors and occupational exposure.

Results

In a cross-sectional analysis of baseline results, the prevalence of TST positivity using the 10 mm cut-off was 5.7% (22/388, 95% CI: 3.6–8.5%), while QFT positivity was 6.2% (24/388, 95% CI: 4.0–9.1%). Overall agreement between the tests was poor (kappa = 0.26), and 8.3% of HCWs had discordant results, most frequently TST−/QFT+ (17/388, 4.4%). TST positivity was associated with total years worked in health care, non-occupational exposure to TB, and BCG vaccination received after infancy or on multiple occasions. QFT positivity was associated with having worked as a HCW in a foreign country.

Conclusions

Our results suggest that LTBI prevalence, as measured by either the TST or the QFT, is low in this HCW population. Of concern are the high frequency of unexplained test discordance, namely TST−/QFT+ subjects, and the lack of any association between QFT positivity and clear-cut recent TB exposure. If these discordant results are indeed false positives, using the QFT in lieu of the TST in low-incidence settings could result in overtreatment of uninfected individuals.

16.

Introduction

In experimental models of West Nile virus (WNV) infection, animals develop chronic kidney infection with histopathological changes in the kidney up to 8 months post-infection. However, the long-term pathologic effects of acute infection in humans are largely unknown. The purpose of this study was to assess renal outcomes following WNV infection, specifically the development of chronic kidney disease (CKD).

Methods

In a cohort of 139 study participants with a previous diagnosis of WNV infection, we investigated the prevalence of CKD using the Kidney Disease Outcomes Quality Initiative (KDOQI) criteria, based on the Modification of Diet in Renal Disease (MDRD) formula and urinary abnormalities, and assessed various risk factors and biomarkers.
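The MDRD formula referenced here estimates GFR from serum creatinine, age, sex and race. The sketch below implements the common four-variable, IDMS-traceable form (coefficient 175); whether the study used this exact re-expression is an assumption, and the example patient values are invented.

```python
def egfr_mdrd(scr_mg_dl, age, female, black):
    """Four-variable MDRD study equation (IDMS-traceable form, coefficient 175);
    serum creatinine in mg/dL, result in mL/min/1.73 m^2."""
    egfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Hypothetical 57-year-old male participant with creatinine 1.3 mg/dL
print(round(egfr_mdrd(1.3, 57, female=False, black=False), 1))  # ~57, i.e. KDOQI Stage III
```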

Results

Study participants were primarily male (60%) and non-Hispanic white (86%), with a mean age of 57 years. Most (83%) were four to nine years post-infection at the time of the study. Based on the KDOQI definition, 40% of participants had evidence of CKD, 10% at Stage III or greater and 30% at Stage I–II. On urinary dipstick testing, 26% of patients had proteinuria and 23% had hematuria. Plasma NGAL levels were elevated in 14% of participants, and MCP-1 levels were increased in 12%. Over 1.5 years, the average change in eGFR was −3.71 mL/min/1.73 m². Only a history of neuroinvasive WNV disease was independently associated with CKD in multivariate analysis.

Discussion

We found a high prevalence of CKD after long-term follow-up in a cohort of participants previously infected with WNV. The majority of those with CKD were at Stage I–II, indicating early-stage renal disease. Traditional risk factors were not associated with the presence of CKD in this population. Clinicians should therefore regularly evaluate all patients with a history of WNV infection for evidence of CKD.

17.

Background

Human genetic factors such as blood group antigens may affect the severity of infectious diseases. The presence of specific ABO and Lewis blood group antigens has previously been shown to be associated with the risk of different enteric infections. The aim of this study was to determine the relationship of the Lewis blood group antigens with susceptibility to cholera, as well as with disease severity and immune responses to infection.

Methodology

We determined the Lewis and ABO blood groups of a cohort of patients infected with Vibrio cholerae O1, their household contacts, and healthy controls, and analyzed the risk of symptomatic infection, the severity of disease if infected, and the immune response following infection.

Principal Findings

We found that more individuals with cholera expressed the Le(a+b−) phenotype than did asymptomatic household contacts (OR 1.91, 95% CI 1.03–3.56) or healthy controls (OR 1.90, 95% CI 1.13–3.21), as has been seen previously for the risk of symptomatic ETEC infection. Le(a−b+) individuals were less susceptible to cholera and, if infected, required less intravenous fluid replacement in hospital, suggesting that this blood group may be associated with protection against V. cholerae O1. Individuals with the Le(a−b−) phenotype who had symptomatic cholera had a longer duration of diarrhea and required higher volumes of intravenous fluid replacement. In addition, individuals with the Le(a−b−) phenotype had diminished plasma IgA responses to V. cholerae O1 lipopolysaccharide on day 7 after infection compared with individuals of the other two Lewis phenotypes.

Conclusion

Individuals with Lewis blood type Le(a+b−) are more susceptible, and those with Le(a−b+) less susceptible, to V. cholerae O1-associated symptomatic disease. This histo-blood group antigen may be worth including when evaluating the risk of cholera in a population, as well as in vaccine efficacy studies, as is currently done for the ABO blood group antigens.

18.

Background

Breast cancer is a heterogeneous disease that impacts racial/ethnic groups differently. Differences in genetic composition, lifestyles, reproductive factors, or environmental exposures may contribute to the differential presentation of breast cancer among Hispanic women.

Materials and Methods

A population-based study was conducted in the city of Santiago de Compostela, Spain. A total of 645 women diagnosed with operable invasive breast cancer between 1992 and 2005 participated in the study. Data on demographics, breast cancer risk factors, and the clinico-pathological characteristics of the tumors were collected. Hormone receptor-negative tumors were compared with hormone receptor-positive tumors on their clinico-pathological characteristics and risk factor profiles.

Results

Among the 645 breast cancer patients, 78% had estrogen receptor-positive (ER+) or progesterone receptor-positive (PR+) tumors, and 22% had ER−&PR− tumors. Women with a family history of breast cancer were more likely to have ER−&PR− tumors than women without a family history (odds ratio, 1.43; 95% confidence interval, 0.91–2.26). This association was limited to cancers diagnosed before age 50 (odds ratio, 2.79; 95% confidence interval, 1.34–5.81).

Conclusions

An increased proportion of ER−&PR− breast cancer was observed among younger Spanish women with a family history of the disease.

19.

Background

Surgical site infections (SSI) are relatively frequent complications after colorectal surgery and are associated with substantial morbidity and mortality.

Objective

To implement a bundle of care and measure its effect on the SSI rate.

Design

Prospective, quasi-experimental cohort study.

Methods

Prospective surveillance for SSI after colorectal surgery was performed at the Amphia Hospital, Breda, from January 1, 2008 until January 1, 2012. As part of a national patient safety initiative, a bundle of care consisting of four elements covering the surgical process was introduced in 2009. The elements of the bundle were perioperative antibiotic prophylaxis, hair removal before surgery, perioperative normothermia and discipline in the operating room. Bundle compliance was measured every 3 months in a random sample of surgical procedures.

Results

Bundle compliance improved significantly, from an average of 10% in 2009 to 60% in 2011. In total, 1,537 colorectal procedures were performed during the study period, and 300 SSI occurred (19.5%). SSI were associated with a prolonged length of stay (mean additional stay 18 days) and significantly higher 6-month mortality (adjusted OR 2.71, 95% confidence interval 1.76–4.18). Logistic regression showed a significant decrease in the SSI rate that paralleled the introduction of the bundle; the adjusted odds of SSI were 36% lower in 2011 than in 2008.

Conclusion

Implementation of the bundle was associated with improved compliance over time and a 36% reduction in the SSI rate after adjustment for confounders, making the bundle an important tool for improving patient safety.
