Similar articles
1.

Background:

There has been much discussion about whether female feticide occurs in certain immigrant groups in Canada. We examined data on live births in Ontario and compared sex ratios in different groups according to the mother’s country or region of birth and parity.

Methods:

We completed a population-based study of 766 688 singleton live births between 2002 and 2007. We used birth records provided by Ontario Vital Statistics for live births in the province between 23 and 41 weeks’ gestation. We categorized each newborn according to the mother’s country or region of birth, namely Canada (n = 486 599), Europe (n = 58 505), South Korea (n = 3663), China (n = 23 818), Philippines (n = 15 367), rest of East Asia (n = 18 971), Pakistan (n = 18 018), India (n = 31 978), rest of South Asia (n = 20 695) and other countries (n = 89 074). We calculated male:female ratios and 95% confidence intervals (CIs) for all live births by these regions and stratified them by maternal parity at the time of delivery (0, 1, 2 or ≥ 3).
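For a given stratum, the male:female ratio and its 95% CI can be derived from the two birth counts. Below is a minimal sketch of that calculation, using a log-transform CI; the counts shown are illustrative, not the study's data, and the function name is mine:

```python
import math

def sex_ratio_ci(males: int, females: int, z: float = 1.96):
    """Male:female ratio with an approximate 95% CI via the log transform.

    The log of a ratio of two independent Poisson counts has
    approximate standard error sqrt(1/males + 1/females).
    """
    ratio = males / females
    se_log = math.sqrt(1 / males + 1 / females)
    lo = ratio * math.exp(-z * se_log)
    hi = ratio * math.exp(z * se_log)
    return ratio, (lo, hi)

# Illustrative counts only (not taken from the paper):
print(sex_ratio_ci(1200, 1000))  # -> (1.2, (~1.10, ~1.30))
```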

Results:

Among infants of nulliparous women, the male:female ratio was about 1.05 overall. As parity increased, the ratio remained unchanged among infants of Canadian-born women. In contrast, the male:female ratio was significantly higher among infants of primiparous women born in South Korea (1.20, 95% CI 1.09–1.34) and India (1.11, 95% CI 1.07–1.15) than among infants of Canadian-born primiparous women. Among multiparous women, those born in India were significantly more likely than Canadian-born women to have a male infant (parity 2, ratio 1.36, 95% CI 1.27–1.46; parity ≥ 3, ratio 1.25, 95% CI 1.09–1.43).

Interpretation:

Our study of male:female ratios in Ontario showed that multiparous women born in India were significantly more likely than multiparous women born in Canada to have a male infant.

Although there are some myths about correctly guessing the sex of a fetus,1 modern-day prenatal ultrasound enables the identification of whether a fetus is a boy or girl with 99% accuracy.2 There has been much discussion about whether female fetuses are at higher risk of pregnancy termination than male fetuses in certain ethnic groups. In India, a study of data from the National Family Health Survey for 265 516 births showed a sharp increase in the male:female ratio among second-order births when the firstborn was a girl, and no significant increase when the firstborn was a boy.3 The authors attributed this trend to the practice of selective abortion of female fetuses. A recent editorial4 and news item5 in CMAJ suggested that female feticide may also be occurring in Canada.6 Rather than using live-birth statistics, the Canadian study cited in CMAJ used data from the 2001 and 2006 Canada Census long-form questionnaires, which were completed by 20% of the population and relied on self-reporting of additional information, including the number of family members in a given household.

We used contemporary data on live births in Ontario, Canada’s most populous and ethnically diverse province, and compared sex ratios among infants of Canadian-born women with sex ratios in different immigrant groups. We focused on immigrant groups from countries purported to have the highest rates of preference for a son following the birth of one or more daughters.3,6 We determined whether the male:female ratio increased with increasing parity in certain immigrant groups, as has been previously suggested.3,6

2.

Background:

Previous studies of differences in mental health care associated with children’s sociodemographic status have focused on access to community care. We examined differences associated with visits to the emergency department.

Methods:

We conducted a 6-year population-based cohort analysis using administrative databases of visits (n = 30 656) by children aged less than 18 years (n = 20 956) in Alberta. We measured differences in the number of visits by socioeconomic and First Nations status using directly standardized rates. We examined time to return to the emergency department using a Cox regression model, and we evaluated time to follow-up with a physician by physician type using a competing risks model.
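Direct standardization weights each stratum-specific visit rate by a common standard population, so that groups with different age and sex mixes can be compared. A minimal sketch of the calculation, with made-up strata and counts (not the Alberta data):

```python
def directly_standardized_rate(events, person_counts, std_pop):
    """Direct standardization: weight stratum rates by a standard population.

    events, person_counts and std_pop are parallel lists indexed by stratum
    (e.g., age group x sex). Returns the rate per 100 000.
    """
    total_std = sum(std_pop)
    weighted = sum((e / n) * w for e, n, w in zip(events, person_counts, std_pop))
    return weighted / total_std * 100_000

# Hypothetical three age strata (all counts illustrative only):
events = [40, 90, 150]
population = [50_000, 40_000, 30_000]
standard = [60_000, 50_000, 40_000]
print(directly_standardized_rate(events, population, standard))
```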

Results:

First Nations children aged 15–17 years had the highest rate of visits for girls (7047 per 100 000 children) and boys (5787 per 100 000 children); children in the same age group from families not receiving government subsidy had the lowest rates (girls: 2155 per 100 000 children; boys: 1323 per 100 000 children). First Nations children (hazard ratio [HR] 1.64; 95% confidence interval [CI] 1.30–2.05), and children from families receiving government subsidies (HR 1.60, 95% CI 1.30–1.98) had a higher risk of return to an emergency department for mental health care than other children. The longest median time to follow-up with a physician was among First Nations children (79 d; 95% CI 60–91 d); this status predicted longer time to a psychiatrist (HR 0.47, 95% CI 0.32–0.70). Age, sex, diagnosis and clinical acuity also explained post-crisis use of health care.

Interpretation:

More visits to the emergency department for mental health crises were made by First Nations children and children from families receiving a subsidy. Sociodemographics predicted risk of return to the emergency department and follow-up care with a physician.

Emergency departments are a critical access point for mental health care for children who have been unable to receive care elsewhere or are in crisis.1 Care provided in an emergency department can stabilize acute problems and facilitate urgent follow-up for symptom management and family support.1,2

Race, ethnic background and socioeconomic status have been linked to crisis-oriented care patterns among American children.3,4 Minority children are less likely than white children to have received mental health treatment before an emergency department visit,3,4 and uninsured children are less likely to receive an urgent mental health evaluation when needed.4 Other studies, however, have shown no relation between sociodemographic status and mental health care,5,6 and it may be that different health system characteristics (e.g., pay-for-service, insurance coverage, publicly funded care) interact with sociodemographic status to influence how mental health resources are used. Canadian studies are largely absent in this discussion, despite a known relation between lower income and poorer mental health status,7 nationwide documentation of disparities faced by Aboriginal children,8–10 and government-commissioned reviews that highlight deficits in universal access to mental health care.11

We undertook the current study to examine whether sociodemographic differences exist in the rates of visits to emergency departments for mental health care and in the use of post-crisis health care services for children in Alberta. Knowledge of whether differences exist for children with mental health needs may help identify children who could benefit from earlier intervention to prevent illness destabilization and children who may be disadvantaged in the period after the emergency department visit. We hypothesized that higher rates of emergency department use, lower rates of follow-up physician visits after the initial emergency department visit, and a longer time to physician follow-up would be observed among First Nations children and children from families receiving government social assistance.

3.

Background:

Uncircumcised boys are at higher risk for urinary tract infections than circumcised boys. Whether this risk varies with the visibility of the urethral meatus is not known. Our aim was to determine whether there is a hierarchy of risk among uncircumcised boys whose urethral meatuses are visible to differing degrees.

Methods:

We conducted a prospective cross-sectional study in one pediatric emergency department. We screened 440 circumcised and uncircumcised boys. Of these, 393 boys who were not toilet trained and for whom the treating physician had requested a catheter urine culture were included in our analysis. At the time of catheter insertion, a nurse characterized the visibility of the urethral meatus (phimosis) using a 3-point scale (completely visible, partially visible or nonvisible). Our primary outcome was urinary tract infection, and our primary exposure variable was the degree of phimosis: completely visible versus partially or nonvisible urethral meatus.

Results:

Cultures grew from urine samples from 30.0% of uncircumcised boys with a completely visible meatus, and from 23.8% of those with a partially or nonvisible meatus (p = 0.4). The unadjusted odds ratio (OR) for culture growth was 0.73 (95% confidence interval [CI] 0.35–1.52), and the adjusted OR was 0.41 (95% CI 0.17–0.95). Of the boys who were circumcised, 4.8% had urinary tract infections, which was significantly lower than the rate among uncircumcised boys with a completely visible urethral meatus (unadjusted OR 0.12 [95% CI 0.04–0.39], adjusted OR 0.07 [95% CI 0.02–0.26]).

Interpretation:

We did not see variation in the risk of urinary tract infection with the visibility of the urethral meatus among uncircumcised boys. Compared with circumcised boys, we saw a higher risk of urinary tract infection in uncircumcised boys, irrespective of urethral visibility.

Urinary tract infections are one of the most common serious bacterial infections in young children.1–6 Prompt diagnosis is important, because children with urinary tract infection are at risk for bacteremia6 and renal scarring.1,7 Uncircumcised boys have a much higher risk of urinary tract infection than circumcised boys,1,3,4,6,8–12 likely as a result of heavier colonization under the foreskin with pathogenic bacteria, which leads to ascending infections.13,14 The American Academy of Pediatrics recently suggested that circumcision status be used to select which boys should be evaluated for urinary tract infection.1 However, whether all uncircumcised boys are at equal risk for infection, or whether the risk varies with the visibility of the urethral opening, is not known. It has been suggested that a subset of uncircumcised boys with a poorly visible urethral opening are at increased risk of urinary tract infection,15–17 leading some experts to consider giving children with tight foreskins topical cortisone or circumcision to prevent urinary tract infections.13,18–21

We designed a study to challenge the opinion that all uncircumcised boys are at increased risk for urinary tract infections. We hypothesized a hierarchy of risk among uncircumcised boys depending on the visibility of the urethral meatus, with those with a partially or nonvisible meatus at highest risk, and those with a completely visible meatus having a level of risk similar to that of boys who have been circumcised. Our primary aim was to compare the proportions of urinary tract infections among uncircumcised boys with a completely visible meatus with those with a partially or nonvisible meatus.

4.

Background:

There have been several published reports of inflammatory ocular adverse events, mainly uveitis and scleritis, among patients taking oral bisphosphonates. We examined the risk of these adverse events in a pharmacoepidemiologic cohort study.

Methods:

We conducted a retrospective cohort study involving residents of British Columbia who had visited an ophthalmologist from 2000 to 2007. Within the cohort, we identified all people who were first-time users of oral bisphosphonates and who were followed to the first inflammatory ocular adverse event, death, termination of insurance or the end of the study period. We defined an inflammatory ocular adverse event as scleritis or uveitis. We used a Cox proportional hazard model to determine the adjusted rate ratios. As a sensitivity analysis, we performed a propensity-score–adjusted analysis.

Results:

The cohort comprised 934 147 people, including 10 827 first-time users of bisphosphonates and 923 320 nonusers. The incidence rate among first-time users was 29/10 000 person-years for uveitis and 63/10 000 person-years for scleritis. In contrast, the incidence among people who did not use oral bisphosphonates was 20/10 000 person-years for uveitis and 36/10 000 for scleritis (number needed to harm: 1100 and 370, respectively). First-time users had an elevated risk of uveitis (adjusted relative risk [RR] 1.45, 95% confidence interval [CI] 1.25–1.68) and scleritis (adjusted RR 1.51, 95% CI 1.34–1.68). The rate ratio for the propensity-score–adjusted analysis did not change the results (uveitis: RR 1.50, 95% CI 1.29–1.73; scleritis: RR 1.53, 95% CI 1.39–1.70).
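The numbers needed to harm quoted above follow directly from the absolute incidence-rate differences (1 divided by the rate difference); a worked check:

```python
def nnh(rate_exposed: float, rate_unexposed: float, per: float = 10_000):
    """Number needed to harm = 1 / absolute rate difference.

    Rates are events per `per` person-years, so the result is
    person-years of exposure per one additional event.
    """
    return per / (rate_exposed - rate_unexposed)

print(round(nnh(29, 20)))  # uveitis: ~1111, reported as ~1100
print(round(nnh(63, 36)))  # scleritis: ~370, as reported
```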

Interpretation:

People using oral bisphosphonates for the first time may be at a higher risk of scleritis and uveitis compared to people with no bisphosphonate use. Patients taking bisphosphonates must be familiar with the signs and symptoms of these conditions, so that they can immediately seek assessment by an ophthalmologist.

Oral bisphosphonates are the most frequently prescribed class of medications for the prevention of osteoporosis. The literature about the safety of bisphosphonates has focused mainly on long-term adverse events, including atypical fractures,1 atrial fibrillation,2 and esophageal and colon cancer.3

Uveitis and scleritis are ocular inflammatory diseases that are associated with major morbidity. Anterior uveitis is the most common type of uveitis, with an estimated 11.4–100.0 cases/100 000 person-years.4,5 Both diseases require immediate treatment to prevent further complications, which may include cataracts, glaucoma, macular edema and scleral perforation. Numerous case reports and case series have described an association between the use of oral bisphosphonates and anterior uveitis6–8 and scleritis.8,9 In most reported cases, severe eye pain was reported within days of taking an oral bisphosphonate, and the symptom resolved after stopping the agent.6,9 Only one large epidemiologic study has examined the association between the use of bisphosphonates and ocular inflammatory diseases.10 This study did not find an association, but it was limited by a small number of events and a lack of power. Thus, the association between uveitis or scleritis and the use of oral bisphosphonates is not fully known. Given that early intervention may prevent complications, we performed a pharmacoepidemiologic study to assess the true risk of these potentially serious conditions.

5.

Background:

Although Aboriginal adults have a higher risk of end-stage renal disease than non-Aboriginal adults, the incidence and causes of end-stage renal disease among Aboriginal children and young adults are not well described.

Methods:

We calculated age- and sex-specific incidences of end-stage renal disease among Aboriginal people less than 22 years of age using data from a national organ failure registry. Incidence rate ratios were used to compare rates between Aboriginal and white Canadians. To contrast causes of end-stage renal disease by ethnicity and age, we calculated the odds of congenital diseases, glomerulonephritis and diabetes for Aboriginal people and compared them with those for white people in the following age strata: 0 to less than 22 years, 22 to less than 40 years, 40 to less than 60 years and older than 60 years.
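An incidence rate ratio compares two event rates, with a CI obtained on the log scale under the usual Poisson assumption. A minimal sketch (the counts and person-years are illustrative, not registry data):

```python
import math

def rate_ratio_ci(a: int, py_a: float, b: int, py_b: float, z: float = 1.96):
    """Incidence rate ratio (a/py_a) / (b/py_b) with a log-scale 95% CI.

    Assumes Poisson event counts, so SE(log IRR) ~= sqrt(1/a + 1/b).
    """
    irr = (a / py_a) / (b / py_b)
    se = math.sqrt(1 / a + 1 / b)
    return irr, (irr * math.exp(-z * se), irr * math.exp(z * se))

# Illustrative counts and person-years only:
print(rate_ratio_ci(60, 200_000, 110, 660_000))  # IRR 1.8, CI ~1.31-2.47
```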

Results:

Incidence rate ratios of end-stage renal disease for Aboriginal children and young adults (age < 22 yr, v. white people) were 1.82 (95% confidence interval [CI] 1.40–2.38) for boys and 3.24 (95% CI 2.60–4.05) for girls. Compared with white people, congenital diseases were less common among Aboriginal people aged less than 22 years (odds ratio [OR] 0.56, 95% CI 0.36–0.86), and glomerulonephritis was more common (OR 2.18, 95% CI 1.55–3.07). An excess of glomerulonephritis, but not diabetes, was seen among Aboriginal people aged 22 to less than 40 years. The converse was true (higher risk of diabetes, lower risk of glomerulonephritis) among Aboriginal people aged 40 years and older.

Interpretation:

The incidence of end-stage renal disease is higher among Aboriginal children and young adults than among white children and young adults. This higher incidence may be driven by an increased risk of glomerulonephritis in this population.

Compared with white Canadians, Aboriginal Canadians have a higher prevalence of end-stage renal disease,1,2 which is generally attributed to their increased risk for diabetes. However, there has been limited investigation of the incidence and causes of end-stage renal disease among Aboriginal children and young adults. Because most incident cases of diabetes are identified in middle-aged adults, an excess risk of end-stage renal disease in young people would not be expected if the high risk of diabetes is responsible for higher overall rates of end-stage renal disease among Aboriginal people. About 12.3% of children with end-stage renal disease in Canada are Aboriginal,3 but only 6.1% of Canadian children (age < 19 yr) are Aboriginal.4,5

A few reports suggest that nondiabetic renal disease is common among Aboriginal populations in North America.2,6–8 Aboriginal adults in Saskatchewan are twice as likely as white adults to have end-stage renal disease caused by glomerulonephritis,7,8 and an increased rate of mesangial proliferative glomerulonephritis has been reported among Aboriginal people in the United States.6,9 These studies suggest that diabetes may be a comorbid condition rather than the sole cause of kidney failure among some Aboriginal people in whom diabetic nephropathy is diagnosed using clinical features alone.

We estimated incidence rates of end-stage renal disease among Aboriginal children and young adults in Canada and compared them with the rates seen among white children and young adults. In addition, we compared relative odds of congenital renal disease, glomerulonephritis and diabetic nephropathy in Aboriginal people with the relative odds of these conditions in white people.

6.

Background:

Patients with type 2 diabetes have a 40% increased risk of bladder cancer. Thiazolidinediones, especially pioglitazone, may increase the risk. We conducted a systematic review and meta-analysis to evaluate the risk of bladder cancer among adults with type 2 diabetes taking thiazolidinediones.

Methods:

We searched key biomedical databases (including MEDLINE, Embase and Scopus) and sources of grey literature from inception through March 2012 for published and unpublished studies, without language restrictions. We included randomized controlled trials (RCTs), cohort studies and case–control studies that reported incident bladder cancer among people with type 2 diabetes who ever (v. never) were exposed to pioglitazone (main outcome), rosiglitazone or any thiazolidinedione.

Results:

Of the 1787 studies identified, we selected 4 RCTs, 5 cohort studies and 1 case–control study. The total number of patients was 2 657 365, of whom 3643 had newly diagnosed bladder cancer, for an overall incidence of 53.1 per 100 000 person-years. The one RCT that reported on pioglitazone use found no significant association with bladder cancer (risk ratio [RR] 2.36, 95% confidence interval [CI] 0.91–6.13). The cohort studies of thiazolidinediones (pooled RR 1.15, 95% CI 1.04–1.26; I2 = 0%) and of pioglitazone specifically (pooled RR 1.22, 95% CI 1.07–1.39; I2 = 0%) showed significant associations with bladder cancer. No significant association with bladder cancer was observed in the two RCTs that evaluated rosiglitazone use (pooled RR 0.87, 95% CI 0.34–2.23; I2 = 0%).
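Pooled risk ratios of this kind are typically computed with inverse-variance weights plus a between-study variance term; when I2 is 0%, the random-effects weights reduce to the fixed-effect ones. A minimal DerSimonian–Laird sketch over log risk ratios (the inputs are made up, not the included studies' estimates):

```python
import math

def dersimonian_laird(log_rr, se):
    """Random-effects pooling of log risk ratios (DerSimonian-Laird).

    Returns the pooled RR, its 95% CI, and I^2 (the percentage of
    variability attributable to between-study heterogeneity).
    """
    w = [1 / s**2 for s in se]                        # fixed-effect weights
    mean_fe = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, log_rr))
    df = len(log_rr) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1 / (s**2 + tau2) for s in se]            # random-effects weights
    mean_re = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(mean_re - 1.96 * se_re), math.exp(mean_re + 1.96 * se_re))
    return math.exp(mean_re), ci, i2

# Three hypothetical cohort studies (log RRs and SEs are made up):
print(dersimonian_laird([0.10, 0.18, 0.14], [0.08, 0.10, 0.12]))
```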

Interpretation:

The limited evidence available supports the hypothesis that thiazolidinediones, particularly pioglitazone, are associated with an increased risk of bladder cancer among adults with type 2 diabetes.

People with type 2 diabetes are at increased risk of several types of cancer, including a 40% increased risk of bladder cancer, compared with those without diabetes.1,2 The strong association with bladder cancer is hypothesized to be a result of hyperinsulinemia, whereby elevated insulin levels in type 2 diabetes stimulate insulin receptors on neoplastic cells, promoting cancer growth and division.1,3–5 Additional risk factors for bladder cancer include increased age, male sex, smoking, occupational and environmental exposures and urinary tract disease.6 Exogenous insulin and other glucose-lowering medications, such as sulfonylureas, metformin and thiazolidinediones, may further modify the risk of bladder cancer.1

Data from the placebo-controlled PROactive trial of pioglitazone (PROspective pioglitAzone Clinical Trial in macroVascular Events) suggested a higher incidence of bladder cancer among pioglitazone users than among controls.7 Subsequent randomized controlled trials (RCTs) and observational studies have reported conflicting results for pioglitazone, with various studies reporting a significant increase,8,9 a nonsignificant increase10 and even a decreased risk11 of bladder cancer.

To test the hypothesis that pioglitazone use is associated with an increased risk of bladder cancer, we conducted a systematic review and meta-analysis of RCTs and observational studies reporting bladder cancer among adults with type 2 diabetes taking pioglitazone. To clarify the possibility of a drug-class effect, we also examined data for all thiazolidinediones and for rosiglitazone alone.

7.

Background:

Several biomarkers of metabolic acidosis, including lower plasma bicarbonate and higher anion gap, have been associated with greater insulin resistance in cross-sectional studies. We sought to examine whether lower plasma bicarbonate is associated with the development of type 2 diabetes mellitus in a prospective study.

Methods:

We conducted a prospective, nested case–control study within the Nurses’ Health Study. Plasma bicarbonate was measured in 630 women who did not have type 2 diabetes mellitus at the time of blood draw in 1989–1990 but developed type 2 diabetes mellitus during 10 years of follow-up. Controls were matched according to age, ethnic background, fasting status and date of blood draw. We used logistic regression to calculate odds ratios (ORs) for diabetes by category of baseline plasma bicarbonate.

Results:

After adjustment for matching factors, body mass index, plasma creatinine level and history of hypertension, women with plasma bicarbonate above the median level had lower odds of diabetes (OR 0.76, 95% confidence interval [CI] 0.60–0.96) compared with women below the median level. Those in the second (OR 0.92, 95% CI 0.67–1.25), third (OR 0.70, 95% CI 0.51–0.97) and fourth (OR 0.75, 95% CI 0.54–1.05) quartiles of plasma bicarbonate had lower odds of diabetes compared with those in the lowest quartile (p for trend = 0.04). Further adjustment for C-reactive protein did not alter these findings.

Interpretation:

Higher plasma bicarbonate levels were associated with lower odds of incident type 2 diabetes mellitus among women in the Nurses’ Health Study. Further studies are needed to confirm this finding in different populations and to elucidate the mechanism for this relation.

Resistance to insulin is central to the pathogenesis of type 2 diabetes mellitus.1 Several mechanisms may lead to insulin resistance and thereby contribute to the development of type 2 diabetes mellitus, including altered fatty acid metabolism, mitochondrial dysfunction and systemic inflammation.2 Metabolic acidosis may also contribute to insulin resistance. Human studies using the euglycemic and hyperglycemic clamp techniques have shown that mild metabolic acidosis induced by the administration of ammonium chloride results in reduced tissue insulin sensitivity.3 Subsequent studies in rat models have suggested that metabolic acidosis decreases the binding of insulin to its receptors.4,5 Finally, metabolic acidosis may also increase cortisol production,6 which in turn is implicated in the development of insulin resistance.7

Recent epidemiologic studies have shown an association between clinical markers of metabolic acidosis and greater insulin resistance or prevalence of type 2 diabetes mellitus. In the National Health and Nutrition Examination Survey, both lower serum bicarbonate and higher anion gap (even within ranges considered normal) were associated with increased insulin resistance among adults without diabetes.8 In addition, higher levels of serum lactate, a small component of the anion gap, were associated with higher odds of prevalent type 2 diabetes mellitus in the Atherosclerosis Risk in Communities study9 and with higher odds of incident type 2 diabetes mellitus in a retrospective cohort study of the risk factors for diabetes in Swedish men.10 Other biomarkers associated with metabolic acidosis, including higher levels of serum ketones,11 lower urinary citrate excretion12 and low urine pH,13 have been associated in cross-sectional studies with either insulin resistance or the prevalence of type 2 diabetes mellitus. However, it is unclear whether these associations reflect cause or consequence. We sought to address this question by prospectively examining the association between plasma bicarbonate and subsequent development of type 2 diabetes mellitus in a nested case–control study within the Nurses’ Health Study.

8.

Background:

Migraine is a common, disabling headache disorder that leads to lost quality of life and productivity. We investigated whether a proactive approach to patients with migraine, including an educational intervention for general practitioners, led to a decrease in headache and associated costs.

Methods:

We conducted a pragmatic cluster randomized controlled trial in which general practices were randomized to receive the intervention or to serve as controls. Participating patients had been prescribed two or more doses of triptan per month. General practitioners in the intervention group received training on treating migraine and invited participating patients for a consultation to evaluate the therapy they were receiving. Physicians in the control group continued with usual care. Our primary outcome was patients’ scores on the Headache Impact Test (HIT-6) at six months. We considered a reduction in score of 2.3 points to be clinically relevant. We used the Kessler Psychological Distress Scale (K10) questionnaire to determine whether psychological distress was a possible effect modifier. We also examined the intervention’s cost-effectiveness.

Results:

We enrolled 490 patients in the trial (233 to the intervention group and 257 to the control group). Of the 233 patients in the intervention group, 192 (82.4%) attended the consultation to evaluate the treatment of their migraines. Of these patients, 43 (22.3%) started prophylaxis. The difference in change in score on the HIT-6 between the intervention and control groups was 0.81 (p = 0.07, calculated from modelling using generalized estimating equations). For patients with low levels of psychological distress (baseline score on the K10 ≤ 20) this change was −1.51 (p = 0.008), compared with a change of 0.16 (p = 0.494) for patients with greater psychological distress. For patients who were not using prophylaxis at baseline and had two or more migraines per month, the mean HIT-6 score improved by 1.37 points compared with controls (p = 0.04). We did not find the intervention to be cost-effective.
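The generalized-estimating-equation analysis mentioned above accounts for clustering of patients within practices. A minimal statsmodels sketch of such a model; the file and column names are assumptions, not the trial's actual code:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed layout: one row per patient with
#   hit6_change  - change in HIT-6 score at 6 months
#   group        - 1 = intervention practice, 0 = control practice
#   practice_id  - cluster identifier
df = pd.read_csv("migraine_trial.csv")  # hypothetical file

# An exchangeable working correlation handles within-practice clustering.
model = smf.gee(
    "hit6_change ~ group",
    groups="practice_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```

GEE with robust standard errors is a common choice for cluster-randomized designs because it yields population-average effects without requiring the correlation structure to be exactly right.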

Interpretation:

An educational intervention for general practitioners and a proactive approach to patients with migraine did not result in a clinically relevant improvement of symptoms. Psychological distress was an important confounder of success. (Current Controlled Trials registration no. ISRCTN72421511.)

Migraine is a common, disabling headache disorder that results in lost quality of life and productivity, both during and between attacks.1–8 Many patients with migraine suffer unnecessarily because they are not using their medications appropriately, or they are unaware of the possibility of prophylactic treatment. In the Netherlands, 3% of patients who take triptans consume 12 or more doses of the drug each month.9 These patients account for almost half of the costs associated with triptan use.10 In addition, although more than 25% of patients with migraine have two or more attacks each month, making them eligible for preventive treatment, only 8%–12% of patients use prophylaxis.2,3,11–13 More than half of the patients with migraine in Dutch primary care who have an indication for prophylaxis have not discussed that option with their general practitioner.13

We investigated whether a proactive approach to identifying patients with migraine who are receiving suboptimal treatment (i.e., inviting them to a consultation to evaluate their current treatment regimen and advising them about the options available for treating their migraine) could increase the use of preventive treatment and reduce the overuse of triptans, thereby reducing headache recurrence and associated costs. Our intervention involved educational sessions for general practitioners. Earlier studies aimed at reducing the overuse of other medications in primary care, such as benzodiazepines and acid-suppressive drugs, showed that a proactive intervention led to a reduction in the use of medications.14,15

Because most patients with migraine in the Netherlands are treated by their general practitioner, we evaluated the costs and effects of a proactive approach to migraine in primary care. We included patients who had two or more attacks per month, because improvement could be reasonably expected in this group.

9.

Background:

Results of randomized controlled trials evaluating zinc for the treatment of the common cold are conflicting. We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for such use.

Methods:

We searched electronic databases and other sources for studies published through to Sept. 30, 2011. We included all randomized controlled trials comparing orally administered zinc with placebo or no treatment. Assessment for study inclusion, data extraction and risk-of-bias analyses were performed in duplicate. We conducted meta-analyses using a random-effects model.

Results:

We included 17 trials involving a total of 2121 participants. Compared with patients given placebo, those receiving zinc had a shorter duration of cold symptoms (mean difference −1.65 days, 95% confidence interval [CI] −2.50 to −0.81); however, heterogeneity was high (I2 = 95%). Zinc shortened the duration of cold symptoms in adults (mean difference −2.63 days, 95% CI −3.69 to −1.58), but no significant effect was seen among children (mean difference −0.26 days, 95% CI −0.78 to 0.25). Heterogeneity remained high in all subgroup analyses, including by age, dose of ionized zinc and zinc formulation. Any adverse event (risk ratio [RR] 1.24, 95% CI 1.05 to 1.46), bad taste (RR 1.65, 95% CI 1.27 to 2.16) and nausea (RR 1.64, 95% CI 1.19 to 2.27) were more common in the zinc group than in the placebo group.

Interpretation:

The results of our meta-analysis showed that oral zinc formulations may shorten the duration of symptoms of the common cold. However, large high-quality trials are needed before definitive recommendations for clinical practice can be made. Adverse effects were common and should be the point of future study, because a good safety and tolerance profile is essential when treating this generally mild illness.

The common cold is a frequent respiratory infection experienced 2 to 4 times a year by adults and up to 8 to 10 times a year by children.1–3 Colds can be caused by several viruses, of which rhinoviruses are the most common.4 Despite their benign nature, colds can lead to substantial morbidity, absenteeism and lost productivity.5–7

Zinc, which can inhibit rhinovirus replication and has activity against other respiratory viruses such as respiratory syncytial virus,8 is a potential treatment for the common cold. The exact mechanism of zinc’s activity on viruses remains uncertain. Zinc may also reduce the severity of cold symptoms by acting as an astringent on the trigeminal nerve.9,10

A recent meta-analysis of randomized controlled trials concluded that zinc was effective at reducing the duration and severity of common cold symptoms.11 However, there was considerable heterogeneity reported for the primary outcome (I2 = 93%), and subgroup analyses to explore between-study variations were not performed. The efficacy of zinc therefore remains uncertain, because it is unknown whether the variability among studies was due to methodologic diversity (i.e., risk of bias and therefore uncertainty in zinc’s efficacy) or differences in study populations or interventions (i.e., zinc dose and formulation).

We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for the treatment of the common cold. We sought to improve upon previous systematic reviews11–17 by exploring the heterogeneity with subgroups identified a priori, identifying new trials by instituting a broader search and obtaining additional data from authors.

10.

Background:

High-sensitivity troponin assays are now available for clinical use. We investigated whether early measurement with such an assay is superior to a conventional assay in the evaluation of acute coronary syndromes.

Methods:

Patients presenting to an emergency department with chest pain who did not have ST-segment elevation were prospectively recruited from November 2007 to December 2010. Patients underwent serial testing with a conventional cardiac troponin I assay. Samples were also obtained at presentation and two hours later for measurement of troponin T levels using a high-sensitivity assay. The primary outcome was diagnosis of myocardial infarction on admission; secondary outcomes were death, myocardial infarction and heart failure at one year.

Results:

Of the 939 patients enrolled in the study, 205 (21.8%) had myocardial infarction. By two hours after presentation, the high-sensitivity troponin T assay at the cut-off point of the 99th percentile of the general population (14 ng/L) had a sensitivity of 92.2% (95% confidence interval [CI] 88.1%–95.0%) and a specificity of 79.7% (95% CI 78.6%–80.5%) for the diagnosis of non–ST-segment elevation myocardial infarction. The sensitivity of the assay at presentation was 100% among patients who presented four to six hours after symptom onset. By one year, the high-sensitivity troponin T assay was found to be superior to the conventional assay in predicting death (hazard ratio [HR] 5.4, 95% CI 2.7–10.7) and heart failure (HR 27.8, 95% CI 6.6–116.4), whereas the conventional assay was superior in predicting nonfatal myocardial infarction (HR 4.0, 95% CI 2.4–6.7).
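Sensitivity and specificity at a fixed cut-off come straight from the 2×2 table of assay result against final diagnosis. A minimal sketch; the counts are back-calculated to approximate the figures reported above and are not the study's actual table:

```python
import math

def sens_spec(tp: int, fn: int, tn: int, fp: int, z: float = 1.96):
    """Sensitivity and specificity with Wald 95% CIs from 2x2 counts."""
    def prop_ci(x, n):
        p = x / n
        se = math.sqrt(p * (1 - p) / n)
        return p, (p - z * se, p + z * se)
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}

# Approximate counts consistent with 205 MIs among 939 patients,
# 92.2% sensitivity and 79.7% specificity (illustrative reconstruction):
print(sens_spec(tp=189, fn=16, tn=585, fp=149))
```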

Interpretation:

The high-sensitivity troponin T assay at the cut-off point of the 99th percentile was highly sensitive for the diagnosis of myocardial infarction by two hours after presentation and had prognostic utility beyond that of the conventional assay. To rule out myocardial infarction, the optimal time to test a second sample using the high-sensitivity troponin T level may be four to six hours after symptom onset, but this finding needs verification in future studies before it can become routine practice.

For novel cardiac markers to be clinically useful in diagnosing acute coronary syndromes, they need to show their incremental utility beyond that of existing markers, with therapeutic implications designed to improve patient care. Recent improvement in the performance of troponin assays to comply with current guidelines for the diagnosis of acute myocardial infarction1 has resulted in a new generation of assays with enhanced clinical sensitivity that are now available for use in clinical care. Assays with high sensitivity have been shown to detect myocardial injury earlier2–8 and identify more patients at risk of future adverse outcomes8–10 than conventional assays.

We conducted a study to assess whether early measurement (at presentation and two hours later) with a high-sensitivity troponin T assay could (a) effectively rule out myocardial infarction without the need for later measurement of troponin levels and (b) identify more patients at risk of adverse cardiac events within one year of follow-up compared with a conventional troponin assay.

11.

Background:

Heart failure is a leading cause of admission to hospital, but whether the incidence of heart failure is increasing or decreasing is uncertain. We examined temporal trends in the incidence and outcomes of heart failure in Ontario, Canada.

Methods:

Using population-based administrative databases of hospital discharge abstracts and physician health insurance claims, we identified 419 551 incident cases of heart failure in Ontario between Apr. 1, 1997, and Mar. 31, 2008. All patients were classified as either inpatients or outpatients based on the patient’s location at the time of the initial diagnosis. We tracked subsequent outcomes through linked administrative databases.

Results:

The age- and sex-standardized incidence of heart failure decreased 32.7% from 454.7 per 100 000 people in 1997 to 306.1 per 100 000 people in 2007 (p < 0.001). A comparable decrease in incidence occurred in both inpatient and outpatient settings. The greatest relative decrease occurred in patients aged 85 and over. Over the study period, 1-year risk-adjusted mortality decreased from 17.7% in 1997 to 16.2% in 2007 (p = 0.02) for outpatients, with a nonsignificant decrease from 35.7% in 1997 to 33.8% in 2007 (p = 0.1) for inpatients.

Interpretation:

The incidence of heart failure decreased substantially during the study period. Nevertheless, the prognosis for patients with heart failure remains poor and is associated with high mortality.

Heart failure is a leading cause of admission to hospital and is associated with a poor long-term prognosis. In 1996, it was projected that the number of incident hospital admissions for heart failure in Canada would more than double by 2025 because of the aging population and increasing numbers of myocardial infarction survivors.1 By 2000, patients with heart failure accounted for the second highest number of hospital days in Canada, and the estimated 1-year case-fatality rate, after the first hospital admission, exceeded 35%.2,3 However, some recent studies suggest that admission and mortality rates for heart failure may actually be falling. It is unclear whether these changes represent lower rates of new incident cases, fewer readmissions, a shift to more outpatient care or improved survival.4,5

We sought to examine temporal trends in the incidence and outcomes of heart failure in Ontario, Canada, in both inpatient and outpatient settings to assess the progress made in reducing the population burden of heart failure and to gain insight into the effectiveness of current preventive and therapeutic strategies.

12.

Background:

Whether the risk of cancer is increased among patients with herpes zoster is unclear. We investigated the risk of cancer among patients with herpes zoster using a nationwide health registry in Taiwan.

Methods:

We identified 35 871 patients with newly diagnosed herpes zoster during 2000–2008 from the National Health Insurance Research Database in Taiwan. We analyzed the standardized incidence ratios for various types of cancer.

Results:

Among patients with herpes zoster, 895 cases of cancer were reported. Patients with herpes zoster were not at increased risk of cancer (standardized incidence ratio 0.99, 95% confidence interval 0.93–1.06). Among the subgroups stratified by sex, age and years of follow-up, there was also no increased risk of overall cancer.
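A standardized incidence ratio is the observed case count divided by the count expected from reference rates, with a CI that reflects Poisson variation in the observed count. A worked check against the numbers above; the expected count is inferred from the reported SIR, not stated in the abstract:

```python
import math

def sir_ci(observed: int, expected: float, z: float = 1.96):
    """Standardized incidence ratio with an approximate 95% CI.

    Uses the normal approximation on the log scale:
    SE(log SIR) ~= 1/sqrt(observed).
    """
    sir = observed / expected
    se = 1 / math.sqrt(observed)
    return sir, (sir * math.exp(-z * se), sir * math.exp(z * se))

# 895 observed cancers; ~904 expected reproduces the reported
# SIR 0.99 (95% CI ~0.93-1.06):
print(sir_ci(895, 904))
```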

Interpretation:

Herpes zoster is not associated with increased risk of cancer in the general population. These findings do not support extensive investigations for occult cancer or enhanced surveillance for cancer in patients with herpes zoster.

Herpes zoster, or shingles, is caused by reactivation of the varicella–zoster virus, a member of the Herpesviridae family. Established risk factors for herpes zoster include older age, chronic kidney disease, malignant disease and immunocompromised conditions (e.g., those experienced by patients with AIDS, transplant recipients, and those taking immunosuppressive medication because of autoimmune diseases).1–5 Herpes zoster occurs more frequently among patients with cancer than among those without cancer;6,7 however, the relation between herpes zoster and risk of subsequent cancer is not well established.

In 1955, Wyburn-Mason and colleagues reported several cases of skin cancer that arose from the healed lesions of herpes zoster.8 In 1972, a retrospective cohort study and a case series reported a higher prevalence of herpes zoster among patients with cancer, especially hematological cancer;6,7 however, they did not investigate whether herpes zoster was a risk factor for cancer. In 1982, Ragozzino and colleagues found no increased incidence of cancer (including hematologic malignancy) among patients with herpes zoster.9 There have been reports of significantly increased risk of some subtypes of cancer among patients aged more than 65 years with herpes zoster10 and among those admitted to hospital because of herpes zoster.11 Although these studies have suggested an association between herpes zoster and subsequent cancer, their results might not be generalizable because of differences in the severity of herpes zoster in the enrolled patients.

Whether the risk of cancer is increased after herpes zoster remains controversial. The published studies8–11 were nearly all conducted in western countries, and data focusing on Asian populations are lacking.12 The results from western countries may not be directly generalizable to other ethnic groups because of differences in cancer types and profiles. Recently, a study reported that herpes zoster ophthalmicus may be a marker of increased risk of cancer in the following year.13 In the present study, we investigated the incidence rate ratio of cancer, including specific types of cancer, after diagnosis of herpes zoster.

13.

Background:

Use of the serum creatinine concentration, the most widely used marker of kidney function, has been associated with under-reporting of chronic kidney disease and late referral to nephrologists, especially among women and elderly people. To improve appropriateness of referrals, automatic reporting of the estimated glomerular filtration rate (eGFR) by laboratories was introduced in the province of Ontario, Canada, in March 2006. We hypothesized that such reporting, along with an ad hoc educational component for primary care physicians, would increase the number of appropriate referrals.

Methods:

We conducted a population-based before–after study with interrupted time-series analysis at a tertiary care centre. All referrals to nephrologists received at the centre during the year before and the year after automatic reporting of the eGFR was introduced were eligible for inclusion. We used regression analysis with autoregressive errors to evaluate whether such reporting by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.

Results:

A total of 2672 patients were included in the study. In the year after automatic reporting began, the number of referrals from primary care physicians increased by 80.6% (95% confidence interval [CI] 74.8% to 86.9%). The number of appropriate referrals increased by 43.2% (95% CI 38.0% to 48.2%). There was no significant change in the proportion of appropriate referrals between the two periods (−2.8%, 95% CI −26.4% to 43.4%). The proportion of elderly and female patients who were referred increased after reporting was introduced.

Interpretation:

The total number of referrals increased after automatic reporting of the eGFR began, especially among women and elderly people. The number of appropriate referrals also increased, but the proportion of appropriate referrals did not change significantly. Future research should be directed to understanding the reasons for inappropriate referral and to developing novel interventions for improving the referral process.

Until recently, the serum creatinine concentration was used universally as an index of the glomerular filtration rate (GFR) to identify and monitor chronic kidney disease.1 The serum creatinine concentration depends on several factors, the most important being muscle mass.1 Women as compared with men, and elderly people as compared with young adults, tend to have lower muscle mass for the same degree of kidney function and thus have lower serum creatinine concentrations.2,3 Consequently, the use of the serum creatinine concentration is associated with underrecognition of chronic kidney disease, delayed workup for chronic kidney disease and late referral to nephrologists, particularly among women and elderly people. Late referral has been associated with increased mortality among patients receiving dialysis.3–11

In 1999, the Modification of Diet in Renal Disease formula was introduced to calculate the estimated GFR (eGFR).12,13 This formula uses the patient’s serum creatinine concentration, age, sex and race (whether the patient is black or not). All of these variables are easily available to laboratories except race. Laboratories report the eGFR for non-black people, with advice to practitioners to multiply the result by 1.21 if their patient is black. Given that reporting of the eGFR markedly improves detection of chronic kidney disease,14,15 several national organizations recommended that laboratories automatically calculate and report the eGFR when the serum creatinine concentration is requested.16–19 These organizations also provided guidelines on appropriate referral to nephrology based on the eGFR value.

Although several studies have reported increases in referrals to nephrologists after automatic reporting of the eGFR was introduced,20–26 there is limited evidence on the impact that such reporting has had on the appropriateness of referrals. An increase in the number of inappropriate referrals would affect health care delivery, diverting scarce resources to the evaluation of relatively mild kidney disease. It also would likely increase wait times for all nephrology referrals and have a financial impact on the system, because specialist care is more costly than primary care.

We conducted a study to evaluate whether the introduction of automatic reporting of the eGFR by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.

14.
Manea L, Gilbody S, McMillan D. CMAJ 2012;184(3):E191–E196.

Background:

The brief Patient Health Questionnaire (PHQ-9) is commonly used to screen for depression, with a score of 10 often recommended as the cut-off. We summarized the psychometric properties of the PHQ-9 across a range of studies and cut-off scores to select the optimal cut-off for detecting depression.

Methods:

We searched Embase, MEDLINE and PsycINFO from 1999 to August 2010 for studies that reported the diagnostic accuracy of the PHQ-9 to diagnose major depressive disorders. We calculated summary sensitivity, specificity, likelihood ratios and diagnostic odds ratios for detecting major depressive disorder at different cut-off scores and in different settings. We used random-effects bivariate meta-analysis at cut-off scores between 7 and 15 to produce summary receiver operating characteristic curves.

Results:

We identified 18 validation studies (n = 7180) conducted in various clinical settings. Eleven studies provided details about the diagnostic properties of the questionnaire at more than one cut-off score (including 10), four studies reported a cut-off score of 10, and three studies reported cut-off scores other than 10. The pooled specificity results ranged from 0.73 (95% confidence interval [CI] 0.63–0.82) for a cut-off score of 7 to 0.96 (95% CI 0.94–0.97) for a cut-off score of 15. There was major variability in sensitivity for cut-off scores between 7 and 15. There were no substantial differences in the pooled sensitivity and specificity for a range of cut-off scores (8–11).

Interpretation:

The PHQ-9 was found to have acceptable diagnostic properties for detecting major depressive disorder for cut-off scores between 8 and 11. Authors of future validation studies should consistently report the outcomes for different cut-off scores.

Depressive disorders are still under-recognized in medical settings despite major associated disability and costs. The use of short screening questionnaires may improve the recognition of depression in different medical settings.1 The depression module of the Patient Health Questionnaire (PHQ-9) has become increasingly popular in research and practice over the past decade.2 In its initial validation study, a score of 10 or higher had a sensitivity of 88% and a specificity of 88% for detecting major depressive disorders. Thus, a score of 10 has been recommended as the cut-off score for diagnosing this condition.3

In a recent review of the PHQ-9, Kroenke and colleagues argued against inflexible adherence to a single cut-off score.2 A recent analysis of the management of depression in general practice in the United Kingdom showed that the accuracy of predicting major depressive disorder could be improved by using 12 as the cut-off score.4

Given the widespread use of the PHQ-9 in screening for depression and that certain cut-off scores are being recommended as part of national strategies to screen for depression (based on initial validation studies, which might not be generalizable),4,5 we attempted to determine whether the cut-off of 10 is optimal for screening for depression. This question could not be answered by two previous systematic reviews6,7 because of the small number of primary studies available at the time. We also aimed to provide greater clarity about the proper use of the PHQ-9 given the many settings in which it is used.

15.
16.

Background:

Falls cause more than 60% of head injuries in older adults. Lack of objective evidence on the circumstances of these events is a barrier to prevention. We analyzed video footage to determine the frequency of and risk factors for head impact during falls in older adults in 2 long-term care facilities.

Methods:

Over 39 months, we captured on video 227 falls involving 133 residents. We used a validated questionnaire to analyze the mechanisms of each fall. We then examined whether the probability of head impact was associated with upper-limb protective responses (hand impact) and fall direction.

Results:

Head impact occurred in 37% of falls, usually onto a vinyl or linoleum floor. Hand impact occurred in 74% of falls but had no significant effect on the probability of head impact (p = 0.3). An increased probability of head impact was associated with a forward initial fall direction, compared with backward falls (odds ratio [OR] 2.7, 95% confidence interval [CI] 1.3–5.9) or sideways falls (OR 2.8, 95% CI 1.2–6.3). In 36% of sideways falls, residents rotated to land backwards, which reduced the probability of head impact (OR 0.2, 95% CI 0.04–0.8).
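The odds ratios above compare the odds of head impact between fall directions. A minimal sketch of an odds ratio with a Woolf (log-scale) CI from 2×2 counts; the counts are illustrative only, not the study's data:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio (a*d)/(b*c) with a Woolf 95% CI.

    a, b = events / non-events in the exposed group;
    c, d = events / non-events in the unexposed group.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, (or_ * math.exp(-z * se), or_ * math.exp(z * se))

# Illustrative: head impact in forward vs. backward falls
print(odds_ratio_ci(40, 45, 20, 60))  # OR ~2.7
```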

Interpretation:

Head impact was common in observed falls in older adults living in long-term care facilities, particularly in forward falls. Backward rotation during descent appeared to be protective, but hand impact was not. Attention to upper-limb strength and teaching rotational falling techniques (as in martial arts training) may reduce fall-related head injuries in older adults.

Falls from standing height or lower are the cause of more than 60% of hospital admissions for traumatic brain injury in adults older than 65 years.1–5 Traumatic brain injury accounts for 32% of hospital admissions and more than 50% of deaths from falls in older adults.1,6–8 Furthermore, the incidence and age-adjusted rate of fall-related traumatic brain injury is increasing,1,9 especially among people older than 80 years, among whom rates have increased threefold over the past 30 years.10 One-quarter of fall-related traumatic brain injuries in older adults occur in long-term care facilities.1

The development of improved strategies to prevent fall-related traumatic brain injuries is an important but challenging task. About 60% of residents in long-term care facilities fall at least once per year,11 and falls result from complex interactions of physiologic, environmental and situational factors.12–16 Any fall from standing height has sufficient energy to cause brain injury if direct impact occurs between the head and a rigid floor surface.17–19 Improved understanding is needed of the factors that separate falls that result in head impact and injury from those that do not.1,10 Falls in young adults rarely result in head impact, owing to protective responses such as use of the upper limbs to stop the fall, trunk flexion and rotation during descent.20–23 We have limited evidence of the efficacy of protective responses to falls among older adults.

In the current study, we analyzed video footage of real-life falls among older adults to estimate the prevalence of head impact from falls, and to examine the association between head impact and biomechanical and situational factors.

17.

Background:

The role of atrial fibrillation in cognitive impairment and dementia, independent of stroke, is uncertain. We sought to determine the association of atrial fibrillation with cognitive and physical impairment in a large group of patients at high cardiovascular risk.

Methods:

We conducted a post-hoc analysis of two randomized controlled trials involving 31 546 patients, the aims of which were to evaluate the efficacy of treatment with ramipril plus telmisartan (ONTARGET) or telmisartan alone (TRANSCEND) in reducing cardiovascular disease. We evaluated the cognitive function of participants at baseline and after two and five years using the Mini–Mental State Examination (MMSE). In addition, we recorded incident dementia, loss of independence in activities of daily living and admission to long-term care facilities. We used a Cox regression model adjusting for main confounders to determine the association between atrial fibrillation and our primary outcomes: a decrease of three or more points in MMSE score, incident dementia, loss of independence in performing activities of daily living and admission to long-term care.
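A Cox model of time to incident dementia with atrial fibrillation as the exposure could look like the following lifelines sketch; the file and column names are assumptions, not the trials' analysis code:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Assumed layout: one row per participant with follow-up time in months,
# an event flag for incident dementia, and baseline covariates.
df = pd.read_csv("ontarget_transcend.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(
    df[["followup_months", "dementia", "afib", "age", "sbp", "prior_stroke"]],
    duration_col="followup_months",
    event_col="dementia",
)
cph.print_summary()  # hazard ratio for 'afib' adjusted for the other columns
```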

Results:

We enrolled 31 506 participants for whom complete information on atrial fibrillation was available, 70.4% of whom were men. The mean age of participants was 66.5 years, and the mean baseline MMSE score was 27.7 (standard deviation 2.9) points. At baseline, 1016 participants (3.3%) had atrial fibrillation, with the condition developing in an additional 2052 participants (6.5%) during a median follow-up of 56 months. Atrial fibrillation was associated with an increased risk of cognitive decline (hazard ratio [HR] 1.14, 95% confidence interval [CI] 1.03–1.26), new dementia (HR 1.30, 95% CI 1.14–1.49), loss of independence in performing activities of daily living (HR 1.35, 95% CI 1.19–1.54) and admission to long-term care facilities (HR 1.53, 95% CI 1.31–1.79). Results were consistent among participants with and without stroke or receiving antihypertensive drugs.

Interpretation:

Cognitive and functional decline are important consequences of atrial fibrillation, even in the absence of overt stroke.

Atrial fibrillation is an important and modifiable cause of ischemic stroke, which may result in considerable physical and cognitive disability.1 In addition, atrial fibrillation is associated with an increased risk of covert cerebral infarction, which is reported in about one-quarter of patients with atrial fibrillation who undergo magnetic resonance imaging of the brain.2 Thus, atrial fibrillation may be an important determinant of cognitive and functional decline, even in the absence of clinical ischemic stroke. However, previous epidemiologic studies evaluating atrial fibrillation’s association with cognitive function have been inconsistent,3–13 and very few have evaluated its association with functional outcomes.14

A recent systematic review showed convincing evidence of an association between atrial fibrillation and dementia in patients with a history of stroke, but it concluded that there was considerable uncertainty of a link between atrial fibrillation and dementia in patients with no history of stroke.15 Large prospective cohort studies are required to determine a true association between atrial fibrillation and cognitive outcomes.

In this study, we sought to determine the prospective association between atrial fibrillation and cognitive decline, loss of independence in activities of daily living and admission to long-term care facilities, using data from a large group of patients included in the ONTARGET and TRANSCEND trials.16,17

18.

Background:

Hospital readmissions are important patient outcomes that can be accurately captured with routinely collected administrative data. Hospital-specific readmission rates have been reported as a quality-of-care indicator. However, the extent to which these measures vary with different calculation methods is uncertain.

Methods:

We identified all discharges from Ontario hospitals from 2005 to 2010 and determined whether patients died or were urgently readmitted within 30 days. For each hospital, we calculated 4 distinct observed-to-expected ratios, estimating the expected number of events using different adjustments for confounders (age and sex v. complete) and different units of analysis (all admissions v. single admission per patient).
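To make the calculation concrete, the sketch below computes one observed-to-expected ratio in Python under the “age and sex” adjustment; the table and its column names (hospital, age, male, event) are hypothetical, and the study repeated this computation for all four combinations of covariate set and unit of analysis.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical admission-level data: hospital, covariates and whether the
# patient died or was urgently readmitted within 30 days.
admissions = pd.DataFrame({
    "hospital": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "age":      [71, 54, 80, 66, 59, 74, 62, 69],
    "male":     [1, 0, 1, 1, 0, 0, 1, 0],
    "event":    [1, 0, 0, 1, 0, 1, 0, 1],
})

# Age-and-sex adjustment: a logistic model yields each admission's
# expected probability of the outcome.
model = smf.logit("event ~ age + male", data=admissions).fit(disp=0)
admissions["expected"] = model.predict(admissions)

# Observed-to-expected ratio per hospital: observed events divided by the
# sum of the model-based expected risks.
oe = admissions.groupby("hospital").apply(
    lambda g: g["event"].sum() / g["expected"].sum()
)
print(oe)

Substituting a complete confounder set for “age + male”, or keeping a single admission per patient before fitting, would produce the other ratios the study compares.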

Results:

We included 3 148 648 admissions to hospital for 1 802 704 patients in 162 hospitals. Ratios adjusted for age and sex alone had the greatest variation. Within hospitals, ranges of the 4 ratios averaged 31% of the overall estimate. Readmission ratios adjusted for age and sex showed the lowest correlation (Spearman correlation coefficient 0.48–0.68). Hospital rankings based on the different measures had an average range of 47.4 (standard deviation 32.2) out of 162.
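Comparing the resulting hospital rankings reduces to a rank correlation between competing ratios. The sketch below, using invented O/E values for five hospitals, shows how a Spearman coefficient of the kind reported here would be obtained.

from scipy.stats import spearmanr

# Hypothetical O/E ratios for five hospitals under two adjustment schemes.
oe_age_sex  = [1.10, 0.85, 1.30, 0.95, 1.02]
oe_complete = [1.05, 0.90, 1.18, 1.01, 0.98]

rho, p = spearmanr(oe_age_sex, oe_complete)
print(f"Spearman correlation between the two sets of ratios: {rho:.2f}")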

Interpretation:

We found notable variation in rates of death or urgent readmission within 30 days based on the extent of adjustment for confounders and the unit of analysis. Slight changes in the methods used to calculate hospital-specific readmission rates influence their values and the consequent rankings of hospitals. Our results highlight the caution required when comparing hospital performance using rates of death or urgent readmission within 30 days.

Readmission rates are used to gauge and compare hospital performance and have been reported around the world.1–4 These rates create great public interest and concern regarding the local quality of health care. A recently created Canadian website reporting indicators including readmission rates crashed when it experienced 15 times more hits than expected.5,6 Policy-makers in some jurisdictions have implemented programs linking readmission rates to reimbursement.7

The influence of the statistical methods used to calculate readmission rates has not been extensively explored. Variation exists in the methods used to calculate readmission rates: in Australia, patient-level covariates are not adjusted for;8 in the United States, Medicare uses a hierarchical model to adjust for patient age, sex and comorbidity, in addition to clustering of patients within hospitals.9 Furthermore, the patient populations included when calculating readmission rates vary, from a limited group of diagnoses in the US4 to almost all admissions to hospital in Great Britain.10

Therefore, the methods used to determine readmission rates vary extensively, with no apparent consensus on how these statistics should be calculated. We calculated adjusted hospital-specific rates of death or urgent readmission within 30 days and hospital rankings, varying 2 key factors relevant to generating these statistics: the completeness of confounder adjustment and the inclusion of all admissions to hospital versus a single admission per patient. Our goal was to determine the reliability of early death or urgent readmission rates as an indicator of hospital performance.

19.

Background:

The effect of hospital-acquired infection with Clostridium difficile on length of stay in hospital is not yet fully understood. We determined the independent impact of hospital-acquired infection with C. difficile on length of stay in hospital.

Methods:

We conducted a retrospective observational cohort study of admissions to hospital between July 1, 2002, and Mar. 31, 2009, at a single academic hospital. We measured the association between infection with hospital-acquired C. difficile and time to discharge from hospital using Kaplan–Meier methods and a Cox multivariable proportional hazards regression model. We controlled for baseline risk of death and accounted for C. difficile as a time-varying effect.
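Because C. difficile status changes during a stay, the exposure must be coded per interval rather than per patient. The sketch below shows this with lifelines’ CoxTimeVaryingFitter in Python; the long-format layout, column names and toy values are all assumptions for illustration, not the study’s data.

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per interval of a stay; a patient who acquires C. difficile has
# the stay split at the day of acquisition (patients 2 and 5 below).
intervals = pd.DataFrame({
    "id":         [1, 2, 2, 3, 4, 5, 5],
    "start":      [0, 0, 5, 0, 0, 0, 3],
    "stop":       [7, 5, 34, 9, 6, 3, 20],
    "cdiff":      [0, 0, 1, 0, 0, 0, 1],    # exposure status within the interval
    "death_risk": [0.05, 0.20, 0.20, 0.10, 0.02, 0.15, 0.15],  # baseline risk of death
    "discharged": [1, 0, 1, 1, 1, 0, 1],    # event = discharge from hospital
})

ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col="id", event_col="discharged",
        start_col="start", stop_col="stop")
ctv.print_summary()  # exp(coef) < 1 for cdiff would indicate delayed discharge

A hazard ratio for discharge below 1, as in the results reported here, means that exposed patients leave hospital more slowly than comparable unexposed patients.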

Results:

Hospital-acquired infection with C. difficile was identified in 1393 of 136 877 admissions to hospital (overall risk 1.02%, 95% confidence interval [CI] 0.97%–1.06%). The crude median length of stay in hospital was greater for patients with hospital-acquired C. difficile (34 d) than for those without C. difficile (8 d). Survival analysis showed that hospital-acquired infection with C. difficile increased the median length of stay in hospital by six days. In adjusted analyses, hospital-acquired C. difficile was significantly associated with time to discharge, modified by baseline risk of death and time to acquisition of C. difficile. The hazard ratio for discharge by day 7 among patients with hospital-acquired C. difficile was 0.55 (95% CI 0.39–0.70) for patients in the lowest decile of baseline risk of death and 0.45 (95% CI 0.32–0.58) for those in the highest decile; for discharge by day 28, the corresponding hazard ratios were 0.74 (95% CI 0.60–0.87) and 0.61 (95% CI 0.53–0.68).

Interpretation:

Hospital-acquired infection with C. difficile significantly prolonged length of stay in hospital, independent of baseline risk of death.

Infection with Clostridium difficile is associated with poor outcomes for patients.1,2 Previous work has determined that, regardless of baseline risk of death, for every 10 patients who acquire C. difficile in hospital, 1 patient will die.3 Clostridium difficile is also associated with increased health care costs.1,2 One of the primary mechanisms by which C. difficile increases costs is by increasing the length of time patients spend in hospital.4

Previous studies have found that hospital-acquired infection with C. difficile increases a patient’s length of stay by one to three weeks.2,5–8 However, these estimates are potentially biased. First, previous studies have not accounted for the time-varying nature of this infection. Hospital-acquired infection with C. difficile is a variable that is unknown at admission but occurs during the stay in hospital.3,9 Treating time-varying variables as fixed in time-to-event analyses leads to “time-dependent bias” and may exaggerate the association between a risk factor and the time to the event of interest. Second, our previous work3 has shown that the risk of hospital-acquired infection with C. difficile significantly increases as a patient’s baseline risk of death increases; it is, therefore, important to account for risk of death at admission when investigating the association between hospital-acquired C. difficile and length of stay.

Because of the importance of an accurate estimate of the impact of C. difficile, we conducted a retrospective observational cohort study to determine the independent association between hospital-acquired infection with C. difficile and length of stay in hospital. We accounted for each patient’s risk of death upon admission and the variable amount of time patients spent in hospital before acquiring C. difficile.
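As a sketch of how the time-varying coding avoids this bias, the hypothetical helper below splits each admission at the (assumed) day of C. difficile acquisition, so that pre-acquisition days are not misattributed to the exposed group; all names and values are illustrative, not the study’s.

import pandas as pd

def split_at_acquisition(row):
    """Return one interval if never exposed, two if C. difficile was acquired mid-stay."""
    if pd.isna(row["cdiff_day"]):
        return [(row["id"], 0, row["los"], 0, row["discharged"])]
    return [
        (row["id"], 0, row["cdiff_day"], 0, 0),                           # unexposed time
        (row["id"], row["cdiff_day"], row["los"], 1, row["discharged"]),  # exposed time
    ]

# Hypothetical admissions: length of stay (los) in days and, where
# applicable, the day on which C. difficile was acquired.
admissions = pd.DataFrame({
    "id": [1, 2],
    "los": [8, 34],
    "cdiff_day": [None, 5],
    "discharged": [1, 1],
})

intervals = pd.DataFrame(
    [seg for _, row in admissions.iterrows() for seg in split_at_acquisition(row)],
    columns=["id", "start", "stop", "cdiff", "discharged"],
)
print(intervals)  # in the long format used by a time-varying Cox model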
