Similar Articles
 Found 20 similar articles (search time: 93 ms)
1.

Background:

Uncircumcised boys are at higher risk for urinary tract infections than circumcised boys. Whether this risk varies with the visibility of the urethral meatus is not known. Our aim was to determine whether there is a hierarchy of risk among uncircumcised boys whose urethral meatuses are visible to differing degrees.

Methods:

We conducted a prospective cross-sectional study in one pediatric emergency department. We screened 440 circumcised and uncircumcised boys. Of these, 393 boys who were not toilet trained and for whom the treating physician had requested a catheter urine culture were included in our analysis. At the time of catheter insertion, a nurse characterized the visibility of the urethral meatus (phimosis) using a 3-point scale (completely visible, partially visible or nonvisible). Our primary outcome was urinary tract infection, and our primary exposure variable was the degree of phimosis: completely visible versus partially or nonvisible urethral meatus.

Results:

Urine cultures were positive in 30.0% of uncircumcised boys with a completely visible meatus and in 23.8% of those with a partially or nonvisible meatus (p = 0.4). The unadjusted odds ratio (OR) for culture growth was 0.73 (95% confidence interval [CI] 0.35–1.52), and the adjusted OR was 0.41 (95% CI 0.17–0.95). Of the boys who were circumcised, 4.8% had urinary tract infections, a rate significantly lower than that among uncircumcised boys with a completely visible urethral meatus (unadjusted OR 0.12 [95% CI 0.04–0.39], adjusted OR 0.07 [95% CI 0.02–0.26]).
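The odds ratios above come from 2×2 comparisons of infection by exposure group. A minimal sketch of an unadjusted OR with a Wald confidence interval follows; the counts are hypothetical, since the abstract reports only the ORs and CIs, and the adjusted ORs additionally control for covariates via regression, which this sketch does not do.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald CI from a 2x2 table.
    a/b: infected/uninfected in the exposed group;
    c/d: infected/uninfected in the unexposed group.
    Counts are hypothetical -- the study reports only ORs and CIs."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)
```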

Interpretation:

We did not see variation in the risk of urinary tract infection with the visibility of the urethral meatus among uncircumcised boys. Compared with circumcised boys, we saw a higher risk of urinary tract infection in uncircumcised boys, irrespective of urethral visibility.

Urinary tract infections are one of the most common serious bacterial infections in young children.1–6 Prompt diagnosis is important, because children with urinary tract infection are at risk for bacteremia6 and renal scarring.1,7 Uncircumcised boys have a much higher risk of urinary tract infection than circumcised boys,1,3,4,6,8–12 likely as a result of heavier colonization under the foreskin with pathogenic bacteria, which leads to ascending infections.13,14 The American Academy of Pediatrics recently suggested that circumcision status be used to select which boys should be evaluated for urinary tract infection.1 However, whether all uncircumcised boys are at equal risk for infection, or whether the risk varies with the visibility of the urethral opening, is not known. It has been suggested that a subset of uncircumcised boys with a poorly visible urethral opening are at increased risk of urinary tract infection,15–17 leading some experts to consider giving children with tight foreskins topical cortisone or circumcision to prevent urinary tract infections.13,18–21

We designed a study to challenge the opinion that all uncircumcised boys are at increased risk for urinary tract infections. We hypothesized a hierarchy of risk among uncircumcised boys depending on the visibility of the urethral meatus, with those with a partially or nonvisible meatus at highest risk, and those with a completely visible meatus having a level of risk similar to that of boys who have been circumcised. Our primary aim was to compare the proportion of urinary tract infections among uncircumcised boys with a completely visible meatus with that among boys with a partially or nonvisible meatus.

2.

Background:

Whether the risk of cancer is increased among patients with herpes zoster is unclear. We investigated the risk of cancer among patients with herpes zoster using a nationwide health registry in Taiwan.

Methods:

We identified 35 871 patients with newly diagnosed herpes zoster during 2000–2008 from the National Health Insurance Research Database in Taiwan. We analyzed the standardized incidence ratios for various types of cancer.

Results:

Among patients with herpes zoster, 895 cases of cancer were reported. Patients with herpes zoster were not at increased risk of cancer (standardized incidence ratio 0.99, 95% confidence interval 0.93–1.06). Among the subgroups stratified by sex, age and years of follow-up, there was also no increased risk of overall cancer.
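The standardized incidence ratio above is the observed case count divided by the count expected from population rates, with the confidence interval usually taken on the log scale. A sketch follows; the expected count of roughly 904 is back-calculated from the reported SIR of 0.99 (an assumption, since the abstract does not state it directly).

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio with an approximate 95% CI (log-normal)."""
    sir = observed / expected
    half_width = z / math.sqrt(observed)  # SE of log(SIR) ~ 1/sqrt(observed)
    return sir, sir * math.exp(-half_width), sir * math.exp(half_width)

# 895 observed cancers; expected ~904 is back-calculated from SIR = 0.99.
sir, lo, hi = sir_with_ci(895, 904)
print(round(sir, 2), round(lo, 2), round(hi, 2))  # 0.99 0.93 1.06, matching the abstract
```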

Interpretation:

Herpes zoster is not associated with increased risk of cancer in the general population. These findings do not support extensive investigations for occult cancer or enhanced surveillance for cancer in patients with herpes zoster.

Herpes zoster, or shingles, is caused by reactivation of the varicella–zoster virus, a member of the Herpesviridae family. Established risk factors for herpes zoster include older age, chronic kidney disease, malignant disease and immunocompromised conditions (e.g., those experienced by patients with AIDS, transplant recipients, and those taking immunosuppressive medication because of autoimmune diseases).1–5 Herpes zoster occurs more frequently among patients with cancer than among those without cancer;6,7 however, the relation between herpes zoster and the risk of subsequent cancer is not well established.

In 1955, Wyburn-Mason and colleagues reported several cases of skin cancer that arose from the healed lesions of herpes zoster.8 In 1972, a retrospective cohort study and a case series reported a higher prevalence of herpes zoster among patients with cancer, especially hematological cancer;6,7 however, they did not investigate whether herpes zoster was a risk factor for cancer. In 1982, Ragozzino and colleagues found no increased incidence of cancer (including hematologic malignancy) among patients with herpes zoster.9 There have been reports of significantly increased risk of some subtypes of cancer among patients aged more than 65 years with herpes zoster10 and among those admitted to hospital because of herpes zoster.11 Although these studies have suggested an association between herpes zoster and subsequent cancer, their results might not be generalizable because of differences in the severity of herpes zoster in the enrolled patients.

Whether the risk of cancer is increased after herpes zoster remains controversial. The published studies8–11 were nearly all conducted in western countries, and data focusing on Asian populations are lacking.12 The results from western countries may not be directly generalizable to other ethnic groups because of differences in cancer types and profiles. Recently, a study reported that herpes zoster ophthalmicus may be a marker of increased risk of cancer in the following year.13 In the present study, we investigated the incidence rate ratio of cancer, including specific types of cancer, after diagnosis of herpes zoster.

3.

Background:

There have been several published reports of inflammatory ocular adverse events, mainly uveitis and scleritis, among patients taking oral bisphosphonates. We examined the risk of these adverse events in a pharmacoepidemiologic cohort study.

Methods:

We conducted a retrospective cohort study involving residents of British Columbia who had visited an ophthalmologist from 2000 to 2007. Within the cohort, we identified all people who were first-time users of oral bisphosphonates and who were followed to the first inflammatory ocular adverse event, death, termination of insurance or the end of the study period. We defined an inflammatory ocular adverse event as scleritis or uveitis. We used a Cox proportional hazard model to determine the adjusted rate ratios. As a sensitivity analysis, we performed a propensity-score–adjusted analysis.

Results:

The cohort comprised 934 147 people, including 10 827 first-time users of bisphosphonates and 923 320 nonusers. The incidence rate among first-time users was 29/10 000 person-years for uveitis and 63/10 000 person-years for scleritis. In contrast, the incidence among people who did not use oral bisphosphonates was 20/10 000 person-years for uveitis and 36/10 000 person-years for scleritis (number needed to harm: 1100 and 370, respectively). First-time users had an elevated risk of uveitis (adjusted relative risk [RR] 1.45, 95% confidence interval [CI] 1.25–1.68) and scleritis (adjusted RR 1.51, 95% CI 1.34–1.68). The propensity-score–adjusted analysis did not change the results (uveitis: RR 1.50, 95% CI 1.29–1.73; scleritis: RR 1.53, 95% CI 1.39–1.70).
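The numbers needed to harm follow directly from the incidence-rate differences per 10 000 person-years; a quick check of the reported figures:

```python
def nnh(rate_exposed, rate_unexposed, per=10_000):
    """Number needed to harm: reciprocal of the incidence-rate difference."""
    return per / (rate_exposed - rate_unexposed)

print(round(nnh(29, 20)))  # uveitis: 1111, reported rounded to ~1100
print(round(nnh(63, 36)))  # scleritis: 370
```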

Interpretation:

People using oral bisphosphonates for the first time may be at higher risk of scleritis and uveitis than people with no bisphosphonate use. Patients taking bisphosphonates should be familiar with the signs and symptoms of these conditions, so that they can immediately seek assessment by an ophthalmologist.

Oral bisphosphonates are the most frequently prescribed class of medications for the prevention of osteoporosis. Most literature about the safety of bisphosphonates has focused on long-term adverse events, including atypical fractures,1 atrial fibrillation,2 and esophageal and colon cancer.3

Uveitis and scleritis are ocular inflammatory diseases that are associated with major morbidity. Anterior uveitis is the most common type of uveitis, with an estimated incidence of 11.4–100.0 cases/100 000 person-years.4,5 Both diseases require immediate treatment to prevent complications, which may include cataracts, glaucoma, macular edema and scleral perforation. Numerous case reports and case series have described an association between the use of oral bisphosphonates and anterior uveitis6–8 and scleritis.8,9 In most reported cases, severe eye pain developed within days of taking an oral bisphosphonate, and the symptom resolved after stopping the agent.6,9 Only one large epidemiologic study has examined the association between the use of bisphosphonates and ocular inflammatory diseases.10 That study did not find an association, but it was limited by a small number of events and a lack of power. Thus, the association between uveitis or scleritis and the use of oral bisphosphonates is not fully known. Given that early intervention may prevent complications, we performed a pharmacoepidemiologic study to assess the risk of these potentially serious conditions.

4.

Background:

Several biomarkers of metabolic acidosis, including lower plasma bicarbonate and higher anion gap, have been associated with greater insulin resistance in cross-sectional studies. We sought to examine whether lower plasma bicarbonate is associated with the development of type 2 diabetes mellitus in a prospective study.

Methods:

We conducted a prospective, nested case–control study within the Nurses’ Health Study. Plasma bicarbonate was measured in 630 women who did not have type 2 diabetes mellitus at the time of blood draw in 1989–1990 but developed type 2 diabetes mellitus during 10 years of follow-up. Controls were matched according to age, ethnic background, fasting status and date of blood draw. We used logistic regression to calculate odds ratios (ORs) for diabetes by category of baseline plasma bicarbonate.

Results:

After adjustment for matching factors, body mass index, plasma creatinine level and history of hypertension, women with plasma bicarbonate above the median level had lower odds of diabetes (OR 0.76, 95% confidence interval [CI] 0.60–0.96) compared with women below the median level. Those in the second (OR 0.92, 95% CI 0.67–1.25), third (OR 0.70, 95% CI 0.51–0.97) and fourth (OR 0.75, 95% CI 0.54–1.05) quartiles of plasma bicarbonate had lower odds of diabetes compared with those in the lowest quartile (p for trend = 0.04). Further adjustment for C-reactive protein did not alter these findings.

Interpretation:

Higher plasma bicarbonate levels were associated with lower odds of incident type 2 diabetes mellitus among women in the Nurses’ Health Study. Further studies are needed to confirm this finding in other populations and to elucidate the mechanism for this relation.

Resistance to insulin is central to the pathogenesis of type 2 diabetes mellitus.1 Several mechanisms may lead to insulin resistance and thereby contribute to the development of type 2 diabetes mellitus, including altered fatty acid metabolism, mitochondrial dysfunction and systemic inflammation.2 Metabolic acidosis may also contribute to insulin resistance. Human studies using the euglycemic and hyperglycemic clamp techniques have shown that mild metabolic acidosis induced by the administration of ammonium chloride results in reduced tissue insulin sensitivity.3 Subsequent studies in rat models have suggested that metabolic acidosis decreases the binding of insulin to its receptors.4,5 Finally, metabolic acidosis may also increase cortisol production,6 which in turn is implicated in the development of insulin resistance.7

Recent epidemiologic studies have shown an association between clinical markers of metabolic acidosis and greater insulin resistance or prevalence of type 2 diabetes mellitus. In the National Health and Nutrition Examination Survey, both lower serum bicarbonate and higher anion gap (even within ranges considered normal) were associated with increased insulin resistance among adults without diabetes.8 In addition, higher levels of serum lactate, a small component of the anion gap, were associated with higher odds of prevalent type 2 diabetes mellitus in the Atherosclerosis Risk in Communities study9 and with higher odds of incident type 2 diabetes mellitus in a retrospective cohort study of risk factors for diabetes in Swedish men.10 Other biomarkers associated with metabolic acidosis, including higher levels of serum ketones,11 lower urinary citrate excretion12 and low urine pH,13 have been associated in cross-sectional studies with either insulin resistance or the prevalence of type 2 diabetes mellitus. However, it is unclear whether these associations reflect cause or consequence. We sought to address this question by prospectively examining the association between plasma bicarbonate and subsequent development of type 2 diabetes mellitus in a nested case–control study within the Nurses’ Health Study.

5.

Background:

Previous studies of differences in mental health care associated with children’s sociodemographic status have focused on access to community care. We examined differences associated with visits to the emergency department.

Methods:

We conducted a 6-year population-based cohort analysis using administrative databases of visits (n = 30 656) by children aged less than 18 years (n = 20 956) in Alberta. We measured differences in the number of visits by socioeconomic and First Nations status using directly standardized rates. We examined time to return to the emergency department using a Cox regression model, and we evaluated time to follow-up with a physician by physician type using a competing risks model.
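Direct standardization, as used for the visit rates here, weights stratum-specific rates by a fixed standard population so that rates are comparable across groups with different age–sex structures. A generic sketch, with illustrative numbers not taken from the study:

```python
def directly_standardized_rate(stratum_rates, standard_pop):
    """Average stratum-specific rates, weighted by a standard population."""
    total = sum(standard_pop)
    return sum(r * w for r, w in zip(stratum_rates, standard_pop)) / total

# e.g., two age strata with rates per 100 000 and standard-population weights 3:1
print(directly_standardized_rate([10.0, 40.0], [3, 1]))  # 17.5
```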

Results:

First Nations children aged 15–17 years had the highest rate of visits for girls (7047 per 100 000 children) and boys (5787 per 100 000 children); children in the same age group from families not receiving government subsidy had the lowest rates (girls: 2155 per 100 000 children; boys: 1323 per 100 000 children). First Nations children (hazard ratio [HR] 1.64; 95% confidence interval [CI] 1.30–2.05), and children from families receiving government subsidies (HR 1.60, 95% CI 1.30–1.98) had a higher risk of return to an emergency department for mental health care than other children. The longest median time to follow-up with a physician was among First Nations children (79 d; 95% CI 60–91 d); this status predicted longer time to a psychiatrist (HR 0.47, 95% CI 0.32–0.70). Age, sex, diagnosis and clinical acuity also explained post-crisis use of health care.

Interpretation:

More visits to the emergency department for mental health crises were made by First Nations children and by children from families receiving a government subsidy. Sociodemographic status predicted the risk of return to the emergency department and follow-up care with a physician.

Emergency departments are a critical access point for mental health care for children who have been unable to receive care elsewhere or who are in crisis.1 Care provided in an emergency department can stabilize acute problems and facilitate urgent follow-up for symptom management and family support.1,2

Race, ethnic background and socioeconomic status have been linked to crisis-oriented patterns of care among American children.3,4 Minority children are less likely than white children to have received mental health treatment before an emergency department visit,3,4 and uninsured children are less likely to receive an urgent mental health evaluation when needed.4 Other studies, however, have shown no relation between sociodemographic status and mental health care,5,6 and it may be that different health system characteristics (e.g., pay-for-service, insurance coverage, publicly funded care) interact with sociodemographic status to influence how mental health resources are used. Canadian studies are largely absent from this discussion, despite a known relation between lower income and poorer mental health status,7 nationwide documentation of disparities faced by Aboriginal children,8–10 and government-commissioned reviews that highlight deficits in universal access to mental health care.11

We undertook the current study to examine whether sociodemographic differences exist in the rates of visits to emergency departments for mental health care and in the use of post-crisis health care services for children in Alberta. Knowledge of whether such differences exist may help identify children who could benefit from earlier intervention to prevent illness destabilization, as well as children who may be disadvantaged in the period after the emergency department visit. We hypothesized that First Nations children and children from families receiving government social assistance would have higher rates of emergency department use, lower rates of follow-up physician visits after the initial emergency department visit, and a longer time to physician follow-up.

6.

Background:

Although guidelines advise titration of palliative sedation at the end of life, in practice the depth of sedation can range from mild to deep. We investigated physicians’ considerations about the depth of continuous sedation.

Methods:

We performed a qualitative study in which 54 physicians underwent semistructured interviewing about the last patient for whom they had been responsible for providing continuous palliative sedation. We also asked about their practices and general attitudes toward sedation.

Results:

We found two approaches toward the depth of continuous sedation: starting with mild sedation and only increasing the depth if necessary, and deep sedation right from the start. Physicians described similar determinants for both approaches, including titration of sedatives to the relief of refractory symptoms, patient preferences, wishes of relatives, expert advice and esthetic consequences of the sedation. However, physicians who preferred starting with mild sedation emphasized being guided by the patient’s condition and response, and physicians who preferred starting with deep sedation emphasized ensuring that relief of suffering would be maintained. Physicians who preferred each approach also expressed different perspectives about whether patient communication was important and whether waking up after sedation is started was problematic.

Interpretation:

Physicians who choose either mild or deep sedation appear to be guided by the same objective of delivering sedation in proportion to the relief of refractory symptoms, as well as by other needs of patients and their families. This suggests that proportionality should be seen as a multidimensional notion that can result in different approaches toward the depth of sedation.

Palliative sedation is considered an appropriate option when other treatments fail to relieve suffering in dying patients.1,2 Important questions are associated with this intervention, such as how deep the sedation must be to relieve suffering and how important it is for patients and their families that the patient maintain a certain level of consciousness.1 In the national guidelines for the Netherlands, palliative sedation is defined as “the intentional lowering of consciousness of a patient in the last phase of life.”3,4 Sedatives can be administered intermittently or continuously, and the depth of palliative sedation can range from mild to deep.1,5

Continuous deep sedation until death is considered the most far-reaching and controversial type of palliative sedation. Nevertheless, it is used frequently: comparable nationwide studies in Europe show frequencies of 2.5% to 16% of all deaths.6–8 An important reason continuous deep sedation is thought of as controversial is the possible association of this practice with the hastening of death,9–11 although it is also argued that palliative sedation does not shorten life when its use is restricted to the patient’s last days of life.12,13 Guidelines for palliative sedation often advise physicians to titrate sedatives,2,3,14 meaning that the dosages are adjusted to the level needed for proper relief of symptoms. To date, research has predominantly focused on the indications for sedation and the types of medications used. In this study, we investigated how physicians decide on the depth of continuous palliative sedation and how these decisions relate to guidelines.

7.

Background:

Overweight and obesity in young people are assessed by comparing body mass index (BMI) with a reference population. However, two widely used reference standards, the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) growth curves, have different definitions of overweight and obesity, thus affecting estimates of prevalence. We compared the associations between overweight and obesity as defined by each of these curves and the presence of cardiometabolic risk factors.

Methods:

We obtained data from a population-representative study involving 2466 boys and girls aged 9, 13 and 16 years in Quebec, Canada. We calculated BMI percentiles using the CDC and WHO growth curves and compared their abilities to detect unfavourable levels of fasting lipids, glucose and insulin, and systolic and diastolic blood pressure using receiver operating characteristic curves, sensitivity, specificity and kappa coefficients.
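The sensitivity and specificity compared here between the two growth-curve definitions reduce to simple ratios from a 2×2 classification table; a minimal sketch with hypothetical counts (the study reports only the resulting percentages):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical: 8 at-risk children flagged, 2 missed; 90 low-risk cleared, 10 flagged
print(sensitivity_specificity(8, 2, 90, 10))  # (0.8, 0.9)
```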

Results:

The z scores for BMI using the WHO growth curves were higher than those using the CDC growth curves (0.35–0.43 v. 0.12–0.28, p < 0.001 for all comparisons). The WHO and CDC growth curves generated virtually identical receiver operating characteristic curves for individual or combined cardiometabolic risk factors. The definitions of overweight and obesity had low sensitivities but adequate specificities for cardiometabolic risk. Obesity as defined by the WHO or CDC growth curves discriminated cardiometabolic risk similarly, but overweight as defined by the WHO curves had marginally higher sensitivities (by 0.6%–8.6%) and lower specificities (by 2.6%–4.2%) than the CDC curves.

Interpretation:

The WHO growth curves show no significant discriminatory advantage over the CDC growth curves in detecting cardiometabolic abnormalities in children aged 9–16 years.

Pediatric obesity is associated with dyslipidemia, insulin resistance and elevated blood pressure.1–6 Thus, accurately identifying children with obesity is crucial for clinical management and public health surveillance.

Lipid screening is recommended for young people who are overweight,7,8 but studies show that estimates of the prevalence of overweight and obesity are 1%–7% lower using the growth curves of the Centers for Disease Control and Prevention (CDC) than using those of the World Health Organization (WHO).9–11 Although the CDC and WHO definitions of overweight and obesity both use approximations of overweight and obese values of body mass index (BMI) when children reach 19 years of age, the CDC growth curves use data from more recent samples of young people.12,13 Given the recent rise in the prevalence of obesity among young people, using a heavier reference population may lead to fewer children being identified as overweight or obese, and an identical BMI value may not trigger a clinical investigation.7 The Canadian Paediatric Society, in collaboration with the College of Family Physicians of Canada, Dietitians of Canada and Community Health Nurses of Canada, recently recommended that physicians switch from the CDC to the WHO growth curves for monitoring the growth of Canadian children aged 5–19 years.14 This is a major change for health providers caring for the estimated 8 million children in Canada.15

Understanding how the different growth curves affect the identification of adverse cardiometabolic risk profiles is essential for the appropriate management of overweight and obesity among young people. Thus, our objectives were to assess whether the association between BMI percentiles and cardiometabolic risk differs between the definitions of overweight and obesity based on the WHO and CDC growth curves, and to compare the sensitivity and specificity of these definitions in detecting cardiometabolic risk.

8.

Background:

Although Aboriginal adults have a higher risk of end-stage renal disease than non-Aboriginal adults, the incidence and causes of end-stage renal disease among Aboriginal children and young adults are not well described.

Methods:

We calculated age- and sex-specific incidences of end-stage renal disease among Aboriginal people less than 22 years of age using data from a national organ failure registry. Incidence rate ratios were used to compare rates between Aboriginal and white Canadians. To contrast causes of end-stage renal disease by ethnicity and age, we calculated the odds of congenital diseases, glomerulonephritis and diabetes for Aboriginal people and compared them with those for white people in the following age strata: 0 to less than 22 years, 22 to less than 40 years, 40 to less than 60 years and older than 60 years.

Results:

Incidence rate ratios of end-stage renal disease for Aboriginal children and young adults (age < 22 yr, v. white people) were 1.82 (95% confidence interval [CI] 1.40–2.38) for boys and 3.24 (95% CI 2.60–4.05) for girls. Compared with white people, congenital diseases were less common among Aboriginal people aged less than 22 years (odds ratio [OR] 0.56, 95% CI 0.36–0.86), and glomerulonephritis was more common (OR 2.18, 95% CI 1.55–3.07). An excess of glomerulonephritis, but not diabetes, was seen among Aboriginal people aged 22 to less than 40 years. The converse was true (higher risk of diabetes, lower risk of glomerulonephritis) among Aboriginal people aged 40 years and older.

Interpretation:

The incidence of end-stage renal disease is higher among Aboriginal children and young adults than among white children and young adults. This higher incidence may be driven by an increased risk of glomerulonephritis in this population.

Compared with white Canadians, Aboriginal Canadians have a higher prevalence of end-stage renal disease,1,2 which is generally attributed to their increased risk of diabetes. However, there has been limited investigation of the incidence and causes of end-stage renal disease among Aboriginal children and young adults. Because most incident cases of diabetes are identified in middle-aged adults, an excess risk of end-stage renal disease in young people would not be expected if the high risk of diabetes were responsible for the higher overall rates of end-stage renal disease among Aboriginal people. About 12.3% of children with end-stage renal disease in Canada are Aboriginal,3 but only 6.1% of Canadian children (age < 19 yr) are Aboriginal.4,5

A few reports suggest that nondiabetic renal disease is common among Aboriginal populations in North America.2,6–8 Aboriginal adults in Saskatchewan are twice as likely as white adults to have end-stage renal disease caused by glomerulonephritis,7,8 and an increased rate of mesangial proliferative glomerulonephritis has been reported among Aboriginal people in the United States.6,9 These studies suggest that diabetes may be a comorbid condition rather than the sole cause of kidney failure among some Aboriginal people in whom diabetic nephropathy is diagnosed on the basis of clinical features alone.

We estimated incidence rates of end-stage renal disease among Aboriginal children and young adults in Canada and compared them with the rates among white children and young adults. In addition, we compared the relative odds of congenital renal disease, glomerulonephritis and diabetic nephropathy in Aboriginal people with the relative odds of these conditions in white people.

9.

Background:

Although diacetylmorphine has been proven to be more effective than methadone maintenance treatment for opioid dependence, its direct costs are higher. We compared the cost-effectiveness of diacetylmorphine and methadone maintenance treatment for chronic opioid dependence refractory to treatment.

Methods:

We constructed a semi-Markov cohort model using data from the North American Opiate Medication Initiative trial, supplemented with administrative data for the province of British Columbia and other published data, to capture the chronic, recurrent nature of opioid dependence. We calculated incremental cost-effectiveness ratios to compare diacetylmorphine and methadone over 1-, 5-, 10-year and lifetime horizons.

Results:

Diacetylmorphine was found to be a dominant strategy over methadone maintenance treatment in each of the time horizons. Over a lifetime horizon, our model showed that people receiving methadone gained 7.46 discounted quality-adjusted life-years (QALYs) on average (95% credibility interval [CI] 6.91–8.01) and generated a societal cost of $1.14 million (95% CI $736 800–$1.78 million). Those who received diacetylmorphine gained 7.92 discounted QALYs on average (95% CI 7.32–8.53) and generated a societal cost of $1.10 million (95% CI $724 100–$1.71 million). Cost savings in the diacetylmorphine cohort were realized primarily because of reductions in the costs related to criminal activity. Probabilistic sensitivity analysis showed that the probability of diacetylmorphine being cost-effective at a willingness-to-pay threshold of $0 per QALY gained was 76%; the probability was 95% at a threshold of $100 000 per QALY gained. Results were confirmed over a range of sensitivity analyses.
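"Dominant" in cost-effectiveness analysis means lower cost and greater effect, so no incremental cost-effectiveness ratio needs to be quoted. A sketch using the lifetime means reported above (the function is illustrative, not the study's model):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio; 'dominant' if cheaper and more effective."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly > 0:
        return "dominant"
    return d_cost / d_qaly

# diacetylmorphine: $1.10M, 7.92 QALYs; methadone: $1.14M, 7.46 QALYs
print(icer(1.10e6, 7.92, 1.14e6, 7.46))  # dominant
```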

Interpretation:

Using mathematical modelling to extrapolate results from the North American Opiate Medication Initiative, we found that diacetylmorphine may be more effective and less costly than methadone among people with chronic opioid dependence refractory to treatment.Opioid substitution with methadone is the most common treatment of opioid dependence.13 Participation in a methadone maintenance treatment program has been associated with decreases in illicit drug use,4 criminality5 and mortality.6,7 However, longitudinal studies have shown that most people who receive opioid substitution treatment are unable to abstain from illicit drug use for sustained periods, either switching from treatment to regular opioid use or continuing to use opioids while in treatment.813 An estimated 15%–25% of the most marginalized methadone clients do not benefit from treatment in terms of sustained abstention from the use of illicit opioids.14The North American Opiate Medication Initiative was a randomized controlled trial that compared supervised, medically prescribed injectable diacetylmorphine and optimized methadone maintenance treatment in people with long-standing opioid dependence and multiple failed treatment attempts with methadone or other forms of treatment.15 The trial was conducted in two Canadian cities (Vancouver, British Columbia; and Montréal, Quebec). 
Both treatment protocols included a comprehensive range of psychosocial services (e.g., addiction counselling, relapse prevention, case management, and individual and group interventions) and primary care services (e.g., testing for blood-borne diseases, provision of HIV treatment, and treatment of acute and chronic physical and mental health complications of substance use) in keeping with Health Canada best practices.16 The results of the trial confirmed findings of prior studies showing diacetylmorphine to be more effective than methadone maintenance treatment in retaining opioid-dependent patients in treatment15,1720 and improving health and social functioning.19,21,22 Diacetylmorphine treatment has been proposed to reach a specific population of people with opioid dependence refractory to treatment who are at high risk of adverse health consequences and engagement in criminal activities to acquire the illicit drugs.

For guiding policy-makers, the North American Opiate Medication Initiative alone does not address all the important considerations for decision-making. In addition to political challenges associated with the therapy,23 there remains concern over the direct cost of diacetylmorphine over the long term, because it can be as much as 10 times greater than conventional methadone maintenance treatment.21 The North American Opiate Medication Initiative was only one year in duration, but a policy to introduce diacetylmorphine might have both positive and negative longer-term implications.

We extrapolated outcomes from the North American Opiate Medication Initiative to estimate the long-term cost-effectiveness of diacetylmorphine versus methadone maintenance treatment for chronic, refractory opioid dependence.

10.
11.

Background:

The risk of infection following a visit to the emergency department is unknown. We explored this risk among elderly residents of long-term care facilities.

Methods:

We compared the rates of new respiratory and gastrointestinal infections among elderly residents (aged 65 years and older) of 22 long-term care facilities. We used standardized surveillance definitions. For each resident who visited the emergency department during the study period, we randomly selected two residents who did not visit the emergency department and matched them by facility unit, age and sex. We calculated the rates and proportions of new infections, and we used conditional logistic regression to adjust for potential confounding variables.

Results:

In total, we included 1269 residents of long-term care facilities, 424 of whom visited the emergency department during the study. The baseline characteristics of residents who did or did not visit the emergency department were similar, except for underlying health status (visited the emergency department: mean Charlson Comorbidity Index 6.1, standard deviation [SD] 2.5; did not visit the emergency department: mean Charlson Comorbidity Index 5.5, SD 2.7; p < 0.001) and the proportion who had visitors (visited the emergency department: 46.9%; did not visit the emergency department: 39.2%; p = 0.01). Overall, 21 (5.0%) residents who visited the emergency department and 17 (2.0%) who did not visit the emergency department acquired new infections. The incidence of new infections was 8.3/1000 patient-days among those who visited the emergency department and 3.4/1000 patient-days among those who did not visit the emergency department. The adjusted odds ratio for the risk of infection following a visit to the emergency department was 3.9 (95% confidence interval 1.4–10.8).
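For illustration, the unadjusted contrast can be recovered from the counts reported above (21 of 424 residents who visited the emergency department, 17 of 845 who did not). The adjusted estimate of 3.9 came from conditional logistic regression on the matched sets, which this sketch does not reproduce:

```python
import math

def odds_ratio(cases_exp, noncases_exp, cases_unexp, noncases_unexp):
    """Unadjusted odds ratio with a 95% Woolf (log-normal) confidence interval."""
    a, b, c, d = cases_exp, noncases_exp, cases_unexp, noncases_unexp
    estimate = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(estimate) - 1.96 * se)
    hi = math.exp(math.log(estimate) + 1.96 * se)
    return estimate, (lo, hi)

# Counts from the abstract: 21 of 424 residents who visited the
# emergency department, and 17 of 845 who did not, acquired infections.
or_unadj, ci = odds_ratio(21, 424 - 21, 17, 845 - 17)   # roughly 2.5
```

The gap between the crude odds ratio (~2.5) and the adjusted one (3.9) reflects the matching and the confounders adjusted for in the regression.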

Interpretation:

A visit to the emergency department was associated with more than a threefold increased risk of acute infection among elderly people. Additional precautions should be considered for residents following a visit to the emergency department.

Infections associated with health care are an important health risk. A recent survey by the World Health Organization reported that 8.7% of patients in hospital developed such infections.1,2 Health care–associated infections are the third leading cause of death in the United States, with over 100 000 people dying from them each year.3 In Canada, a point-prevalence survey found that 11.6% of adults in hospital experience a health care–associated infection.4

Little attention has been paid to infections acquired in other health care settings. Visiting an emergency department has been identified as a risk for disease during outbreaks of measles5,6 and SARS,7,8 but little is known about the potential risk of endemic infection from exposure in this setting. A visit to the emergency department differs from a stay in hospital: exposure to and duration of contact with other patients are shorter, but the number and density of patients with acute illness with whom there could be contact is higher.

Elderly residents of long-term care facilities are likely to be at the greatest risk of morbidity and mortality from communicable diseases acquired in the emergency department. When residents are transferred to the emergency department for assessment, they are likely to have longer stays and to be cared for in multibed observation areas and corridors.9 If they acquire an infection while in the emergency department, these residents may be the source of an outbreak upon return to their facility; this can lead to increases in workload and costs.
A Canadian study estimated the cost of an influenza outbreak to be over $6000 per 30-day period, with an estimated incidence of death of 0.75/100 residents during the same period.10 In this study, we explored the risk of acute respiratory and gastrointestinal infection associated with a visit to the emergency department among elderly residents of long-term care facilities.

12.

Background:

Use of the serum creatinine concentration, the most widely used marker of kidney function, has been associated with under-reporting of chronic kidney disease and late referral to nephrologists, especially among women and elderly people. To improve appropriateness of referrals, automatic reporting of the estimated glomerular filtration rate (eGFR) by laboratories was introduced in the province of Ontario, Canada, in March 2006. We hypothesized that such reporting, along with an ad hoc educational component for primary care physicians, would increase the number of appropriate referrals.

Methods:

We conducted a population-based before–after study with interrupted time-series analysis at a tertiary care centre. All referrals to nephrologists received at the centre during the year before and the year after automatic reporting of the eGFR was introduced were eligible for inclusion. We used regression analysis with autoregressive errors to evaluate whether such reporting by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.

Results:

A total of 2672 patients were included in the study. In the year after automatic reporting began, the number of referrals from primary care physicians increased by 80.6% (95% confidence interval [CI] 74.8% to 86.9%). The number of appropriate referrals increased by 43.2% (95% CI 38.0% to 48.2%). There was no significant change in the proportion of appropriate referrals between the two periods (−2.8%, 95% CI −26.4% to 43.4%). The proportion of elderly and female patients who were referred increased after reporting was introduced.

Interpretation:

The total number of referrals increased after automatic reporting of the eGFR began, especially among women and elderly people. The number of appropriate referrals also increased, but the proportion of appropriate referrals did not change significantly. Future research should be directed to understanding the reasons for inappropriate referral and to developing novel interventions for improving the referral process.

Until recently, the serum creatinine concentration was used universally as an index of the glomerular filtration rate (GFR) to identify and monitor chronic kidney disease.1 The serum creatinine concentration depends on several factors, the most important being muscle mass.1 Women as compared with men, and elderly people as compared with young adults, tend to have lower muscle mass for the same degree of kidney function and thus have lower serum creatinine concentrations.2,3 Consequently, the use of the serum creatinine concentration is associated with underrecognition of chronic kidney disease, delayed workup for chronic kidney disease and late referral to nephrologists, particularly among women and elderly people. Late referral has been associated with increased mortality among patients receiving dialysis.311

In 1999, the Modification of Diet in Renal Disease formula was introduced to calculate the estimated GFR (eGFR).12,13 This formula uses the patient’s serum creatinine concentration, age, sex and race (whether the patient is black or not). All of these variables are easily available to laboratories except race. Laboratories report the eGFR for non-black people, with advice to practitioners to multiply the result by 1.21 if their patient is black.
Given that reporting of the eGFR markedly improves detection of chronic kidney disease,14,15 several national organizations recommended that laboratories automatically calculate and report the eGFR when the serum creatinine concentration is requested.1619 These organizations also provided guidelines on appropriate referral to nephrology based on the value.

Although several studies have reported increases in referrals to nephrologists after automatic reporting of the eGFR was introduced,2026 there is limited evidence on the impact that such reporting has had on the appropriateness of referrals. An increase in the number of inappropriate referrals would affect health care delivery, diverting scarce resources to the evaluation of relatively mild kidney disease. It also would likely increase wait times for all nephrology referrals and have a financial impact on the system because specialist care is more costly than primary care.

We conducted a study to evaluate whether the introduction of automatic reporting of the eGFR by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.
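The 4-variable formula described above can be sketched as follows. The coefficients shown are the commonly used IDMS-traceable MDRD values, an assumption on my part since the abstract does not list them; serum creatinine is in mg/dL:

```python
def egfr_mdrd(scr_mg_dl, age_years, female):
    """Estimated GFR (mL/min/1.73 m^2) for a non-black patient from the
    4-variable MDRD formula, as laboratories report it; per the advice
    above, the result is multiplied by ~1.21 for black patients.
    Coefficients are the IDMS-traceable values (an assumption)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    return egfr

# A 70-year-old woman with a serum creatinine of 1.2 mg/dL:
value = egfr_mdrd(1.2, 70, female=True)
```

The same creatinine value maps to a much higher eGFR for a younger man, which is precisely why creatinine alone under-recognizes kidney disease in women and elderly people.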

13.

Background:

Moderate alcohol consumption may reduce cardiovascular events, but little is known about its effect on atrial fibrillation in people at high risk of such events. We examined the association between moderate alcohol consumption and the risk of incident atrial fibrillation among older adults with existing cardiovascular disease or diabetes.

Methods:

We analyzed data for 30 433 adults who participated in 2 large antihypertensive drug treatment trials and who had no atrial fibrillation at baseline. The patients were 55 years or older and had a history of cardiovascular disease or diabetes with end-organ damage. We classified levels of alcohol consumption according to median cut-off values for low, moderate and high intake based on guidelines used in various countries, and we defined binge drinking as more than 5 drinks a day. The primary outcome measure was incident atrial fibrillation.

Results:

A total of 2093 patients had incident atrial fibrillation. The age- and sex-standardized incidence rate per 1000 person-years was 14.5 among those with a low level of alcohol consumption, 17.3 among those with a moderate level and 20.8 among those with a high level. Compared with participants who had a low level of consumption, those with higher levels had an increased risk of incident atrial fibrillation (adjusted hazard ratio [HR] 1.14, 95% confidence interval [CI] 1.04–1.26, for moderate consumption; 1.32, 95% CI 0.97–1.80, for high consumption). Results were similar after we excluded binge drinkers. Among those with moderate alcohol consumption, binge drinkers had an increased risk of atrial fibrillation compared with non–binge drinkers (adjusted HR 1.29, 95% CI 1.02–1.62).
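Age- and sex-standardized rates such as those above are typically obtained by direct standardization: stratum-specific rates are averaged using a common standard population's weights. A minimal sketch with hypothetical strata (not the study's data):

```python
def standardized_rate(strata):
    """Direct standardization: weight each stratum's event rate by the
    stratum's share of a standard population.
    strata: iterable of (events, person_years, standard_weight)."""
    total_w = sum(w for _, _, w in strata)
    return sum((e / py) * (w / total_w) for e, py, w in strata)

# Hypothetical strata: (events, person-years, standard-population weight).
strata = [
    (30, 2000, 0.5),   # e.g., men 55-64
    (45, 1800, 0.3),   # e.g., men 65+
    (10, 1500, 0.2),   # e.g., women 55+
]
rate_per_1000_py = standardized_rate(strata) * 1000
```

Standardizing this way lets groups with different age and sex mixes (here, the three consumption levels) be compared on a single incidence scale.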

Interpretation:

Moderate to high alcohol intake was associated with an increased incidence of atrial fibrillation among people aged 55 or older with cardiovascular disease or diabetes. Among moderate drinkers, the effect of binge drinking on the risk of atrial fibrillation was similar to that of habitual heavy drinking.

Atrial fibrillation is associated with an increased risk of stroke and a related high burden of mortality and morbidity, both in the general public and among patients with existing cardiovascular disease.1,2 The prevalence of atrial fibrillation increases steadily with age, as do the associated risks, and atrial fibrillation accounts for up to 23.5% of all strokes among elderly people.3

Moderate alcohol consumption has been reported to be associated with a reduced risk of cardiovascular disease and all-cause death,1,2 whereas heavy alcohol intake and binge drinking have been associated with an increased risk of stroke,4 cardiovascular disease and all-cause death.5,6 Similarly, heavy drinking and binge drinking are associated with an increased risk of incident atrial fibrillation in the general population.7 However, the association between moderate drinking and incident atrial fibrillation is less consistent and not well understood among older people with existing cardiovascular disease.

In this analysis, we examined whether drinking moderate quantities of alcohol, and binge drinking, would be associated with an increased risk of incident atrial fibrillation in a large cohort of people with existing cardiovascular disease or diabetes with end-organ damage who had been followed prospectively in 2 long-term antihypertensive drug treatment trials.

14.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.26 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.79

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease.
These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.2326 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.

15.
Manea L, Gilbody S, McMillan D. CMAJ 2012;184(3):E191–E196

Background:

The brief Patient Health Questionnaire (PHQ-9) is commonly used to screen for depression with 10 often recommended as the cut-off score. We summarized the psychometric properties of the PHQ-9 across a range of studies and cut-off scores to select the optimal cut-off for detecting depression.

Methods:

We searched Embase, MEDLINE and PsycINFO from 1999 to August 2010 for studies that reported the diagnostic accuracy of the PHQ-9 to diagnose major depressive disorders. We calculated summary sensitivity, specificity, likelihood ratios and diagnostic odds ratios for detecting major depressive disorder at different cut-off scores and in different settings. We used random-effects bivariate meta-analysis at cut-off points between 7 and 15 to produce summary receiver operating characteristic curves.

Results:

We identified 18 validation studies (n = 7180) conducted in various clinical settings. Eleven studies provided details about the diagnostic properties of the questionnaire at more than one cut-off score (including 10), four studies reported a cut-off score of 10, and three studies reported cut-off scores other than 10. The pooled specificity results ranged from 0.73 (95% confidence interval [CI] 0.63–0.82) for a cut-off score of 7 to 0.96 (95% CI 0.94–0.97) for a cut-off score of 15. There was major variability in sensitivity for cut-off scores between 7 and 15. There were no substantial differences in the pooled sensitivity and specificity for a range of cut-off scores (8–11).
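Each validation study contributes a 2×2 table at a given cut-off score, from which sensitivity, specificity, likelihood ratios and the diagnostic odds ratio are derived before pooling. A sketch with a hypothetical table (not one of the 18 studies):

```python
def diagnostic_props(tp, fp, fn, tn):
    """Sensitivity, specificity, likelihood ratios and diagnostic odds
    ratio from one 2x2 screening table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)    # odds multiplier for a positive result
    lr_neg = (1 - sens) / spec    # odds multiplier for a negative result
    dor = lr_pos / lr_neg         # equals (tp * tn) / (fp * fn)
    return sens, spec, lr_pos, lr_neg, dor

# Hypothetical study: 100 depressed and 100 non-depressed patients
# classified by a PHQ-9 cut-off of 10.
sens, spec, lr_pos, lr_neg, dor = diagnostic_props(tp=80, fp=10, fn=20, tn=90)
```

Raising the cut-off trades sensitivity for specificity, which is why the pooled specificity above climbs from 0.73 at a cut-off of 7 to 0.96 at 15.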

Interpretation:

The PHQ-9 was found to have acceptable diagnostic properties for detecting major depressive disorder for cut-off scores between 8 and 11. Authors of future validation studies should consistently report the outcomes for different cut-off scores.

Depressive disorders are still under-recognized in medical settings despite major associated disability and costs. The use of short screening questionnaires may improve the recognition of depression in different medical settings.1 The depression module of the Patient Health Questionnaire (PHQ-9) has become increasingly popular in research and practice over the past decade.2 In its initial validation study, a score of 10 or higher had a sensitivity of 88% and a specificity of 88% for detecting major depressive disorders. Thus, a score of 10 has been recommended as the cut-off score for diagnosing this condition.3

In a recent review of the PHQ-9, Kroenke and colleagues argued against inflexible adherence to a single cut-off score.2 A recent analysis of the management of depression in general practice in the United Kingdom showed that the accuracy of predicting major depressive disorder could be improved by using 12 as the cut-off score.4

Given the widespread use of PHQ-9 in screening for depression and that certain cut-off scores are being recommended as part of national strategies to screen for depression (based on initial validation studies, which might not be generalizable),4,5 we attempted to determine whether the cut-off of 10 is optimum for screening for depression. This question could not be answered by two previous systematic reviews6,7 because of the small number of primary studies available at the time. We also aimed to provide greater clarity about the proper use of PHQ-9 given the many settings in which it is used.

16.

Background:

Falls cause more than 60% of head injuries in older adults. Lack of objective evidence on the circumstances of these events is a barrier to prevention. We analyzed video footage to determine the frequency of and risk factors for head impact during falls in older adults in 2 long-term care facilities.

Methods:

Over 39 months, we captured on video 227 falls involving 133 residents. We used a validated questionnaire to analyze the mechanisms of each fall. We then examined whether the probability for head impact was associated with upper-limb protective responses (hand impact) and fall direction.

Results:

Head impact occurred in 37% of falls, usually onto a vinyl or linoleum floor. Hand impact occurred in 74% of falls but had no significant effect on the probability of head impact (p = 0.3). An increased probability of head impact was associated with a forward initial fall direction, compared with backward falls (odds ratio [OR] 2.7, 95% confidence interval [CI] 1.3–5.9) or sideways falls (OR 2.8, 95% CI 1.2–6.3). In 36% of sideways falls, residents rotated to land backwards, which reduced the probability of head impact (OR 0.2, 95% CI 0.04–0.8).

Interpretation:

Head impact was common in observed falls in older adults living in long-term care facilities, particularly in forward falls. Backward rotation during descent appeared to be protective, but hand impact was not. Attention to upper-limb strength and teaching rotational falling techniques (as in martial arts training) may reduce fall-related head injuries in older adults.

Falls from standing height or lower are the cause of more than 60% of hospital admissions for traumatic brain injury in adults older than 65 years.15 Traumatic brain injury accounts for 32% of hospital admissions and more than 50% of deaths from falls in older adults.1,68 Furthermore, the incidence and age-adjusted rate of fall-related traumatic brain injury is increasing,1,9 especially among people older than 80 years, among whom rates have increased threefold over the past 30 years.10 One-quarter of fall-related traumatic brain injuries in older adults occur in long-term care facilities.1

The development of improved strategies to prevent fall-related traumatic brain injuries is an important but challenging task.
About 60% of residents in long-term care facilities fall at least once per year,11 and falls result from complex interactions of physiologic, environmental and situational factors.1216 Any fall from standing height has sufficient energy to cause brain injury if direct impact occurs between the head and a rigid floor surface.1719 Improved understanding is needed of the factors that separate falls that result in head impact and injury from those that do not.1,10 Falls in young adults rarely result in head impact, owing to protective responses such as use of the upper limbs to stop the fall, trunk flexion and rotation during descent.2023 We have limited evidence of the efficacy of protective responses to falls among older adults.

In the current study, we analyzed video footage of real-life falls among older adults to estimate the prevalence of head impact from falls, and to examine the association between head impact and biomechanical and situational factors.

17.

Background:

Patients with type 2 diabetes have a 40% increased risk of bladder cancer. Thiazolidinediones, especially pioglitazone, may increase the risk. We conducted a systematic review and meta-analysis to evaluate the risk of bladder cancer among adults with type 2 diabetes taking thiazolidinediones.

Methods:

We searched key biomedical databases (including MEDLINE, Embase and Scopus) and sources of grey literature from inception through March 2012 for published and unpublished studies, without language restrictions. We included randomized controlled trials (RCTs), cohort studies and case–control studies that reported incident bladder cancer among people with type 2 diabetes who ever (v. never) were exposed to pioglitazone (main outcome), rosiglitazone or any thiazolidinedione.

Results:

Of the 1787 studies identified, we selected 4 RCTs, 5 cohort studies and 1 case–control study. The total number of patients was 2 657 365, of whom 3643 had newly diagnosed bladder cancer, for an overall incidence of 53.1 per 100 000 person-years. The one RCT that reported on pioglitazone use found no significant association with bladder cancer (risk ratio [RR] 2.36, 95% confidence interval [CI] 0.91–6.13). The cohort studies of thiazolidinediones (pooled RR 1.15, 95% CI 1.04–1.26; I2 = 0%) and of pioglitazone specifically (pooled RR 1.22, 95% CI 1.07–1.39; I2 = 0%) showed significant associations with bladder cancer. No significant association with bladder cancer was observed in the two RCTs that evaluated rosiglitazone use (pooled RR 0.87, 95% CI 0.34–2.23; I2 = 0%).
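Pooled estimates like these are conventionally obtained by inverse-variance weighting of log risk ratios; with I² = 0%, a fixed-effect sketch suffices for illustration (the study values below are hypothetical, not the ones pooled in the review):

```python
import math

def pool_log_rr(estimates):
    """Fixed-effect inverse-variance pooling of risk ratios.
    estimates: list of (rr, ci_lo, ci_hi); the SE of log RR is
    recovered from the 95% CI width."""
    logs, weights = [], []
    for rr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        logs.append(math.log(rr))
        weights.append(1 / se ** 2)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical cohort-study risk ratios (RR, 95% CI lower, upper):
studies = [(1.10, 0.95, 1.28), (1.22, 1.05, 1.43), (1.18, 0.90, 1.55)]
rr, lo, hi = pool_log_rr(studies)
```

Working on the log scale keeps the ratio symmetric around 1; a random-effects version would additionally add a between-study variance term to each weight.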

Interpretation:

The limited evidence available supports the hypothesis that thiazolidinediones, particularly pioglitazone, are associated with an increased risk of bladder cancer among adults with type 2 diabetes.

People with type 2 diabetes are at increased risk of several types of cancer, including a 40% increased risk of bladder cancer, compared with those without diabetes.1,2 The strong association with bladder cancer is hypothesized to be a result of hyperinsulinemia, whereby elevated insulin levels in type 2 diabetes stimulate insulin receptors on neoplastic cells, promoting cancer growth and division.1,35 Additional risk factors for bladder cancer include increased age, male sex, smoking, occupational and environmental exposures and urinary tract disease.6 Exogenous insulin and other glucose-lowering medications, such as sulfonylureas, metformin and thiazolidinediones, may further modify the risk of bladder cancer.1

Data from the placebo-controlled PROactive trial of pioglitazone (PROspective pioglitAzone Clinical Trial in macroVascular Events) suggested a higher incidence of bladder cancer among pioglitazone users than among controls.7 Subsequent randomized controlled trials (RCTs) and observational studies have reported conflicting results for pioglitazone, with various studies reporting a significant increase,8,9 a nonsignificant increase10 and even a decreased risk11 of bladder cancer.

To test the hypothesis that pioglitazone use is associated with an increased risk of bladder cancer, we conducted a systematic review and meta-analysis of RCTs and observational studies reporting bladder cancer among adults with type 2 diabetes taking pioglitazone. To clarify the possibility of a drug-class effect, we also examined data for all thiazolidinediones and for rosiglitazone alone.

18.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ 2011;183(18):E1334–E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.25 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to secondhand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.611 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.611 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.1416 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the
effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

19.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain.

Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences for patients’ daily living.1–3 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients after these surgeries.5–23 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data-collection methods and duration of follow-up.1–3,24

Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications and level of opioid consumption after surgery.5–7,9,13,14,16–19,21–23,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29

We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.
