Similar articles (20 results)
1.

Background:

The role of atrial fibrillation in cognitive impairment and dementia, independent of stroke, is uncertain. We sought to determine the association of atrial fibrillation with cognitive and physical impairment in a large group of patients at high cardiovascular risk.

Methods:

We conducted a post-hoc analysis of two randomized controlled trials involving 31 546 patients, the aims of which were to evaluate the efficacy of treatment with ramipril plus telmisartan (ONTARGET) or telmisartan alone (TRANSCEND) in reducing cardiovascular disease. We evaluated the cognitive function of participants at baseline and after two and five years using the Mini–Mental State Examination (MMSE). In addition, we recorded incident dementia, loss of independence in activities of daily living and admission to long-term care facilities. We used a Cox regression model adjusting for main confounders to determine the association between atrial fibrillation and our primary outcomes: a decrease of three or more points in MMSE score, incident dementia, loss of independence in performing activities of daily living and admission to long-term care.

Results:

We enrolled 31 506 participants for whom complete information on atrial fibrillation was available, 70.4% of whom were men. The mean age of participants was 66.5 years, and the mean baseline MMSE score was 27.7 (standard deviation 2.9) points. At baseline, 1016 participants (3.3%) had atrial fibrillation, with the condition developing in an additional 2052 participants (6.5%) during a median follow-up of 56 months. Atrial fibrillation was associated with an increased risk of cognitive decline (hazard ratio [HR] 1.14, 95% confidence interval [CI] 1.03–1.26), new dementia (HR 1.30, 95% CI 1.14–1.49), loss of independence in performing activities of daily living (HR 1.35, 95% CI 1.19–1.54) and admission to long-term care facilities (HR 1.53, 95% CI 1.31–1.79). Results were consistent among participants with and without stroke or receiving antihypertensive drugs.

Interpretation:

Cognitive and functional decline are important consequences of atrial fibrillation, even in the absence of overt stroke.

Atrial fibrillation is an important and modifiable cause of ischemic stroke, which may result in considerable physical and cognitive disability.1 In addition, atrial fibrillation is associated with an increased risk of covert cerebral infarction, which is reported in about one-quarter of patients with atrial fibrillation who undergo magnetic resonance imaging of the brain.2 Thus, atrial fibrillation may be an important determinant of cognitive and functional decline, even in the absence of clinical ischemic stroke. However, previous epidemiologic studies evaluating atrial fibrillation’s association with cognitive function have been inconsistent,3–13 and very few have evaluated its association with functional outcomes.14 A recent systematic review showed convincing evidence of an association between atrial fibrillation and dementia in patients with a history of stroke, but it concluded that there was considerable uncertainty about a link between atrial fibrillation and dementia in patients with no history of stroke.15 Large prospective cohort studies are required to determine the true association between atrial fibrillation and cognitive outcomes. In this study, we sought to determine the prospective association between atrial fibrillation and cognitive decline, loss of independence in activities of daily living and admission to long-term care facilities, using data from the large group of patients included in the ONTARGET and TRANSCEND trials.16,17

2.

Background:

Brief interventions delivered by family physicians to address excessive alcohol use among adult patients are effective. We conducted a study to determine whether such an intervention would be similarly effective in reducing binge drinking and excessive cannabis use among young people.

Methods:

We conducted a cluster randomized controlled trial involving 33 family physicians in Switzerland. Physicians in the intervention group received training in delivering a brief intervention to young people during the consultation in addition to usual care. Physicians in the control group delivered usual care only. Consecutive patients aged 15–24 years were recruited from each practice and, before the consultation, completed a confidential questionnaire about their general health and substance use. Patients were followed up at 3, 6 and 12 months after the consultation. The primary outcome measure was self-reported excessive substance use (≥ 1 episode of binge drinking, or ≥ 1 joint of cannabis per week, or both) in the past 30 days.

Results:

Of the 33 participating physicians, 17 were randomly allocated to the intervention group and 16 to the control group. Of the 594 participating patients, 279 (47.0%) identified themselves as binge drinkers or excessive cannabis users, or both, at baseline. Excessive substance use did not differ significantly between patients whose physicians were in the intervention group and those whose physicians were in the control group at any of the follow-up points (odds ratio [OR] and 95% confidence interval [CI] at 3 months: 0.9 [0.6–1.4]; at 6 mo: 1.0 [0.6–1.6]; and at 12 mo: 1.1 [0.7–1.8]). The differences between groups were also nonsignificant after we restricted the analysis to patients who reported excessive substance use at baseline (OR 1.6, 95% CI 0.9–2.8, at 3 mo; OR 1.7, 95% CI 0.9–3.2, at 6 mo; and OR 1.9, 95% CI 0.9–4.0, at 12 mo).

Interpretation:

Training family physicians to use a brief intervention to address excessive substance use among young people was not effective in reducing binge drinking and excessive cannabis use in this patient population. Trial registration: Australian New Zealand Clinical Trials Registry, no. ACTRN12608000432314.

Most health-compromising behaviours begin in adolescence.1 Interventions to address these behaviours early are likely to bring long-lasting benefits.2 Harmful use of alcohol is a leading factor associated with premature death and disability worldwide, with a disproportionally high impact on young people (aged 10–24 yr).3,4 Similarly, early cannabis use can have adverse consequences that extend into adulthood.5–8 In adolescence and early adulthood, binge drinking on at least a monthly basis is associated with an increased risk of adverse outcomes later in life.9–12 Although any cannabis use is potentially harmful, weekly use represents a threshold in adolescence related to an increased risk of cannabis (and tobacco) dependence in adulthood.13 Binge drinking affects 30%–50% and excessive cannabis use about 10% of the adolescent and young adult population in Europe and the United States.10,14,15 Reducing substance-related harm involves multisectoral approaches, including promotion of healthy child and adolescent development, regulatory policies and early treatment interventions.16 Family physicians can add to the public health messages by personalizing their content within brief interventions.17,18 There is evidence that brief interventions can encourage young people to reduce substance use, yet most studies have been conducted in community settings (mainly educational), emergency services or specialized addiction clinics.1,16 Studies aimed at adult populations have shown favourable effects of brief alcohol interventions, and to some extent brief cannabis interventions, in primary care.19–22 These interventions have been recommended for adolescent populations.4,5,16 Yet young people have different modes of substance use and communication styles that may limit the extent to which evidence from adult studies can apply to them. Recently, a systematic review of brief interventions to reduce alcohol use in adolescents identified only 1 randomized controlled trial in primary care.23 The tested intervention, which was not provided by family physicians but involved audio self-assessment, was ineffective in reducing alcohol use in exposed adolescents.24 Sanci and colleagues showed that training family physicians to address health-risk behaviours among adolescents was effective in improving provider performance, but the extent to which this translates into improved outcomes remains unknown.25,26 Two nonrandomized studies suggested that screening for substance use and brief advice by family physicians could favour reduced alcohol and cannabis use among adolescents,27,28 but evidence from randomized trials is lacking.29 We conducted the PRISM-Ado (Primary care Intervention Addressing Substance Misuse in Adolescents) trial, a cluster randomized controlled trial of the effectiveness of training family physicians to deliver a brief intervention to address binge drinking and excessive cannabis use among young people.

3.

Background:

There have been several published reports of inflammatory ocular adverse events, mainly uveitis and scleritis, among patients taking oral bisphosphonates. We examined the risk of these adverse events in a pharmacoepidemiologic cohort study.

Methods:

We conducted a retrospective cohort study involving residents of British Columbia who had visited an ophthalmologist from 2000 to 2007. Within the cohort, we identified all people who were first-time users of oral bisphosphonates and who were followed to the first inflammatory ocular adverse event, death, termination of insurance or the end of the study period. We defined an inflammatory ocular adverse event as scleritis or uveitis. We used a Cox proportional hazards model to determine the adjusted rate ratios. As a sensitivity analysis, we performed a propensity-score–adjusted analysis.

Results:

The cohort comprised 934 147 people, including 10 827 first-time users of bisphosphonates and 923 320 nonusers. The incidence rate among first-time users was 29/10 000 person-years for uveitis and 63/10 000 person-years for scleritis. In contrast, the incidence among people who did not use oral bisphosphonates was 20/10 000 person-years for uveitis and 36/10 000 for scleritis (number needed to harm: 1100 and 370, respectively). First-time users had an elevated risk of uveitis (adjusted relative risk [RR] 1.45, 95% confidence interval [CI] 1.25–1.68) and scleritis (adjusted RR 1.51, 95% CI 1.34–1.68). The propensity-score–adjusted analysis did not materially change the results (uveitis: RR 1.50, 95% CI 1.29–1.73; scleritis: RR 1.53, 95% CI 1.39–1.70).
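The number-needed-to-harm figures follow directly from the reported incidence rates; a minimal sketch of the arithmetic (rates taken from the abstract; the helper function is illustrative, not part of the study):

```python
# Number needed to harm (NNH) from incidence rates per 10 000 person-years,
# as reported in the abstract (illustrative arithmetic only).

def nnh(rate_exposed, rate_unexposed, denominator=10_000):
    """NNH is the reciprocal of the absolute rate difference."""
    risk_difference = (rate_exposed - rate_unexposed) / denominator
    return 1 / risk_difference

nnh_uveitis = nnh(29, 20)    # ≈ 1111, reported rounded to 1100
nnh_scleritis = nnh(63, 36)  # ≈ 370
print(round(nnh_uveitis), round(nnh_scleritis))
```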

Interpretation:

People using oral bisphosphonates for the first time may be at higher risk of scleritis and uveitis than people with no bisphosphonate use. Patients taking bisphosphonates should be familiar with the signs and symptoms of these conditions, so that they can immediately seek assessment by an ophthalmologist.

Oral bisphosphonates are the most frequently prescribed class of medications for the prevention of osteoporosis. Most literature about the safety of bisphosphonates has focused on long-term adverse events, including atypical fractures,1 atrial fibrillation2 and esophageal and colon cancer.3 Uveitis and scleritis are ocular inflammatory diseases that are associated with major morbidity. Anterior uveitis is the most common type of uveitis, with an estimated incidence of 11.4–100.0 cases/100 000 person-years.4,5 Both diseases require immediate treatment to prevent further complications, which may include cataracts, glaucoma, macular edema and scleral perforation. Numerous case reports and case series have described an association between the use of oral bisphosphonates and anterior uveitis6–8 and scleritis.8,9 In most reported cases, severe eye pain developed within days of taking an oral bisphosphonate, and the symptom resolved after the agent was stopped.6,9 Only one large epidemiologic study has examined the association between the use of bisphosphonates and ocular inflammatory diseases.10 That study did not find an association, but it was limited by a small number of events and a lack of power. Thus, the association between uveitis or scleritis and the use of oral bisphosphonates is not fully known. Given that early intervention may prevent complications, we performed a pharmacoepidemiologic study to assess the risk of these potentially serious conditions.

4.

Background:

Uncircumcised boys are at higher risk for urinary tract infections than circumcised boys. Whether this risk varies with the visibility of the urethral meatus is not known. Our aim was to determine whether there is a hierarchy of risk among uncircumcised boys whose urethral meatuses are visible to differing degrees.

Methods:

We conducted a prospective cross-sectional study in one pediatric emergency department. We screened 440 circumcised and uncircumcised boys. Of these, 393 boys who were not toilet trained and for whom the treating physician had requested a catheter urine culture were included in our analysis. At the time of catheter insertion, a nurse characterized the visibility of the urethral meatus (phimosis) using a 3-point scale (completely visible, partially visible or nonvisible). Our primary outcome was urinary tract infection, and our primary exposure variable was the degree of phimosis: completely visible versus partially or nonvisible urethral meatus.

Results:

Urine cultures were positive for 30.0% of uncircumcised boys with a completely visible meatus and for 23.8% of those with a partially or nonvisible meatus (p = 0.4). The unadjusted odds ratio (OR) for culture growth was 0.73 (95% confidence interval [CI] 0.35–1.52), and the adjusted OR was 0.41 (95% CI 0.17–0.95). Of the boys who were circumcised, 4.8% had urinary tract infections, which was significantly lower than the rate among uncircumcised boys with a completely visible urethral meatus (unadjusted OR 0.12 [95% CI 0.04–0.39], adjusted OR 0.07 [95% CI 0.02–0.26]).

Interpretation:

We did not see variation in the risk of urinary tract infection with the visibility of the urethral meatus among uncircumcised boys. Compared with circumcised boys, we saw a higher risk of urinary tract infection in uncircumcised boys, irrespective of urethral visibility.

Urinary tract infections are one of the most common serious bacterial infections in young children.1–6 Prompt diagnosis is important, because children with urinary tract infection are at risk for bacteremia6 and renal scarring.1,7 Uncircumcised boys have a much higher risk of urinary tract infection than circumcised boys,1,3,4,6,8–12 likely as a result of heavier colonization under the foreskin with pathogenic bacteria, which leads to ascending infections.13,14 The American Academy of Pediatrics recently suggested that circumcision status be used to select which boys should be evaluated for urinary tract infection.1 However, whether all uncircumcised boys are at equal risk for infection, or whether the risk varies with the visibility of the urethral opening, is not known. It has been suggested that the subset of uncircumcised boys with a poorly visible urethral opening is at increased risk of urinary tract infection,15–17 leading some experts to consider giving children with tight foreskins topical cortisone or circumcision to prevent urinary tract infections.13,18–21 We designed a study to challenge the opinion that all uncircumcised boys are at increased risk for urinary tract infections. We hypothesized a hierarchy of risk among uncircumcised boys depending on the visibility of the urethral meatus, with those with a partially or nonvisible meatus at highest risk and those with a completely visible meatus having a level of risk similar to that of circumcised boys. Our primary aim was to compare the proportion of urinary tract infections among uncircumcised boys with a completely visible meatus with the proportion among those with a partially or nonvisible meatus.

5.

Background:

Although guidelines advise titration of palliative sedation at the end of life, in practice the depth of sedation can range from mild to deep. We investigated physicians’ considerations about the depth of continuous sedation.

Methods:

We performed a qualitative study in which 54 physicians underwent semistructured interviewing about the last patient for whom they had been responsible for providing continuous palliative sedation. We also asked about their practices and general attitudes toward sedation.

Results:

We found two approaches toward the depth of continuous sedation: starting with mild sedation and only increasing the depth if necessary, and deep sedation right from the start. Physicians described similar determinants for both approaches, including titration of sedatives to the relief of refractory symptoms, patient preferences, wishes of relatives, expert advice and esthetic consequences of the sedation. However, physicians who preferred starting with mild sedation emphasized being guided by the patient’s condition and response, and physicians who preferred starting with deep sedation emphasized ensuring that relief of suffering would be maintained. Physicians who preferred each approach also expressed different perspectives about whether patient communication was important and whether waking up after sedation is started was problematic.

Interpretation:

Physicians who choose either mild or deep sedation appear to be guided by the same objective of delivering sedation in proportion to the relief of refractory symptoms, as well as to other needs of patients and their families. This suggests that proportionality should be seen as a multidimensional notion that can result in different approaches toward the depth of sedation.

Palliative sedation is considered an appropriate option when other treatments fail to relieve suffering in dying patients.1,2 Important questions are associated with this intervention, such as how deep the sedation must be to relieve suffering and how important it is for patients and their families that the patient maintain a certain level of consciousness.1 In the national guidelines for the Netherlands, palliative sedation is defined as “the intentional lowering of consciousness of a patient in the last phase of life.”3,4 Sedatives can be administered intermittently or continuously, and the depth of palliative sedation can range from mild to deep.1,5 Continuous deep sedation until death is considered the most far-reaching and controversial type of palliative sedation. Nevertheless, it is used frequently: comparable nationwide studies in Europe show frequencies of 2.5% to 16% of all deaths.6–8 An important reason why continuous deep sedation is thought of as controversial is the possible association of this practice with the hastening of death,9–11 although it is also argued that palliative sedation does not shorten life when its use is restricted to the patient’s last days of life.12,13 Guidelines for palliative sedation often advise physicians to titrate sedatives,2,3,14 which means that the dosages of sedatives are adjusted to the level needed for proper relief of symptoms. To date, research has predominantly focused on the indications for and types of medications used for sedation. In this study, we investigated how physicians decide on the depth of continuous palliative sedation and how these decisions relate to guidelines.

6.

Background:

Whether the risk of cancer is increased among patients with herpes zoster is unclear. We investigated the risk of cancer among patients with herpes zoster using a nationwide health registry in Taiwan.

Methods:

We identified 35 871 patients with newly diagnosed herpes zoster during 2000–2008 from the National Health Insurance Research Database in Taiwan. We analyzed the standardized incidence ratios for various types of cancer.

Results:

Among patients with herpes zoster, 895 cases of cancer were reported. Patients with herpes zoster were not at increased risk of cancer (standardized incidence ratio 0.99, 95% confidence interval 0.93–1.06). Among the subgroups stratified by sex, age and years of follow-up, there was also no increased risk of overall cancer.
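A standardized incidence ratio is simply observed cases divided by the number expected from population rates, with a Poisson-based confidence interval. A minimal sketch (the observed count is from the abstract; the expected count of roughly 904 is back-calculated from the reported SIR of 0.99 purely for illustration):

```python
import math

# Standardized incidence ratio (SIR) = observed / expected cancer cases,
# with an approximate 95% CI treating the observed count as Poisson.

def sir_with_ci(observed, expected):
    sir = observed / expected
    # Normal approximation on the log scale: SE(log SIR) ≈ 1/sqrt(observed)
    se_log = 1 / math.sqrt(observed)
    lower = sir * math.exp(-1.96 * se_log)
    upper = sir * math.exp(1.96 * se_log)
    return sir, lower, upper

# observed = 895 cancers among patients with herpes zoster (from the abstract);
# expected ≈ 904 is an assumed value back-calculated from SIR 0.99
sir, lo, hi = sir_with_ci(observed=895, expected=904)
print(f"SIR {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these inputs the sketch reproduces the reported estimate of 0.99 (95% CI 0.93–1.06).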

Interpretation:

Herpes zoster is not associated with an increased risk of cancer in the general population. These findings do not support extensive investigations for occult cancer or enhanced surveillance for cancer in patients with herpes zoster.

Herpes zoster, or shingles, is caused by reactivation of the varicella–zoster virus, a member of the Herpesviridae family. Established risk factors for herpes zoster include older age, chronic kidney disease, malignant disease and immunocompromised conditions (e.g., those experienced by patients with AIDS, transplant recipients and those taking immunosuppressive medication because of autoimmune diseases).1–5 Herpes zoster occurs more frequently among patients with cancer than among those without cancer;6,7 however, the relation between herpes zoster and the risk of subsequent cancer is not well established. In 1955, Wyburn-Mason and colleagues reported several cases of skin cancer that arose from the healed lesions of herpes zoster.8 In 1972, a retrospective cohort study and a case series reported a higher prevalence of herpes zoster among patients with cancer, especially hematologic cancer;6,7 however, they did not investigate whether herpes zoster was a risk factor for cancer. In 1982, Ragozzino and colleagues found no increased incidence of cancer (including hematologic malignancy) among patients with herpes zoster.9 There have been reports of significantly increased risk of some subtypes of cancer among patients older than 65 years with herpes zoster10 and among those admitted to hospital because of herpes zoster.11 Although these studies have suggested an association between herpes zoster and subsequent cancer, their results might not be generalizable because of differences in the severity of herpes zoster in the enrolled patients. Whether the risk of cancer is increased after herpes zoster therefore remains controversial. The published studies8–11 were nearly all conducted in western countries, and data focusing on Asian populations are lacking.12 The results from western countries may not be directly generalizable to other ethnic groups because of differences in cancer types and profiles. Recently, a study reported that herpes zoster ophthalmicus may be a marker of increased risk of cancer in the following year.13 In the present study, we investigated the incidence rate ratio of cancer, including specific types of cancer, after a diagnosis of herpes zoster.

7.

Background:

Although diacetylmorphine has been proven to be more effective than methadone maintenance treatment for opioid dependence, its direct costs are higher. We compared the cost-effectiveness of diacetylmorphine and methadone maintenance treatment for chronic opioid dependence refractory to treatment.

Methods:

We constructed a semi-Markov cohort model using data from the North American Opiate Medication Initiative trial, supplemented with administrative data for the province of British Columbia and other published data, to capture the chronic, recurrent nature of opioid dependence. We calculated incremental cost-effectiveness ratios to compare diacetylmorphine and methadone over 1-, 5-, 10-year and lifetime horizons.

Results:

Diacetylmorphine was found to be a dominant strategy over methadone maintenance treatment in each of the time horizons. Over a lifetime horizon, our model showed that people receiving methadone gained 7.46 discounted quality-adjusted life-years (QALYs) on average (95% credibility interval [CI] 6.91–8.01) and generated a societal cost of $1.14 million (95% CI $736 800–$1.78 million). Those who received diacetylmorphine gained 7.92 discounted QALYs on average (95% CI 7.32–8.53) and generated a societal cost of $1.10 million (95% CI $724 100–$1.71 million). Cost savings in the diacetylmorphine cohort were realized primarily because of reductions in the costs related to criminal activity. Probabilistic sensitivity analysis showed that the probability of diacetylmorphine being cost-effective at a willingness-to-pay threshold of $0 per QALY gained was 76%; the probability was 95% at a threshold of $100 000 per QALY gained. Results were confirmed over a range of sensitivity analyses.
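"Dominant" here has a precise meaning in cost-effectiveness analysis: the new strategy both costs less and yields more quality-adjusted life-years, so no incremental cost-effectiveness ratio (ICER) needs to be quoted. A minimal sketch of that comparison, using the lifetime point estimates from the abstract (the helper function is illustrative, not the study's model):

```python
# Incremental cost-effectiveness comparison of two strategies
# (illustrative only; the study itself used a semi-Markov cohort model).

def compare(cost_new, qaly_new, cost_old, qaly_old):
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant (cheaper and more effective)"
    if d_cost >= 0 and d_qaly <= 0:
        return "dominated"
    return f"ICER = {d_cost / d_qaly:,.0f} per QALY gained"

# Diacetylmorphine: $1.10M societal cost, 7.92 QALYs
# Methadone:        $1.14M societal cost, 7.46 QALYs
print(compare(1_100_000, 7.92, 1_140_000, 7.46))
```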

Interpretation:

Using mathematical modelling to extrapolate results from the North American Opiate Medication Initiative, we found that diacetylmorphine may be more effective and less costly than methadone among people with chronic opioid dependence refractory to treatment.

Opioid substitution with methadone is the most common treatment of opioid dependence.1–3 Participation in a methadone maintenance treatment program has been associated with decreases in illicit drug use,4 criminality5 and mortality.6,7 However, longitudinal studies have shown that most people who receive opioid substitution treatment are unable to abstain from illicit drug use for sustained periods, either switching from treatment to regular opioid use or continuing to use opioids while in treatment.8–13 An estimated 15%–25% of the most marginalized methadone clients do not benefit from treatment in terms of sustained abstention from the use of illicit opioids.14 The North American Opiate Medication Initiative was a randomized controlled trial that compared supervised, medically prescribed injectable diacetylmorphine with optimized methadone maintenance treatment in people with long-standing opioid dependence and multiple failed treatment attempts with methadone or other forms of treatment.15 The trial was conducted in two Canadian cities (Vancouver, British Columbia, and Montréal, Quebec). Both treatment protocols included a comprehensive range of psychosocial services (e.g., addiction counselling, relapse prevention, case management, and individual and group interventions) and primary care services (e.g., testing for blood-borne diseases, provision of HIV treatment, and treatment of acute and chronic physical and mental health complications of substance use) in keeping with Health Canada best practices.16 The results of the trial confirmed findings of prior studies showing diacetylmorphine to be more effective than methadone maintenance treatment in retaining opioid-dependent patients in treatment15,17–20 and in improving health and social functioning.19,21,22 Diacetylmorphine treatment has been proposed as a way to reach a specific population of people with opioid dependence refractory to treatment who are at high risk of adverse health consequences and of engaging in criminal activity to acquire illicit drugs. For guiding policy-makers, however, the North American Opiate Medication Initiative alone does not address all the important considerations for decision-making. In addition to the political challenges associated with the therapy,23 there remains concern over the direct cost of diacetylmorphine over the long term, because it can be as much as 10 times greater than that of conventional methadone maintenance treatment.21 The North American Opiate Medication Initiative was only one year in duration, but a policy to introduce diacetylmorphine might have both positive and negative longer-term implications. We therefore extrapolated outcomes from the North American Opiate Medication Initiative to estimate the long-term cost-effectiveness of diacetylmorphine versus methadone maintenance treatment for chronic, refractory opioid dependence.

8.

Background:

Although warfarin has been extensively studied in clinical trials, little is known about rates of hemorrhage attributable to its use in routine clinical practice. Our objective was to examine incident hemorrhagic events in a large population-based cohort of patients with atrial fibrillation who were starting treatment with warfarin.

Methods:

We conducted a population-based cohort study involving residents of Ontario (age ≥ 66 yr) with atrial fibrillation who started taking warfarin between Apr. 1, 1997, and Mar. 31, 2008. We defined a major hemorrhage as any visit to hospital for hemorrhage. We determined crude rates of hemorrhage during warfarin treatment, overall and stratified by CHADS2 score (congestive heart failure, hypertension, age ≥ 75 yr, diabetes mellitus and prior stroke, transient ischemic attack or thromboembolism).

Results:

We included 125 195 patients with atrial fibrillation who started treatment with warfarin during the study period. Overall, the rate of hemorrhage was 3.8% (95% confidence interval [CI] 3.8%–3.9%) per person-year. The risk of major hemorrhage was highest during the first 30 days of treatment. During this period, rates of hemorrhage were 11.8% (95% CI 11.1%–12.5%) per person-year in all patients and 16.7% (95% CI 14.3%–19.4%) per person-year among patients with a CHADS2 score of 4 or greater. Over the 5-year follow-up, 10 840 patients (8.7%) visited the hospital for hemorrhage; of these patients, 1963 (18.1%) died in hospital or within 7 days of being discharged.
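The crude proportions quoted above can be reproduced directly from the reported counts; a minimal sketch (figures taken from the abstract, illustrative arithmetic only):

```python
# Crude proportions from the warfarin cohort, as reported in the abstract.
patients = 125_195          # cohort size
hemorrhage_visits = 10_840  # hospital visits for hemorrhage over 5-yr follow-up
deaths = 1_963              # died in hospital or within 7 days of discharge

print(f"{hemorrhage_visits / patients:.1%} visited hospital for hemorrhage")
print(f"{deaths / hemorrhage_visits:.1%} of those patients died")
```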

Interpretation:

In this large cohort of older patients with atrial fibrillation, we found that rates of hemorrhage were highest within the first 30 days of warfarin therapy. These rates are considerably higher than the rates of 1%–3% reported in randomized controlled trials of warfarin therapy. Our study provides timely estimates of warfarin-related adverse events that may be useful to clinicians, patients and policy-makers as new options for treatment become available.

Atrial fibrillation is a major risk factor for stroke and systemic embolism, and strong evidence supports the use of the anticoagulant warfarin to reduce this risk.1–3 However, warfarin has a narrow therapeutic range and requires regular monitoring of the international normalized ratio to optimize its effectiveness and minimize the risk of hemorrhage.4,5 Although rates of major hemorrhage reported in trials of warfarin therapy typically range between 1% and 3% per person-year,6–11 observational studies suggest that rates may be considerably higher when warfarin is prescribed outside of a clinical trial setting,12–15 approaching 7% per person-year in some studies.13–15 The different safety profiles derived from clinical trials and observational data may reflect the careful selection of patients, precise definitions of bleeding and close monitoring in the trial setting. Furthermore, although a few observational studies suggest that hemorrhage rates are higher than generally appreciated, these studies involved small numbers of patients who received care in specialized settings.14–16 Consequently, the generalizability of their results to general practice may be limited. More information regarding hemorrhage rates during warfarin therapy is particularly important in light of the recent introduction of new oral anticoagulant agents such as dabigatran, rivaroxaban and apixaban, which may be associated with different outcome profiles.17–19 There are currently no large studies offering real-world, population-based estimates of hemorrhage rates among patients taking warfarin, which are needed for future comparisons with the new anticoagulant agents once they are widely used in routine clinical practice.20 We sought to describe the risk of incident hemorrhage in a large population-based cohort of patients with atrial fibrillation who had recently started warfarin therapy.

9.
10.

Background:

Several biomarkers of metabolic acidosis, including lower plasma bicarbonate and higher anion gap, have been associated with greater insulin resistance in cross-sectional studies. We sought to examine whether lower plasma bicarbonate is associated with the development of type 2 diabetes mellitus in a prospective study.

Methods:

We conducted a prospective, nested case–control study within the Nurses’ Health Study. Plasma bicarbonate was measured in 630 women who did not have type 2 diabetes mellitus at the time of blood draw in 1989–1990 but developed type 2 diabetes mellitus during 10 years of follow-up. Controls were matched according to age, ethnic background, fasting status and date of blood draw. We used logistic regression to calculate odds ratios (ORs) for diabetes by category of baseline plasma bicarbonate.

Results:

After adjustment for matching factors, body mass index, plasma creatinine level and history of hypertension, women with plasma bicarbonate above the median level had lower odds of diabetes (OR 0.76, 95% confidence interval [CI] 0.60–0.96) compared with women below the median level. Those in the second (OR 0.92, 95% CI 0.67–1.25), third (OR 0.70, 95% CI 0.51–0.97) and fourth (OR 0.75, 95% CI 0.54–1.05) quartiles of plasma bicarbonate had lower odds of diabetes compared with those in the lowest quartile (p for trend = 0.04). Further adjustment for C-reactive protein did not alter these findings.

Interpretation:

Higher plasma bicarbonate levels were associated with lower odds of incident type 2 diabetes mellitus among women in the Nurses’ Health Study. Further studies are needed to confirm this finding in different populations and to elucidate the mechanism for this relation.

Resistance to insulin is central to the pathogenesis of type 2 diabetes mellitus.1 Several mechanisms may lead to insulin resistance and thereby contribute to the development of type 2 diabetes mellitus, including altered fatty acid metabolism, mitochondrial dysfunction and systemic inflammation.2 Metabolic acidosis may also contribute to insulin resistance. Human studies using the euglycemic and hyperglycemic clamp techniques have shown that mild metabolic acidosis induced by the administration of ammonium chloride results in reduced tissue insulin sensitivity.3 Subsequent studies in rat models have suggested that metabolic acidosis decreases the binding of insulin to its receptors.4,5 Finally, metabolic acidosis may also increase cortisol production,6 which in turn is implicated in the development of insulin resistance.7

Recent epidemiologic studies have shown an association between clinical markers of metabolic acidosis and greater insulin resistance or prevalence of type 2 diabetes mellitus.
In the National Health and Nutrition Examination Survey, both lower serum bicarbonate and higher anion gap (even within ranges considered normal) were associated with increased insulin resistance among adults without diabetes.8 In addition, higher levels of serum lactate, a small component of the anion gap, were associated with higher odds of prevalent type 2 diabetes mellitus in the Atherosclerosis Risk in Communities study9 and with higher odds of incident type 2 diabetes mellitus in a retrospective cohort study of the risk factors for diabetes in Swedish men.10 Other biomarkers associated with metabolic acidosis, including higher levels of serum ketones,11 lower urinary citrate excretion12 and low urine pH,13 have been associated in cross-sectional studies with either insulin resistance or the prevalence of type 2 diabetes mellitus. However, it is unclear whether these associations are a cause or consequence. We sought to address this question by prospectively examining the association between plasma bicarbonate and subsequent development of type 2 diabetes mellitus in a nested case–control study within the Nurses’ Health Study.

11.

Background:

Previous studies of differences in mental health care associated with children’s sociodemographic status have focused on access to community care. We examined differences associated with visits to the emergency department.

Methods:

We conducted a 6-year population-based cohort analysis using administrative databases of visits (n = 30 656) by children aged less than 18 years (n = 20 956) in Alberta. We measured differences in the number of visits by socioeconomic and First Nations status using directly standardized rates. We examined time to return to the emergency department using a Cox regression model, and we evaluated time to follow-up with a physician by physician type using a competing risks model.
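Direct standardization, as used above for comparing visit rates, weights each stratum-specific rate by a common standard population so that groups with different age–sex structures become comparable. A minimal sketch, with hypothetical strata and rates (not the study's data):

```python
# Direct standardization: weight stratum-specific rates by a standard population.
def directly_standardized_rate(stratum_rates, standard_pop):
    """stratum_rates: events per 100 000 in each stratum of the study group;
    standard_pop: person counts of the same strata in the standard population."""
    total = sum(standard_pop)
    return sum(r * n for r, n in zip(stratum_rates, standard_pop)) / total

# Hypothetical age strata (0-4, 5-9, 10-14, 15-17 yr), rates per 100 000
rates = [500.0, 900.0, 2100.0, 5800.0]
standard = [120_000, 115_000, 110_000, 70_000]
print(directly_standardized_rate(rates, standard))
```

The result is the rate the study group would show if it had the standard population's stratum mix, which is what makes the First Nations versus non-subsidy comparisons above fair across age groups.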

Results:

First Nations children aged 15–17 years had the highest rate of visits for girls (7047 per 100 000 children) and boys (5787 per 100 000 children); children in the same age group from families not receiving government subsidy had the lowest rates (girls: 2155 per 100 000 children; boys: 1323 per 100 000 children). First Nations children (hazard ratio [HR] 1.64, 95% confidence interval [CI] 1.30–2.05) and children from families receiving government subsidies (HR 1.60, 95% CI 1.30–1.98) had a higher risk of return to an emergency department for mental health care than other children. The longest median time to follow-up with a physician was among First Nations children (79 d; 95% CI 60–91 d); this status predicted longer time to a psychiatrist (HR 0.47, 95% CI 0.32–0.70). Age, sex, diagnosis and clinical acuity also explained post-crisis use of health care.

Interpretation:

More visits to the emergency department for mental health crises were made by First Nations children and children from families receiving a subsidy. Sociodemographics predicted risk of return to the emergency department and follow-up care with a physician.

Emergency departments are a critical access point for mental health care for children who have been unable to receive care elsewhere or are in crisis.1 Care provided in an emergency department can stabilize acute problems and facilitate urgent follow-up for symptom management and family support.1,2

Race, ethnic background and socioeconomic status have been linked to crisis-oriented patterns of care among American children.3,4 Minority children are less likely than white children to have received mental health treatment before an emergency department visit,3,4 and uninsured children are less likely to receive an urgent mental health evaluation when needed.4 Other studies, however, have shown no relation between sociodemographic status and mental health care,5,6 and it may be that different health system characteristics (e.g., pay-for-service, insurance coverage, publicly funded care) interact with sociodemographic status to influence how mental health resources are used. Canadian studies are largely absent in this discussion, despite a known relation between lower income and poorer mental health status,7 nationwide documentation of disparities faced by Aboriginal children,8–10 and government-commissioned reviews that highlight deficits in universal access to mental health care.11

We undertook the current study to examine whether sociodemographic differences exist in the rates of visits to emergency departments for mental health care and in the use of post-crisis health care services for children in Alberta.
Knowledge of whether differences exist for children with mental health needs may help identify children who could benefit from earlier intervention to prevent illness destabilization and children who may be disadvantaged in the period after the emergency department visit. We hypothesized that higher rates of emergency department use, lower rates of follow-up physician visits after the initial emergency department visit, and a longer time to physician follow-up would be observed among First Nations children and children from families receiving government social assistance.

12.

Background:

Use of the serum creatinine concentration, the most widely used marker of kidney function, has been associated with under-reporting of chronic kidney disease and late referral to nephrologists, especially among women and elderly people. To improve appropriateness of referrals, automatic reporting of the estimated glomerular filtration rate (eGFR) by laboratories was introduced in the province of Ontario, Canada, in March 2006. We hypothesized that such reporting, along with an ad hoc educational component for primary care physicians, would increase the number of appropriate referrals.

Methods:

We conducted a population-based before–after study with interrupted time-series analysis at a tertiary care centre. All referrals to nephrologists received at the centre during the year before and the year after automatic reporting of the eGFR was introduced were eligible for inclusion. We used regression analysis with autoregressive errors to evaluate whether such reporting by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.

Results:

A total of 2672 patients were included in the study. In the year after automatic reporting began, the number of referrals from primary care physicians increased by 80.6% (95% confidence interval [CI] 74.8% to 86.9%). The number of appropriate referrals increased by 43.2% (95% CI 38.0% to 48.2%). There was no significant change in the proportion of appropriate referrals between the two periods (−2.8%, 95% CI −26.4% to 43.4%). The proportion of elderly and female patients who were referred increased after reporting was introduced.

Interpretation:

The total number of referrals increased after automatic reporting of the eGFR began, especially among women and elderly people. The number of appropriate referrals also increased, but the proportion of appropriate referrals did not change significantly. Future research should be directed to understanding the reasons for inappropriate referral and to developing novel interventions for improving the referral process.

Until recently, the serum creatinine concentration was used universally as an index of the glomerular filtration rate (GFR) to identify and monitor chronic kidney disease.1 The serum creatinine concentration depends on several factors, the most important being muscle mass.1 Women as compared with men, and elderly people as compared with young adults, tend to have lower muscle mass for the same degree of kidney function and thus have lower serum creatinine concentrations.2,3 Consequently, the use of the serum creatinine concentration is associated with underrecognition of chronic kidney disease, delayed workup for chronic kidney disease and late referral to nephrologists, particularly among women and elderly people. Late referral has been associated with increased mortality among patients receiving dialysis.3–11

In 1999, the Modification of Diet in Renal Disease formula was introduced to calculate the estimated GFR (eGFR).12,13 This formula uses the patient’s serum creatinine concentration, age, sex and race (whether the patient is black or not). All of these variables are easily available to laboratories except race. Laboratories report the eGFR for non-black people, with advice to practitioners to multiply the result by 1.21 if their patient is black.
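The 4-variable MDRD estimate described above can be sketched as follows. This is a simplified illustration: the 186 coefficient shown applies to non-IDMS-calibrated creatinine assays (175 is used for standardized assays), and 1.212 is the commonly cited form of the ~1.21 race adjustment mentioned in the text:

```python
def mdrd_egfr(scr_mg_dl, age, female, black):
    """4-variable MDRD eGFR estimate in mL/min/1.73 m^2.
    Uses the 186 coefficient (non-IDMS-calibrated creatinine)."""
    egfr = 186.0 * (scr_mg_dl ** -1.154) * (age ** -0.203)
    if female:
        egfr *= 0.742   # sex adjustment
    if black:
        egfr *= 1.212   # the ~1.21 race adjustment noted above
    return egfr

# A 50-year-old non-black man with serum creatinine 1.0 mg/dL
print(round(mdrd_egfr(1.0, 50, female=False, black=False), 1))
```

The formula's dependence on age and sex is exactly why automatic eGFR reporting was expected to improve recognition of chronic kidney disease in women and elderly people, whose creatinine alone understates their loss of kidney function.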
Given that reporting of the eGFR markedly improves detection of chronic kidney disease,14,15 several national organizations recommended that laboratories automatically calculate and report the eGFR when the serum creatinine concentration is requested.16–19 These organizations also provided guidelines on appropriate referral to nephrology based on the value.

Although several studies have reported increases in referrals to nephrologists after automatic reporting of the eGFR was introduced,20–26 there is limited evidence on the impact that such reporting has had on the appropriateness of referrals. An increase in the number of inappropriate referrals would affect health care delivery, diverting scarce resources to the evaluation of relatively mild kidney disease. It also would likely increase wait times for all nephrology referrals and have a financial impact on the system because specialist care is more costly than primary care.

We conducted a study to evaluate whether the introduction of automatic reporting of the eGFR by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.

13.
Rachel Mann, Joy Adamson, Simon M. Gilbody. CMAJ 2012;184(8):E424–E430

Background:

Guidelines for perinatal mental health care recommend the use of two case-finding questions about depressed feelings and loss of interest in activities, despite the absence of validation studies in this context. We examined the diagnostic accuracy of these questions and of a third question about the need for help asked of women receiving perinatal care.

Methods:

We evaluated self-reported responses to two case-finding questions against an interviewer-assessed diagnostic standard (DSM-IV criteria for major depressive disorder) among 152 women receiving antenatal care at 26–28 weeks’ gestation and postnatal care at 5–13 weeks after delivery. Among women who answered “yes” to either question, we assessed the usefulness of asking a third question about the need for help. We calculated sensitivity, specificity and likelihood ratios for the two case-finding questions and for the added question about the need for help.
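The likelihood ratios named above follow directly from sensitivity and specificity. A minimal sketch; note that the inputs here are rounded summary figures, so the results will differ slightly from values computed from the exact patient counts:

```python
def likelihood_ratios(sens, spec):
    """LR+ = sens / (1 - spec): how much a positive answer raises suspicion.
    LR- = (1 - sens) / spec: how much a negative answer lowers it."""
    return sens / (1.0 - spec), (1.0 - sens) / spec

# e.g., sensitivity 0.58 and specificity 0.91 for the "need help" question
lr_pos, lr_neg = likelihood_ratios(0.58, 0.91)
print(lr_pos, lr_neg)
```

An LR+ well above 1 helps rule a condition in; an LR− close to 0 helps rule it out, which is the logic behind the Interpretation's conclusions about the two-question screen and the added help question.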

Results:

Antenatally, the two case-finding questions had a sensitivity of 100% (95% confidence interval [CI] 77%–100%), a specificity of 68% (95% CI 58%–76%), a positive likelihood ratio of 3.03 (95% CI 2.28–4.02) and a negative likelihood ratio of 0.041 (95% CI 0.003–0.63) in identifying perinatal depression. Postnatal results were similar. Among the women who screened positive antenatally, the additional question about the need for help had a sensitivity of 58% (95% CI 38%–76%), a specificity of 91% (95% CI 78%–97%), a positive likelihood ratio of 6.86 (95% CI 2.16–21.7) and a negative likelihood ratio of 0.45 (95% CI 0.25–0.80), with lower sensitivity and higher specificity postnatally.

Interpretation:

Negative responses to both of the case-finding questions showed acceptable accuracy for ruling out perinatal depression. For positive responses, the use of a third question about the need for help improved specificity and the ability to rule in depression.

The occurrence of depressive symptoms during the perinatal period is well recognized. The estimated prevalence is 7.4%–20% antenatally1,2 and up to 19.2% in the first three postnatal months.3 Antenatal depression is associated with malnutrition, substance and alcohol abuse, poor self-reported health, poor use of antenatal care services and adverse neonatal outcomes.4 Postnatal depression has a substantial impact on the mother and her partner, the family, mother–baby interaction and the longer-term emotional and cognitive development of the baby.5

Screening strategies to identify perinatal depression have been advocated, and specific questionnaires for use in the perinatal period, such as the Edinburgh Postnatal Depression Scale,6 were developed. However, in their current recommendations, the UK National Screening Committee7 and the US Committee on Obstetric Practice8 state that there is insufficient evidence to support the implementation of universal perinatal screening programs.
The initial decision in 2001 by the National Screening Committee not to support universal perinatal screening9 attracted particular controversy in the United Kingdom; some service providers subsequently withdrew resources for treatment of postnatal depression, and subsequent pressure by perinatal community practitioners led to modification of the screening guidance in order to clarify the role of screening questionnaires in the assessment of perinatal depression.10

In 2007, the National Institute for Health and Clinical Excellence issued clinical guidelines for perinatal mental health care in the UK, which included guidance on the use of questionnaires to identify antenatal and postnatal depression.11 In this guidance, a case-finding approach to identify perinatal depression was strongly recommended; it involved the use of two case-finding questions (sometimes referred to as the Whooley questions), and an additional question about the need for help asked of women who answered “yes” to either of the initial questions (Box 1).

Box 1:

Case-finding questions recommended for the identification of perinatal depression10

  • “During the past month, have you often been bothered by feeling down, depressed or hopeless?”
  • “During the past month, have you often been bothered by having little interest or pleasure in doing things?”
  • A third question should be considered if the woman answers “yes” to either of the initial screening questions: “Is this something you feel you need or want help with?”
Useful case-finding questions should be both sensitive and specific so they accurately identify those with and without the condition. The two case-finding questions have been validated in primary care samples12,13 and examined in other clinical populations14–16 and are endorsed in recommendations by US and Canadian bodies for screening depression in adults.17,18 However, at the time the guidance from the National Institute for Health and Clinical Excellence was issued, there were no validation studies conducted in perinatal populations. A recent systematic review19 identified one study conducted in the United States that validated the two questions against established diagnostic criteria in 506 women attending well-child visits postnatally;20 sensitivity and specificity of the questions were 100% and 44%, respectively, at four weeks. The review failed to identify studies that validated the two questions and the additional question about the need for help against a gold-standard measure.

We conducted a validation study to assess the diagnostic accuracy of this brief case-finding approach against gold-standard psychiatric diagnostic criteria for depression in a population of women receiving perinatal care.

14.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.

Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease.
These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27

The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.

15.

Background:

Overweight and obesity in young people are assessed by comparing body mass index (BMI) with a reference population. However, two widely used reference standards, the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) growth curves, have different definitions of overweight and obesity, thus affecting estimates of prevalence. We compared the associations between overweight and obesity as defined by each of these curves and the presence of cardiometabolic risk factors.

Methods:

We obtained data from a population-representative study involving 2466 boys and girls aged 9, 13 and 16 years in Quebec, Canada. We calculated BMI percentiles using the CDC and WHO growth curves and compared their abilities to detect unfavourable levels of fasting lipids, glucose and insulin, and systolic and diastolic blood pressure using receiver operating characteristic curves, sensitivity, specificity and kappa coefficients.
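Agreement between the two growth-curve definitions can be summarized with Cohen's kappa, as mentioned above. A minimal sketch using a hypothetical cross-classification (not the study's counts):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table.
    table[i][j]: counts classified as category i by one definition
    and j by the other (here: overweight yes/no under CDC vs. WHO)."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion on the diagonal
    po = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement: product of marginal proportions per category
    pe = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 cross-classification of 2466 children
print(round(cohens_kappa([[300, 60], [20, 2086]]), 3))
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which matters here because most children are classified as non-overweight by both curves.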

Results:

The z scores for BMI using the WHO growth curves were higher than those using the CDC growth curves (0.35–0.43 v. 0.12–0.28, p < 0.001 for all comparisons). The WHO and CDC growth curves generated virtually identical receiver operating characteristic curves for individual or combined cardiometabolic risk factors. The definitions of overweight and obesity had low sensitivities but adequate specificities for cardiometabolic risk. Obesity as defined by the WHO or CDC growth curves discriminated cardiometabolic risk similarly, but overweight as defined by the WHO curves had marginally higher sensitivities (by 0.6%–8.6%) and lower specificities (by 2.6%–4.2%) than the CDC curves.

Interpretation:

The WHO growth curves show no significant discriminatory advantage over the CDC growth curves in detecting cardiometabolic abnormalities in children aged 9–16 years.

Pediatric obesity is associated with dyslipidemia, insulin resistance and elevated blood pressure.1–6 Thus, accurately identifying children with obesity is crucial for clinical management and public health surveillance.

Lipid screening is recommended for young people who are overweight,7,8 but studies show that estimates of the prevalence of overweight and obesity are 1%–7% lower using the growth curves of the Centers for Disease Control and Prevention (CDC) versus those of the World Health Organization (WHO).9–11 Although the CDC and WHO definitions of overweight and obesity both use approximations of overweight and obese values of body mass index (BMI) when children reach 19 years of age, the CDC growth curves use data from more recent samples of young people.12,13 Given the recent rise in the prevalence of obesity among young people, using a heavier reference population may lead to fewer children being identified as overweight and obese, and an identical BMI value may not trigger a clinical investigation.7 The Canadian Paediatric Society, in collaboration with the College of Family Physicians of Canada, Dietitians of Canada and Community Health Nurses of Canada, recently recommended that physicians switch from the CDC to the WHO growth curves for monitoring growth for Canadian children aged 5–19 years.14 This is a major change for health providers caring for the estimated 8 million children in Canada.15

Understanding how using the different growth curves affects the identification of adverse cardiometabolic risk profiles is essential for the appropriate management of overweight and obesity among young people.
Thus, our objectives were to assess whether the association between BMI percentiles and cardiometabolic risk differs between the definitions of overweight and obesity based on the WHO and CDC growth curves, and to compare the sensitivity and specificity of these definitions in detecting cardiometabolic risk.

16.
17.

Background:

Patients with type 2 diabetes have a 40% increased risk of bladder cancer. Thiazolidinediones, especially pioglitazone, may increase the risk. We conducted a systematic review and meta-analysis to evaluate the risk of bladder cancer among adults with type 2 diabetes taking thiazolidinediones.

Methods:

We searched key biomedical databases (including MEDLINE, Embase and Scopus) and sources of grey literature from inception through March 2012 for published and unpublished studies, without language restrictions. We included randomized controlled trials (RCTs), cohort studies and case–control studies that reported incident bladder cancer among people with type 2 diabetes who ever (v. never) were exposed to pioglitazone (main outcome), rosiglitazone or any thiazolidinedione.
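Pooled risk ratios of the kind reported in this review are typically obtained by inverse-variance weighting on the log scale, with I² derived from Cochran's Q. A generic sketch with hypothetical study-level results (not the studies analyzed here):

```python
import math

def pool_fixed_effect(rrs, cis):
    """Fixed-effect inverse-variance pooling of risk ratios on the log scale.
    cis: (lower, upper) 95% CI bounds; SE recovered as (ln(hi)-ln(lo))/3.92."""
    logs = [math.log(r) for r in rrs]
    ses = [(math.log(hi) - math.log(lo)) / 3.92 for lo, hi in cis]
    w = [1.0 / s**2 for s in ses]                       # inverse-variance weights
    pooled = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - pooled) ** 2 for wi, li in zip(w, logs))  # Cochran's Q
    df = len(rrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # heterogeneity, %
    return math.exp(pooled), i2

# Hypothetical study-level risk ratios with their 95% CIs
print(pool_fixed_effect([1.10, 1.25, 1.18],
                        [(0.95, 1.27), (1.02, 1.53), (0.99, 1.41)]))
```

An I² of 0%, as reported for the pooled cohort analyses above, indicates that the between-study variation is no more than expected by chance, supporting a fixed-effect summary.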

Results:

Of the 1787 studies identified, we selected 4 RCTs, 5 cohort studies and 1 case–control study. The total number of patients was 2 657 365, of whom 3643 had newly diagnosed bladder cancer, for an overall incidence of 53.1 per 100 000 person-years. The one RCT that reported on pioglitazone use found no significant association with bladder cancer (risk ratio [RR] 2.36, 95% confidence interval [CI] 0.91–6.13). The cohort studies of thiazolidinediones (pooled RR 1.15, 95% CI 1.04–1.26; I2 = 0%) and of pioglitazone specifically (pooled RR 1.22, 95% CI 1.07–1.39; I2 = 0%) showed significant associations with bladder cancer. No significant association with bladder cancer was observed in the two RCTs that evaluated rosiglitazone use (pooled RR 0.87, 95% CI 0.34–2.23; I2 = 0%).

Interpretation:

The limited evidence available supports the hypothesis that thiazolidinediones, particularly pioglitazone, are associated with an increased risk of bladder cancer among adults with type 2 diabetes.

People with type 2 diabetes are at increased risk of several types of cancer, including a 40% increased risk of bladder cancer, compared with those without diabetes.1,2 The strong association with bladder cancer is hypothesized to be a result of hyperinsulinemia, whereby elevated insulin levels in type 2 diabetes stimulate insulin receptors on neoplastic cells, promoting cancer growth and division.1,3–5 Additional risk factors for bladder cancer include increased age, male sex, smoking, occupational and environmental exposures and urinary tract disease.6 Exogenous insulin and other glucose-lowering medications, such as sulfonylureas, metformin and thiazolidinediones, may further modify the risk of bladder cancer.1

Data from the placebo-controlled PROactive trial of pioglitazone (PROspective pioglitAzone Clinical Trial in macroVascular Events) suggested a higher incidence of bladder cancer among pioglitazone users than among controls.7 Subsequent randomized controlled trials (RCTs) and observational studies have reported conflicting results for pioglitazone, with various studies reporting a significant increase,8,9 a nonsignificant increase10 and even a decreased risk11 of bladder cancer.

To test the hypothesis that pioglitazone use is associated with an increased risk of bladder cancer, we conducted a systematic review and meta-analysis of RCTs and observational studies reporting bladder cancer among adults with type 2 diabetes taking pioglitazone. To clarify the possibility of a drug-class effect, we also examined data for all thiazolidinediones and for rosiglitazone alone.

18.

Background:

Although Aboriginal adults have a higher risk of end-stage renal disease than non-Aboriginal adults, the incidence and causes of end-stage renal disease among Aboriginal children and young adults are not well described.

Methods:

We calculated age- and sex-specific incidences of end-stage renal disease among Aboriginal people less than 22 years of age using data from a national organ failure registry. Incidence rate ratios were used to compare rates between Aboriginal and white Canadians. To contrast causes of end-stage renal disease by ethnicity and age, we calculated the odds of congenital diseases, glomerulonephritis and diabetes for Aboriginal people and compared them with those for white people in the following age strata: 0 to less than 22 years, 22 to less than 40 years, 40 to less than 60 years and older than 60 years.
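Incidence rate ratios of the kind described above compare events per person-time between two groups. A minimal sketch with hypothetical counts (not the registry's data):

```python
import math

def irr_ci(events1, py1, events0, py0, z=1.96):
    """Incidence rate ratio with a 95% CI (log-normal approximation).
    events/py: case counts and person-years in the exposed (1) and
    reference (0) groups."""
    irr = (events1 / py1) / (events0 / py0)
    se = math.sqrt(1.0 / events1 + 1.0 / events0)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Hypothetical: 60 cases over 1.5M person-years vs. 110 over 5.0M
print(irr_ci(60, 1_500_000, 110, 5_000_000))
```

A CI excluding 1 (as with the ratios of 1.82 and 3.24 reported below for boys and girls) indicates a rate difference unlikely to be due to chance.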

Results:

Incidence rate ratios of end-stage renal disease for Aboriginal children and young adults (age < 22 yr, v. white people) were 1.82 (95% confidence interval [CI] 1.40–2.38) for boys and 3.24 (95% CI 2.60–4.05) for girls. Compared with white people, congenital diseases were less common among Aboriginal people aged less than 22 years (odds ratio [OR] 0.56, 95% CI 0.36–0.86), and glomerulonephritis was more common (OR 2.18, 95% CI 1.55–3.07). An excess of glomerulonephritis, but not diabetes, was seen among Aboriginal people aged 22 to less than 40 years. The converse was true (higher risk of diabetes, lower risk of glomerulonephritis) among Aboriginal people aged 40 years and older.

Interpretation:

The incidence of end-stage renal disease is higher among Aboriginal children and young adults than among white children and young adults. This higher incidence may be driven by an increased risk of glomerulonephritis in this population.

Compared with white Canadians, Aboriginal Canadians have a higher prevalence of end-stage renal disease,1,2 which is generally attributed to their increased risk for diabetes. However, there has been limited investigation of the incidence and causes of end-stage renal disease among Aboriginal children and young adults. Because most incident cases of diabetes are identified in middle-aged adults, an excess risk of end-stage renal disease in young people would not be expected if the high risk of diabetes is responsible for higher overall rates of end-stage renal disease among Aboriginal people. About 12.3% of children with end-stage renal disease in Canada are Aboriginal,3 but only 6.1% of Canadian children (age < 19 yr) are Aboriginal.4,5

A few reports suggest that nondiabetic renal disease is common among Aboriginal populations in North America.2,6–8 Aboriginal adults in Saskatchewan are twice as likely as white adults to have end-stage renal disease caused by glomerulonephritis,7,8 and an increased rate of mesangial proliferative glomerulonephritis has been reported among Aboriginal people in the United States.6,9 These studies suggest that diabetes may be a comorbid condition rather than the sole cause of kidney failure among some Aboriginal people in whom diabetic nephropathy is diagnosed using clinical features alone.

We estimated incidence rates of end-stage renal disease among Aboriginal children and young adults in Canada and compared them with the rates seen among white children and young adults. In addition, we compared relative odds of congenital renal disease, glomerulonephritis and diabetic nephropathy in Aboriginal people with the relative odds of these conditions in white people.

20.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain.

Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences for patients’ daily living.1–3 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients after these surgeries.5–23 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.1–3,24

Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications and level of opioid consumption after surgery.5–7,9,13,14,16–19,21–23,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None examined the contribution of psychological factors, such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29

We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.
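The Results section above reports point prevalences of persistent pain at each follow-up (e.g. 40.1% at 3 months, 9.5% at 24 months). A minimal sketch of how such a prevalence and a 95% confidence interval can be computed from respondent counts, using the Wilson score interval; the counts below are hypothetical, chosen only to be consistent with the reported retention rates, and are not the study's data:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Proportion k/n with a Wilson-score 95% confidence interval,
    which behaves better than the normal approximation for small k."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# Hypothetical respondent counts at each follow-up, for illustration only:
for months, k, n in [(3, 420, 1047), (24, 92, 968)]:
    p, lo, hi = wilson_ci(k, n)
    print(f"{months} mo: {100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f})")
```

Note that the study's primary analysis of risk factors would go beyond such crude proportions (e.g. regression models adjusting for baseline covariates); this sketch covers only the descriptive prevalence estimates.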
