Similar Articles

20 similar articles found.
1.

Background:

There have been several published reports of inflammatory ocular adverse events, mainly uveitis and scleritis, among patients taking oral bisphosphonates. We examined the risk of these adverse events in a pharmacoepidemiologic cohort study.

Methods:

We conducted a retrospective cohort study involving residents of British Columbia who had visited an ophthalmologist from 2000 to 2007. Within the cohort, we identified all people who were first-time users of oral bisphosphonates and who were followed to the first inflammatory ocular adverse event, death, termination of insurance or the end of the study period. We defined an inflammatory ocular adverse event as scleritis or uveitis. We used a Cox proportional hazard model to determine the adjusted rate ratios. As a sensitivity analysis, we performed a propensity-score–adjusted analysis.

Results:

The cohort comprised 934 147 people, including 10 827 first-time users of bisphosphonates and 923 320 nonusers. The incidence rate among first-time users was 29/10 000 person-years for uveitis and 63/10 000 person-years for scleritis. In contrast, the incidence among people who did not use oral bisphosphonates was 20/10 000 person-years for uveitis and 36/10 000 for scleritis (number needed to harm: 1100 and 370, respectively). First-time users had an elevated risk of uveitis (adjusted relative risk [RR] 1.45, 95% confidence interval [CI] 1.25–1.68) and scleritis (adjusted RR 1.51, 95% CI 1.34–1.68). The rate ratio for the propensity-score–adjusted analysis did not change the results (uveitis: RR 1.50, 95% CI 1.29–1.73; scleritis: RR 1.53, 95% CI 1.39–1.70).

Interpretation:

People using oral bisphosphonates for the first time may be at a higher risk of scleritis and uveitis compared with people with no bisphosphonate use. Patients taking bisphosphonates should be familiar with the signs and symptoms of these conditions, so that they can immediately seek assessment by an ophthalmologist.

Oral bisphosphonates are the most frequently prescribed class of medications for the prevention of osteoporosis. The literature on the safety of bisphosphonates has mainly focused on long-term adverse events, including atypical fractures,1 atrial fibrillation,2 and esophageal and colon cancer.3

Uveitis and scleritis are ocular inflammatory diseases that are associated with major morbidity. Anterior uveitis is the most common type of uveitis, with an estimated incidence of 11.4–100.0 cases/100 000 person-years.4,5 Both diseases require immediate treatment to prevent further complications, which may include cataracts, glaucoma, macular edema and scleral perforation. Numerous case reports and case series have described an association between the use of oral bisphosphonates and anterior uveitis6–8 and scleritis.8,9 In most reported cases, severe eye pain developed within days of taking an oral bisphosphonate, and symptoms resolved after the agent was stopped.6,9 Only one large epidemiologic study has examined the association between the use of bisphosphonates and ocular inflammatory diseases.10 That study did not find an association, but it was limited by a small number of events and a lack of statistical power. Thus, the association between uveitis or scleritis and the use of oral bisphosphonates remains uncertain. Given that early intervention may prevent complications, we performed a pharmacoepidemiologic study to assess the risk of these potentially serious conditions.
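The number-needed-to-harm figures quoted in the Results follow directly from the incidence-rate differences; a minimal sketch of that arithmetic (the rounding to roughly 1100 and 370 matches the reported values):

```python
def number_needed_to_harm(rate_exposed, rate_unexposed, per=10_000):
    """NNH = 1 / absolute rate difference; rates are per `per` person-years."""
    risk_difference = (rate_exposed - rate_unexposed) / per
    return 1 / risk_difference

# Incidence rates reported above, per 10 000 person-years:
nnh_uveitis = number_needed_to_harm(29, 20)    # ~1111, reported as ~1100
nnh_scleritis = number_needed_to_harm(63, 36)  # ~370
```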

2.

Background:

Uncircumcised boys are at higher risk for urinary tract infections than circumcised boys. Whether this risk varies with the visibility of the urethral meatus is not known. Our aim was to determine whether there is a hierarchy of risk among uncircumcised boys whose urethral meatuses are visible to differing degrees.

Methods:

We conducted a prospective cross-sectional study in one pediatric emergency department. We screened 440 circumcised and uncircumcised boys. Of these, 393 boys who were not toilet trained and for whom the treating physician had requested a catheter urine culture were included in our analysis. At the time of catheter insertion, a nurse characterized the visibility of the urethral meatus (phimosis) using a 3-point scale (completely visible, partially visible or nonvisible). Our primary outcome was urinary tract infection, and our primary exposure variable was the degree of phimosis: completely visible versus partially or nonvisible urethral meatus.

Results:

Urine cultures were positive for 30.0% of uncircumcised boys with a completely visible meatus and for 23.8% of those with a partially or nonvisible meatus (p = 0.4). The unadjusted odds ratio (OR) for culture growth was 0.73 (95% confidence interval [CI] 0.35–1.52), and the adjusted OR was 0.41 (95% CI 0.17–0.95). Of the boys who were circumcised, 4.8% had urinary tract infections, a rate significantly lower than that among uncircumcised boys with a completely visible urethral meatus (unadjusted OR 0.12 [95% CI 0.04–0.39], adjusted OR 0.07 [95% CI 0.02–0.26]).

Interpretation:

We did not see variation in the risk of urinary tract infection with the visibility of the urethral meatus among uncircumcised boys. Compared with circumcised boys, we saw a higher risk of urinary tract infection in uncircumcised boys, irrespective of urethral visibility.

Urinary tract infections are one of the most common serious bacterial infections in young children.1–6 Prompt diagnosis is important, because children with urinary tract infection are at risk for bacteremia6 and renal scarring.1,7 Uncircumcised boys have a much higher risk of urinary tract infection than circumcised boys,1,3,4,6,8–12 likely as a result of heavier colonization under the foreskin with pathogenic bacteria, which leads to ascending infections.13,14 The American Academy of Pediatrics recently suggested that circumcision status be used to select which boys should be evaluated for urinary tract infection.1 However, whether all uncircumcised boys are at equal risk for infection, or whether the risk varies with the visibility of the urethral opening, is not known. It has been suggested that the subset of uncircumcised boys with a poorly visible urethral opening is at increased risk of urinary tract infection,15–17 leading some experts to consider giving children with tight foreskins topical cortisone or circumcision to prevent urinary tract infections.13,18–21

We designed a study to challenge the opinion that all uncircumcised boys are at increased risk for urinary tract infections. We hypothesized a hierarchy of risk among uncircumcised boys depending on the visibility of the urethral meatus, with boys with a partially or nonvisible meatus at highest risk and boys with a completely visible meatus at a level of risk similar to that of circumcised boys. Our primary aim was to compare the proportion of urinary tract infections among uncircumcised boys with a completely visible meatus with the proportion among those with a partially or nonvisible meatus.
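As a sanity check, the unadjusted odds ratio in the Results can be reproduced from the two reported proportions (positive cultures in 23.8% of boys with a partially or nonvisible meatus versus 30.0% of those with a completely visible meatus), which implies the 0.73 compares the partially/nonvisible group against the completely visible group:

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

# Odds ratio for culture growth, partially/nonvisible vs. completely visible:
or_unadjusted = odds(0.238) / odds(0.300)  # ~0.73, matching the reported value
```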

3.
Rachel Mann, Joy Adamson, Simon M. Gilbody. CMAJ, 2012, 184(8):E424–E430

Background:

Guidelines for perinatal mental health care recommend the use of two case-finding questions about depressed feelings and loss of interest in activities, despite the absence of validation studies in this context. We examined the diagnostic accuracy of these questions and of a third question about the need for help asked of women receiving perinatal care.

Methods:

We evaluated self-reported responses to two case-finding questions against an interviewer-assessed diagnostic standard (DSM-IV criteria for major depressive disorder) among 152 women receiving antenatal care at 26–28 weeks’ gestation and postnatal care at 5–13 weeks after delivery. Among women who answered “yes” to either question, we assessed the usefulness of asking a third question about the need for help. We calculated sensitivity, specificity and likelihood ratios for the two case-finding questions and for the added question about the need for help.

Results:

Antenatally, the two case-finding questions had a sensitivity of 100% (95% confidence interval [CI] 77%–100%), a specificity of 68% (95% CI 58%–76%), a positive likelihood ratio of 3.03 (95% CI 2.28–4.02) and a negative likelihood ratio of 0.041 (95% CI 0.003–0.63) in identifying perinatal depression. Postnatal results were similar. Among the women who screened positive antenatally, the additional question about the need for help had a sensitivity of 58% (95% CI 38%–76%), a specificity of 91% (95% CI 78%–97%), a positive likelihood ratio of 6.86 (95% CI 2.16–21.7) and a negative likelihood ratio of 0.45 (95% CI 0.25–0.80), with lower sensitivity and higher specificity postnatally.

Interpretation:

Negative responses to both of the case-finding questions showed acceptable accuracy for ruling out perinatal depression. For positive responses, the use of a third question about the need for help improved specificity and the ability to rule in depression.

The occurrence of depressive symptoms during the perinatal period is well recognized. The estimated prevalence is 7.4%–20% antenatally1,2 and up to 19.2% in the first three postnatal months.3 Antenatal depression is associated with malnutrition, substance and alcohol abuse, poor self-reported health, poor use of antenatal care services and adverse neonatal outcomes.4 Postnatal depression has a substantial impact on the mother and her partner, the family, mother–baby interaction and the longer-term emotional and cognitive development of the baby.5

Screening strategies to identify perinatal depression have been advocated, and specific questionnaires for use in the perinatal period, such as the Edinburgh Postnatal Depression Scale,6 have been developed. However, in their current recommendations, the UK National Screening Committee7 and the US Committee on Obstetric Practice8 state that there is insufficient evidence to support the implementation of universal perinatal screening programs. The initial decision in 2001 by the National Screening Committee not to support universal perinatal screening9 attracted particular controversy in the United Kingdom; some service providers subsequently withdrew resources for the treatment of postnatal depression, and subsequent pressure by perinatal community practitioners led to modification of the screening guidance to clarify the role of screening questionnaires in the assessment of perinatal depression.10

In 2007, the National Institute for Health and Clinical Excellence issued clinical guidelines for perinatal mental health care in the UK, which included guidance on the use of questionnaires to identify antenatal and postnatal depression.11 In this guidance, a case-finding approach to identify perinatal depression was strongly recommended; it involved the use of two case-finding questions (sometimes referred to as the Whooley questions) and an additional question about the need for help asked of women who answered “yes” to either of the initial questions (Box 1).

Box 1:

Case-finding questions recommended for the identification of perinatal depression10

  • “During the past month, have you often been bothered by feeling down, depressed or hopeless?”
  • “During the past month, have you often been bothered by having little interest or pleasure in doing things?”
  • A third question should be considered if the woman answers “yes” to either of the initial screening questions: “Is this something you feel you need or want help with?”
Useful case-finding questions should be both sensitive and specific, so that they accurately identify those with and without the condition. The two case-finding questions have been validated in primary care samples12,13 and examined in other clinical populations,14–16 and they are endorsed in recommendations by US and Canadian bodies for screening for depression in adults.17,18 However, at the time the guidance from the National Institute for Health and Clinical Excellence was issued, no validation studies had been conducted in perinatal populations. A recent systematic review19 identified one study, conducted in the United States, that validated the two questions against established diagnostic criteria in 506 women attending well-child visits postnatally;20 the sensitivity and specificity of the questions were 100% and 44%, respectively, at four weeks. The review identified no studies that validated the two questions plus the additional question about the need for help against a gold-standard measure.

We conducted a validation study to assess the diagnostic accuracy of this brief case-finding approach against gold-standard psychiatric diagnostic criteria for depression in a population of women receiving perinatal care.
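Likelihood ratios are simple functions of sensitivity and specificity; a short sketch using the antenatal point estimates above (the paper computes its LRs from the raw 2×2 counts, so its values differ slightly from those derived from these rounded inputs):

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# Two case-finding questions, antenatal (sens 100%, spec 68%):
lr_pos, lr_neg = likelihood_ratios(1.00, 0.68)      # LR+ ~3.1 (reported 3.03)
# Added question about the need for help (sens 58%, spec 91%):
help_pos, help_neg = likelihood_ratios(0.58, 0.91)  # LR+ ~6.4 (reported 6.86)
```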

4.

Background:

Although diacetylmorphine has been proven to be more effective than methadone maintenance treatment for opioid dependence, its direct costs are higher. We compared the cost-effectiveness of diacetylmorphine and methadone maintenance treatment for chronic opioid dependence refractory to treatment.

Methods:

We constructed a semi-Markov cohort model using data from the North American Opiate Medication Initiative trial, supplemented with administrative data for the province of British Columbia and other published data, to capture the chronic, recurrent nature of opioid dependence. We calculated incremental cost-effectiveness ratios to compare diacetylmorphine and methadone over 1-, 5-, 10-year and lifetime horizons.

Results:

Diacetylmorphine was found to be a dominant strategy over methadone maintenance treatment in each of the time horizons. Over a lifetime horizon, our model showed that people receiving methadone gained 7.46 discounted quality-adjusted life-years (QALYs) on average (95% credibility interval [CI] 6.91–8.01) and generated a societal cost of $1.14 million (95% CI $736 800–$1.78 million). Those who received diacetylmorphine gained 7.92 discounted QALYs on average (95% CI 7.32–8.53) and generated a societal cost of $1.10 million (95% CI $724 100–$1.71 million). Cost savings in the diacetylmorphine cohort were realized primarily because of reductions in the costs related to criminal activity. Probabilistic sensitivity analysis showed that the probability of diacetylmorphine being cost-effective at a willingness-to-pay threshold of $0 per QALY gained was 76%; the probability was 95% at a threshold of $100 000 per QALY gained. Results were confirmed over a range of sensitivity analyses.

Interpretation:

Using mathematical modelling to extrapolate results from the North American Opiate Medication Initiative, we found that diacetylmorphine may be more effective and less costly than methadone among people with chronic opioid dependence refractory to treatment.

Opioid substitution with methadone is the most common treatment of opioid dependence.1–3 Participation in a methadone maintenance treatment program has been associated with decreases in illicit drug use,4 criminality5 and mortality.6,7 However, longitudinal studies have shown that most people who receive opioid substitution treatment are unable to abstain from illicit drug use for sustained periods, either switching from treatment to regular opioid use or continuing to use opioids while in treatment.8–13 An estimated 15%–25% of the most marginalized methadone clients do not benefit from treatment in terms of sustained abstention from the use of illicit opioids.14

The North American Opiate Medication Initiative was a randomized controlled trial that compared supervised, medically prescribed injectable diacetylmorphine and optimized methadone maintenance treatment in people with long-standing opioid dependence and multiple failed treatment attempts with methadone or other forms of treatment.15 The trial was conducted in two Canadian cities (Vancouver, British Columbia, and Montréal, Quebec). Both treatment protocols included a comprehensive range of psychosocial services (e.g., addiction counselling, relapse prevention, case management, and individual and group interventions) and primary care services (e.g., testing for blood-borne diseases, provision of HIV treatment, and treatment of acute and chronic physical and mental health complications of substance use), in keeping with Health Canada best practices.16 The results of the trial confirmed findings of prior studies showing diacetylmorphine to be more effective than methadone maintenance treatment in retaining opioid-dependent patients in treatment15,17–20 and in improving health and social functioning.19,21,22 Diacetylmorphine treatment has been proposed for a specific population of people with treatment-refractory opioid dependence who are at high risk of adverse health consequences and of engaging in criminal activity to acquire illicit drugs.

For policy-makers, however, the North American Opiate Medication Initiative alone does not address all of the important considerations for decision-making. In addition to the political challenges associated with the therapy,23 there remains concern over the direct cost of diacetylmorphine over the long term, because it can be as much as 10 times greater than that of conventional methadone maintenance treatment.21 The North American Opiate Medication Initiative was only one year in duration, but a policy to introduce diacetylmorphine might have both positive and negative longer-term implications.

We extrapolated outcomes from the North American Opiate Medication Initiative to estimate the long-term cost-effectiveness of diacetylmorphine versus methadone maintenance treatment for chronic, refractory opioid dependence.
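"Dominant" in cost-effectiveness terms means the new strategy is both cheaper and more effective, so no incremental cost-effectiveness ratio (ICER) needs to be quoted; a minimal sketch of that logic using the lifetime mean estimates reported above:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Return the ICER in $/QALY, or None if the new strategy dominates
    (costs less and yields more QALYs than the comparator)."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly > 0:
        return None  # dominant: no trade-off to price
    return d_cost / d_qaly

# Lifetime societal costs and discounted QALYs reported above:
result = icer(1_100_000, 7.92, 1_140_000, 7.46)  # None: diacetylmorphine dominates
```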

5.

Background:

Overweight and obesity in young people are assessed by comparing body mass index (BMI) with a reference population. However, two widely used reference standards, the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO) growth curves, have different definitions of overweight and obesity, thus affecting estimates of prevalence. We compared the associations between overweight and obesity as defined by each of these curves and the presence of cardiometabolic risk factors.

Methods:

We obtained data from a population-representative study involving 2466 boys and girls aged 9, 13 and 16 years in Quebec, Canada. We calculated BMI percentiles using the CDC and WHO growth curves and compared their abilities to detect unfavourable levels of fasting lipids, glucose and insulin, and systolic and diastolic blood pressure using receiver operating characteristic curves, sensitivity, specificity and kappa coefficients.

Results:

The z scores for BMI using the WHO growth curves were higher than those using the CDC growth curves (0.35–0.43 v. 0.12–0.28, p < 0.001 for all comparisons). The WHO and CDC growth curves generated virtually identical receiver operating characteristic curves for individual or combined cardiometabolic risk factors. The definitions of overweight and obesity had low sensitivities but adequate specificities for cardiometabolic risk. Obesity as defined by the WHO or CDC growth curves discriminated cardiometabolic risk similarly, but overweight as defined by the WHO curves had marginally higher sensitivities (by 0.6%–8.6%) and lower specificities (by 2.6%–4.2%) than the CDC curves.

Interpretation:

The WHO growth curves show no significant discriminatory advantage over the CDC growth curves in detecting cardiometabolic abnormalities in children aged 9–16 years.

Pediatric obesity is associated with dyslipidemia, insulin resistance and elevated blood pressure.1–6 Thus, accurately identifying children with obesity is crucial for clinical management and public health surveillance.

Lipid screening is recommended for young people who are overweight,7,8 but studies show that estimates of the prevalence of overweight and obesity are 1%–7% lower using the growth curves of the Centers for Disease Control and Prevention (CDC) than using those of the World Health Organization (WHO).9–11 Although the CDC and WHO definitions of overweight and obesity both use approximations of overweight and obese values of body mass index (BMI) when children reach 19 years of age, the CDC growth curves use data from more recent samples of young people.12,13 Given the recent rise in the prevalence of obesity among young people, using a heavier reference population may lead to fewer children being identified as overweight or obese, and an identical BMI value may not trigger a clinical investigation.7 The Canadian Paediatric Society, in collaboration with the College of Family Physicians of Canada, Dietitians of Canada and Community Health Nurses of Canada, recently recommended that physicians switch from the CDC to the WHO growth curves for monitoring the growth of Canadian children aged 5–19 years.14 This is a major change for health providers caring for the estimated 8 million children in Canada.15

Understanding how the choice of growth curves affects the identification of adverse cardiometabolic risk profiles is essential for the appropriate management of overweight and obesity among young people. Thus, our objectives were to assess whether the association between BMI percentiles and cardiometabolic risk differs between the definitions of overweight and obesity based on the WHO and CDC growth curves, and to compare the sensitivity and specificity of these definitions in detecting cardiometabolic risk.

6.

Background:

Although Aboriginal adults have a higher risk of end-stage renal disease than non-Aboriginal adults, the incidence and causes of end-stage renal disease among Aboriginal children and young adults are not well described.

Methods:

We calculated age- and sex-specific incidences of end-stage renal disease among Aboriginal people less than 22 years of age using data from a national organ failure registry. Incidence rate ratios were used to compare rates between Aboriginal and white Canadians. To contrast causes of end-stage renal disease by ethnicity and age, we calculated the odds of congenital diseases, glomerulonephritis and diabetes for Aboriginal people and compared them with those for white people in the following age strata: 0 to less than 22 years, 22 to less than 40 years, 40 to less than 60 years, and 60 years and older.

Results:

Incidence rate ratios of end-stage renal disease for Aboriginal children and young adults (age < 22 yr, v. white people) were 1.82 (95% confidence interval [CI] 1.40–2.38) for boys and 3.24 (95% CI 2.60–4.05) for girls. Compared with white people, congenital diseases were less common among Aboriginal people aged less than 22 years (odds ratio [OR] 0.56, 95% CI 0.36–0.86), and glomerulonephritis was more common (OR 2.18, 95% CI 1.55–3.07). An excess of glomerulonephritis, but not diabetes, was seen among Aboriginal people aged 22 to less than 40 years. The converse was true (higher risk of diabetes, lower risk of glomerulonephritis) among Aboriginal people aged 40 years and older.

Interpretation:

The incidence of end-stage renal disease is higher among Aboriginal children and young adults than among white children and young adults. This higher incidence may be driven by an increased risk of glomerulonephritis in this population.

Compared with white Canadians, Aboriginal Canadians have a higher prevalence of end-stage renal disease,1,2 which is generally attributed to their increased risk of diabetes. However, there has been limited investigation of the incidence and causes of end-stage renal disease among Aboriginal children and young adults. Because most incident cases of diabetes are identified in middle-aged adults, an excess risk of end-stage renal disease in young people would not be expected if the high risk of diabetes were responsible for the higher overall rates of end-stage renal disease among Aboriginal people. About 12.3% of children with end-stage renal disease in Canada are Aboriginal,3 but only 6.1% of Canadian children (age < 19 yr) are Aboriginal.4,5

A few reports suggest that nondiabetic renal disease is common among Aboriginal populations in North America.2,6–8 Aboriginal adults in Saskatchewan are twice as likely as white adults to have end-stage renal disease caused by glomerulonephritis,7,8 and an increased rate of mesangial proliferative glomerulonephritis has been reported among Aboriginal people in the United States.6,9 These studies suggest that diabetes may be a comorbid condition rather than the sole cause of kidney failure among some Aboriginal people in whom diabetic nephropathy is diagnosed using clinical features alone.

We estimated incidence rates of end-stage renal disease among Aboriginal children and young adults in Canada and compared them with the rates among white children and young adults. In addition, we compared the relative odds of congenital renal disease, glomerulonephritis and diabetic nephropathy in Aboriginal people with the relative odds of these conditions in white people.
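The incidence rate ratios above come with confidence intervals computed on the log scale; a generic sketch of that calculation (the event counts and person-time below are hypothetical, chosen only to illustrate the formula):

```python
import math

def rate_ratio(events_a, person_time_a, events_b, person_time_b):
    """Incidence rate ratio with an approximate 95% CI on the log scale."""
    rr = (events_a / person_time_a) / (events_b / person_time_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Hypothetical counts: 60 events per 100 000 py vs. 33 per 100 000 py:
rr, (lo, hi) = rate_ratio(60, 100_000, 33, 100_000)
```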

7.

Background:

Whether the risk of cancer is increased among patients with herpes zoster is unclear. We investigated the risk of cancer among patients with herpes zoster using a nationwide health registry in Taiwan.

Methods:

We identified 35 871 patients with newly diagnosed herpes zoster during 2000–2008 from the National Health Insurance Research Database in Taiwan. We analyzed the standardized incidence ratios for various types of cancer.

Results:

Among patients with herpes zoster, 895 cases of cancer were reported. Patients with herpes zoster were not at increased risk of cancer (standardized incidence ratio 0.99, 95% confidence interval 0.93–1.06). Among the subgroups stratified by sex, age and years of follow-up, there was also no increased risk of overall cancer.

Interpretation:

Herpes zoster is not associated with an increased risk of cancer in the general population. These findings do not support extensive investigations for occult cancer or enhanced surveillance for cancer in patients with herpes zoster.

Herpes zoster, or shingles, is caused by reactivation of the varicella–zoster virus, a member of the Herpesviridae family. Established risk factors for herpes zoster include older age, chronic kidney disease, malignant disease and immunocompromised conditions (e.g., those experienced by patients with AIDS, transplant recipients, and those taking immunosuppressive medication because of autoimmune diseases).1–5 Herpes zoster occurs more frequently among patients with cancer than among those without cancer;6,7 however, the relation between herpes zoster and the risk of subsequent cancer is not well established.

In 1955, Wyburn-Mason and colleagues reported several cases of skin cancer that arose from the healed lesions of herpes zoster.8 In 1972, a retrospective cohort study and a case series reported a higher prevalence of herpes zoster among patients with cancer, especially hematological cancer;6,7 however, they did not investigate whether herpes zoster was a risk factor for cancer. In 1982, Ragozzino and colleagues found no increased incidence of cancer (including hematologic malignancy) among patients with herpes zoster.9 There have been reports of significantly increased risk of some subtypes of cancer among patients aged more than 65 years with herpes zoster10 and among those admitted to hospital because of herpes zoster.11 Although these studies have suggested an association between herpes zoster and subsequent cancer, their results might not be generalizable because of differences in the severity of herpes zoster among the enrolled patients.

Whether the risk of cancer is increased after herpes zoster thus remains controversial. The published studies8–11 were nearly all conducted in western countries, and data focusing on Asian populations are lacking.12 The results from western countries may not be directly generalizable to other ethnic groups because of differences in cancer types and profiles. Recently, a study reported that herpes zoster ophthalmicus may be a marker of increased risk of cancer in the following year.13 In the present study, we investigated the incidence rate ratio of cancer, including specific types of cancer, after a diagnosis of herpes zoster.
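A standardized incidence ratio is simply the observed number of events divided by the number expected if reference-population rates applied to the cohort; a minimal sketch (the expected count below is an assumption back-calculated from the reported ratio, since the abstract gives only the observed count and the SIR):

```python
def standardized_incidence_ratio(observed, expected):
    """SIR = observed events / expected events under reference-population rates."""
    return observed / expected

# 895 cancers were observed; an expected count of ~904 is assumed here,
# back-calculated for illustration from the reported SIR of 0.99:
sir = standardized_incidence_ratio(895, 904)
```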

8.

Background:

Several biomarkers of metabolic acidosis, including lower plasma bicarbonate and higher anion gap, have been associated with greater insulin resistance in cross-sectional studies. We sought to examine whether lower plasma bicarbonate is associated with the development of type 2 diabetes mellitus in a prospective study.

Methods:

We conducted a prospective, nested case–control study within the Nurses’ Health Study. Plasma bicarbonate was measured in 630 women who did not have type 2 diabetes mellitus at the time of blood draw in 1989–1990 but developed type 2 diabetes mellitus during 10 years of follow-up. Controls were matched according to age, ethnic background, fasting status and date of blood draw. We used logistic regression to calculate odds ratios (ORs) for diabetes by category of baseline plasma bicarbonate.

Results:

After adjustment for matching factors, body mass index, plasma creatinine level and history of hypertension, women with plasma bicarbonate above the median level had lower odds of diabetes (OR 0.76, 95% confidence interval [CI] 0.60–0.96) compared with women below the median level. Those in the second (OR 0.92, 95% CI 0.67–1.25), third (OR 0.70, 95% CI 0.51–0.97) and fourth (OR 0.75, 95% CI 0.54–1.05) quartiles of plasma bicarbonate had lower odds of diabetes compared with those in the lowest quartile (p for trend = 0.04). Further adjustment for C-reactive protein did not alter these findings.

Interpretation:

Higher plasma bicarbonate levels were associated with lower odds of incident type 2 diabetes mellitus among women in the Nurses’ Health Study. Further studies are needed to confirm this finding in different populations and to elucidate the mechanism for this relation.

Resistance to insulin is central to the pathogenesis of type 2 diabetes mellitus.1 Several mechanisms may lead to insulin resistance and thereby contribute to the development of type 2 diabetes mellitus, including altered fatty acid metabolism, mitochondrial dysfunction and systemic inflammation.2 Metabolic acidosis may also contribute to insulin resistance. Human studies using the euglycemic and hyperglycemic clamp techniques have shown that mild metabolic acidosis induced by the administration of ammonium chloride results in reduced tissue insulin sensitivity.3 Subsequent studies in rat models have suggested that metabolic acidosis decreases the binding of insulin to its receptors.4,5 Finally, metabolic acidosis may also increase cortisol production,6 which in turn is implicated in the development of insulin resistance.7

Recent epidemiologic studies have shown an association between clinical markers of metabolic acidosis and greater insulin resistance or a higher prevalence of type 2 diabetes mellitus. In the National Health and Nutrition Examination Survey, both lower serum bicarbonate and higher anion gap (even within ranges considered normal) were associated with increased insulin resistance among adults without diabetes.8 In addition, higher levels of serum lactate, a small component of the anion gap, were associated with higher odds of prevalent type 2 diabetes mellitus in the Atherosclerosis Risk in Communities study9 and with higher odds of incident type 2 diabetes mellitus in a retrospective cohort study of risk factors for diabetes in Swedish men.10 Other biomarkers associated with metabolic acidosis, including higher levels of serum ketones,11 lower urinary citrate excretion12 and low urine pH,13 have been associated in cross-sectional studies with either insulin resistance or the prevalence of type 2 diabetes mellitus. However, it is unclear whether these associations reflect a cause or a consequence of the disease process.

We sought to address this question by prospectively examining the association between plasma bicarbonate and the subsequent development of type 2 diabetes mellitus in a nested case–control study within the Nurses’ Health Study.

9.

Background:

Previous studies of differences in mental health care associated with children’s sociodemographic status have focused on access to community care. We examined differences associated with visits to the emergency department.

Methods:

We conducted a 6-year population-based cohort analysis using administrative databases of visits (n = 30 656) by children aged less than 18 years (n = 20 956) in Alberta. We measured differences in the number of visits by socioeconomic and First Nations status using directly standardized rates. We examined time to return to the emergency department using a Cox regression model, and we evaluated time to follow-up with a physician by physician type using a competing risks model.

Results:

First Nations children aged 15–17 years had the highest rate of visits for girls (7047 per 100 000 children) and boys (5787 per 100 000 children); children in the same age group from families not receiving government subsidy had the lowest rates (girls: 2155 per 100 000 children; boys: 1323 per 100 000 children). First Nations children (hazard ratio [HR] 1.64, 95% confidence interval [CI] 1.30–2.05) and children from families receiving government subsidies (HR 1.60, 95% CI 1.30–1.98) had a higher risk of return to an emergency department for mental health care than other children. The longest median time to follow-up with a physician was among First Nations children (79 d; 95% CI 60–91 d); this status predicted longer time to a psychiatrist (HR 0.47, 95% CI 0.32–0.70). Age, sex, diagnosis and clinical acuity also predicted post-crisis use of health care.

Interpretation:

More visits to the emergency department for mental health crises were made by First Nations children and children from families receiving a subsidy. Sociodemographics predicted risk of return to the emergency department and follow-up care with a physician.

Emergency departments are a critical access point for mental health care for children who have been unable to receive care elsewhere or are in crisis.1 Care provided in an emergency department can stabilize acute problems and facilitate urgent follow-up for symptom management and family support.1,2

Race, ethnic background and socioeconomic status have been linked to crisis-oriented patterns of care among American children.3,4 Minority children are less likely than white children to have received mental health treatment before an emergency department visit,3,4 and uninsured children are less likely to receive an urgent mental health evaluation when needed.4 Other studies, however, have shown no relation between sociodemographic status and mental health care,5,6 and it may be that different health system characteristics (e.g., pay-for-service, insurance coverage, publicly funded care) interact with sociodemographic status to influence how mental health resources are used. Canadian studies are largely absent in this discussion, despite a known relation between lower income and poorer mental health status,7 nationwide documentation of disparities faced by Aboriginal children,8–10 and government-commissioned reviews that highlight deficits in universal access to mental health care.11

We undertook the current study to examine whether sociodemographic differences exist in the rates of visits to emergency departments for mental health care and in the use of post-crisis health care services for children in Alberta. 
Knowledge of whether differences exist for children with mental health needs may help identify children who could benefit from earlier intervention to prevent illness destabilization and children who may be disadvantaged in the period after the emergency department visit. We hypothesized that higher rates of emergency department use, lower rates of follow-up physician visits after the initial emergency department visit, and a longer time to physician follow-up would be observed among First Nations children and children from families receiving government social assistance.

10.

Background:

Patients with type 2 diabetes have a 40% increased risk of bladder cancer. Thiazolidinediones, especially pioglitazone, may increase the risk. We conducted a systematic review and meta-analysis to evaluate the risk of bladder cancer among adults with type 2 diabetes taking thiazolidinediones.

Methods:

We searched key biomedical databases (including MEDLINE, Embase and Scopus) and sources of grey literature from inception through March 2012 for published and unpublished studies, without language restrictions. We included randomized controlled trials (RCTs), cohort studies and case–control studies that reported incident bladder cancer among people with type 2 diabetes who ever (v. never) were exposed to pioglitazone (main outcome), rosiglitazone or any thiazolidinedione.

Results:

Of the 1787 studies identified, we selected 4 RCTs, 5 cohort studies and 1 case–control study. The total number of patients was 2 657 365, of whom 3643 had newly diagnosed bladder cancer, for an overall incidence of 53.1 per 100 000 person-years. The one RCT that reported on pioglitazone use found no significant association with bladder cancer (risk ratio [RR] 2.36, 95% confidence interval [CI] 0.91–6.13). The cohort studies of thiazolidinediones (pooled RR 1.15, 95% CI 1.04–1.26; I2 = 0%) and of pioglitazone specifically (pooled RR 1.22, 95% CI 1.07–1.39; I2 = 0%) showed significant associations with bladder cancer. No significant association with bladder cancer was observed in the two RCTs that evaluated rosiglitazone use (pooled RR 0.87, 95% CI 0.34–2.23; I2 = 0%).

Interpretation:

The limited evidence available supports the hypothesis that thiazolidinediones, particularly pioglitazone, are associated with an increased risk of bladder cancer among adults with type 2 diabetes.

People with type 2 diabetes are at increased risk of several types of cancer, including a 40% increased risk of bladder cancer, compared with those without diabetes.1,2 The strong association with bladder cancer is hypothesized to be a result of hyperinsulinemia, whereby elevated insulin levels in type 2 diabetes stimulate insulin receptors on neoplastic cells, promoting cancer growth and division.1,3–5 Additional risk factors for bladder cancer include increased age, male sex, smoking, occupational and environmental exposures and urinary tract disease.6 Exogenous insulin and other glucose-lowering medications, such as sulfonylureas, metformin and thiazolidinediones, may further modify the risk of bladder cancer.1

Data from the placebo-controlled PROactive trial of pioglitazone (PROspective pioglitAzone Clinical Trial in macroVascular Events) suggested a higher incidence of bladder cancer among pioglitazone users than among controls.7 Subsequent randomized controlled trials (RCTs) and observational studies have reported conflicting results for pioglitazone, with various studies reporting a significant increase,8,9 a nonsignificant increase10 and even a decreased risk11 of bladder cancer.

To test the hypothesis that pioglitazone use is associated with an increased risk of bladder cancer, we conducted a systematic review and meta-analysis of RCTs and observational studies reporting bladder cancer among adults with type 2 diabetes taking pioglitazone. To clarify the possibility of a drug-class effect, we also examined data for all thiazolidinediones and for rosiglitazone alone.

11.

Background:

For every patient with chronic kidney disease who undergoes renal-replacement therapy, there is one patient who undergoes conservative management of their disease. We aimed to determine the most important characteristics of dialysis and the trade-offs patients were willing to make in choosing dialysis instead of conservative care.

Methods:

We conducted a discrete choice experiment involving adults with stage 3–5 chronic kidney disease from eight renal clinics in Australia. We assessed the influence of treatment characteristics (life expectancy, number of visits to the hospital per week, ability to travel, time spent undergoing dialysis [i.e., time spent attached to a dialysis machine per treatment, measured in hours], time of day at which treatment occurred, availability of subsidized transport and flexibility of the treatment schedule) on patients’ preferences for dialysis versus conservative care.

Results:

Of 151 patients invited to participate, 105 completed our survey. Patients were more likely to choose dialysis than conservative care if dialysis involved an increased average life expectancy (odds ratio [OR] 1.84, 95% confidence interval [CI] 1.57–2.15), if they were able to dialyse during the day or evening rather than during the day only (OR 8.95, 95% CI 4.46–17.97), and if subsidized transport was available (OR 1.55, 95% CI 1.24–1.95). Patients were less likely to choose dialysis over conservative care if an increase in the number of visits to hospital was required (OR 0.70, 95% CI 0.56–0.88) and if there were more restrictions on their ability to travel (OR 0.47, 95% CI 0.36–0.61). Patients were willing to forgo 7 months of life expectancy to reduce the number of required visits to hospital and 15 months of life expectancy to increase their ability to travel.

Interpretation:

Patients approaching end-stage kidney disease are willing to trade considerable life expectancy to reduce the burden and restrictions imposed by dialysis.

Stage 5 chronic kidney disease is a major health issue worldwide and has a mortality rate that exceeds that of many cancers.1,2 The treatment options for stage 5 (i.e., end-stage) kidney disease include dialysis, kidney transplantation and supportive nondialytic treatment (conservative care). A national report by the Australian Institute of Health and Welfare estimates that for every patient with chronic kidney disease who undergoes dialysis or transplantation, there is one other patient whose disease is managed conservatively.3

Conservative care includes the multidisciplinary management of uremic symptoms through diet and medications, such as erythropoietin and diuretics, as well as psychosocial support and eventual palliative care. The reported median survival with conservative care for end-stage kidney disease is between 6 and 32 months. For some patients, particularly the elderly and those with ischemic heart disease, this period may be equal to or greater than their expected survival with dialysis.4–7 Dialysis usually prolongs life, but it can impose a substantial burden on patients and their families and may be associated with a reduction in quality of life. The decision to start dialysis thus involves an assessment of both the evidence-based outcomes for the population in question and the preferences of the individual patient.

Incorporating patient preferences for treatment of stage 5 chronic kidney disease is recommended in clinical guidelines;8 however, little is known about the trade-offs that patients are willing to consider when choosing between dialysis and conservative care. Discrete choice experiments are used to quantify patient preferences. 
These experiments are grounded in economic theory9,10 and allow measurement of the strength of patients’ preferences for different characteristics of treatment and the trade-offs involved. Real-world decisions are closely simulated through the simultaneous consideration of all treatment characteristics.11 Discrete choice experiments are a valid and reliable approach to eliciting preferences for health care12–14 and have been used to measure the preferences of patients with chronic kidney disease in terms of organ donation and allocation, and end-of-life care.15

Knowing patients’ preferences for the treatment of stage 5 chronic kidney disease is necessary to plan appropriate health care services and enhance the quality of care. With this study, we aimed to quantify the extent to which the characteristics of dialysis influence patient preferences for treatment and to assess the trade-offs patients were willing to make between these characteristics.
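The trade-off figures reported above can be recovered from the odds ratios: in a discrete choice experiment, the marginal rate of substitution between two attributes is the ratio of their log-odds coefficients. A minimal sketch, assuming the life-expectancy attribute was coded per additional year of average life expectancy (an assumption; the abstract does not state the coding):

```python
import math

# Log-odds coefficients recovered from the odds ratios in the Results.
# Assumption: the life-expectancy attribute is per extra year; the other
# two are per additional hospital visit and per added travel restriction.
beta_life = math.log(1.84)    # utility of one extra year of life expectancy
beta_visits = math.log(0.70)  # disutility of one additional hospital visit
beta_travel = math.log(0.47)  # disutility of one added travel restriction

def months_traded(beta_attr: float, beta_life: float) -> float:
    """Months of life expectancy forgone to avoid one unit of the attribute
    (marginal rate of substitution, converted from years to months)."""
    return -beta_attr / beta_life * 12

print(round(months_traded(beta_visits, beta_life)))  # -> 7
print(round(months_traded(beta_travel, beta_life)))  # -> 15
```

Under the per-year assumption, the computed ratios reproduce the trade-offs reported in the abstract (about 7 months to reduce hospital visits and 15 months to increase ability to travel).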

12.

Background:

Although guidelines advise titration of palliative sedation at the end of life, in practice the depth of sedation can range from mild to deep. We investigated physicians’ considerations about the depth of continuous sedation.

Methods:

We performed a qualitative study in which 54 physicians underwent semistructured interviewing about the last patient for whom they had been responsible for providing continuous palliative sedation. We also asked about their practices and general attitudes toward sedation.

Results:

We found two approaches toward the depth of continuous sedation: starting with mild sedation and only increasing the depth if necessary, and deep sedation right from the start. Physicians described similar determinants for both approaches, including titration of sedatives to the relief of refractory symptoms, patient preferences, wishes of relatives, expert advice and esthetic consequences of the sedation. However, physicians who preferred starting with mild sedation emphasized being guided by the patient’s condition and response, and physicians who preferred starting with deep sedation emphasized ensuring that relief of suffering would be maintained. Physicians who preferred each approach also differed in their perspectives on whether communication with the patient was important and whether it was problematic for patients to wake after sedation had started.

Interpretation:

Physicians who choose either mild or deep sedation appear to be guided by the same objective of delivering sedation in proportion to the relief of refractory symptoms, as well as other needs of patients and their families. This suggests that proportionality should be seen as a multidimensional notion that can result in different approaches toward the depth of sedation.

Palliative sedation is considered to be an appropriate option when other treatments fail to relieve suffering in dying patients.1,2 There are important questions associated with this intervention, such as how deep the sedation must be to relieve suffering and how important it is for patients and their families for the patient to maintain a certain level of consciousness.1 In the national guidelines for the Netherlands, palliative sedation is defined as “the intentional lowering of consciousness of a patient in the last phase of life.”3,4 Sedatives can be administered intermittently or continuously, and the depth of palliative sedation can range from mild to deep.1,5

Continuous deep sedation until death is considered the most far-reaching and controversial type of palliative sedation. Nevertheless, it is used frequently: comparable nationwide studies in Europe show frequencies of 2.5% to 16% of all deaths.6–8 An important reason for continuous deep sedation being thought of as controversial is the possible association of this practice with the hastening of death,9–11 although it is also argued that palliative sedation does not shorten life when its use is restricted to the patient’s last days of life.12,13 Guidelines for palliative sedation often advise physicians to titrate sedatives,2,3,14 which means that the dosages of sedatives are adjusted to the level needed for proper relief of symptoms. To date, research has predominantly focused on the indications and type of medications used for sedation. 
In this study, we investigated how physicians decide the depth of continuous palliative sedation and how these decisions relate to guidelines.

13.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain.

Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences for patients’ daily living.1–3 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients following these surgeries.5–23 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.1–3,24

Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications or level of opioid consumption after surgery.5–7,9,13,14,16–19,21–23,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None of the studies examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29

We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.

14.
Background:

Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.

Methods:

We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.

Results:

A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies resulted in lack of statistical significance despite potentially clinically important effects.

Interpretation:

Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective interventions in reducing the use of imaging for low-back pain. 
These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.

Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.1–3 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7

Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.9–12 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, with an estimated 65% increase in spine-related expenditures between 1997 and 2005.14

Various interventions have been tried for reducing imaging rates among people with low-back pain. These include strategies targeted at the practitioner, such as guideline dissemination,15–17 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.22–24 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.

15.
16.

Background:

Use of the serum creatinine concentration, the most widely used marker of kidney function, has been associated with under-reporting of chronic kidney disease and late referral to nephrologists, especially among women and elderly people. To improve appropriateness of referrals, automatic reporting of the estimated glomerular filtration rate (eGFR) by laboratories was introduced in the province of Ontario, Canada, in March 2006. We hypothesized that such reporting, along with an ad hoc educational component for primary care physicians, would increase the number of appropriate referrals.

Methods:

We conducted a population-based before–after study with interrupted time-series analysis at a tertiary care centre. All referrals to nephrologists received at the centre during the year before and the year after automatic reporting of the eGFR was introduced were eligible for inclusion. We used regression analysis with autoregressive errors to evaluate whether such reporting by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.

Results:

A total of 2672 patients were included in the study. In the year after automatic reporting began, the number of referrals from primary care physicians increased by 80.6% (95% confidence interval [CI] 74.8% to 86.9%). The number of appropriate referrals increased by 43.2% (95% CI 38.0% to 48.2%). There was no significant change in the proportion of appropriate referrals between the two periods (−2.8%, 95% CI −26.4% to 43.4%). The proportion of elderly and female patients who were referred increased after reporting was introduced.

Interpretation:

The total number of referrals increased after automatic reporting of the eGFR began, especially among women and elderly people. The number of appropriate referrals also increased, but the proportion of appropriate referrals did not change significantly. Future research should be directed to understanding the reasons for inappropriate referral and to developing novel interventions for improving the referral process.

Until recently, the serum creatinine concentration was used universally as an index of the glomerular filtration rate (GFR) to identify and monitor chronic kidney disease.1 The serum creatinine concentration depends on several factors, the most important being muscle mass.1 Women as compared with men, and elderly people as compared with young adults, tend to have lower muscle mass for the same degree of kidney function and thus have lower serum creatinine concentrations.2,3 Consequently, the use of the serum creatinine concentration is associated with underrecognition of chronic kidney disease, delayed workup for chronic kidney disease and late referral to nephrologists, particularly among women and elderly people. Late referral has been associated with increased mortality among patients receiving dialysis.3–11

In 1999, the Modification of Diet in Renal Disease formula was introduced to calculate the estimated GFR (eGFR).12,13 This formula uses the patient’s serum creatinine concentration, age, sex and race (whether the patient is black or not). All of these variables are easily available to laboratories except race. Laboratories report the eGFR for non-black people, with advice to practitioners to multiply the result by 1.21 if their patient is black. 
Given that reporting of the eGFR markedly improves detection of chronic kidney disease,14,15 several national organizations recommended that laboratories automatically calculate and report the eGFR when the serum creatinine concentration is requested.16–19 These organizations also provided guidelines on appropriate referral to nephrology based on the eGFR value.

Although several studies have reported increases in referrals to nephrologists after automatic reporting of the eGFR was introduced,20–26 there is limited evidence on the impact that such reporting has had on the appropriateness of referrals. An increase in the number of inappropriate referrals would affect health care delivery, diverting scarce resources to the evaluation of relatively mild kidney disease. It would also likely increase wait times for all nephrology referrals and have a financial impact on the system, because specialist care is more costly than primary care.

We conducted a study to evaluate whether the introduction of automatic reporting of the eGFR by laboratories, along with ad hoc educational activities for primary care physicians, had an impact on the number and appropriateness of referrals to nephrologists.
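The calculation described above can be sketched as follows. This is a minimal illustration, not a clinical tool: only the 1.21 race multiplier is taken from the text, while the remaining coefficients (186, the −1.154 and −0.203 exponents, and the 0.742 female factor) are the commonly published values for the original 4-variable MDRD equation (laboratories using IDMS-standardized creatinine substitute 175 for 186).

```python
def egfr_mdrd(scr_mg_dl: float, age: int, female: bool, black: bool = False) -> float:
    """4-variable MDRD estimate of GFR in mL/min per 1.73 m^2.

    Inputs: serum creatinine (mg/dL), age (years), sex and race,
    mirroring the variables listed in the text.
    """
    egfr = 186.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742   # published female adjustment
    if black:
        egfr *= 1.21    # race multiplier noted in the text
    return egfr

# Example: a 60-year-old non-black man with serum creatinine 1.0 mg/dL
print(round(egfr_mdrd(1.0, 60, female=False)))  # roughly 81
```

Laboratories compute the non-black value automatically because race is the one variable they cannot see; the practitioner applies the 1.21 factor when appropriate.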

17.

Background:

Results of randomized controlled trials evaluating zinc for the treatment of the common cold are conflicting. We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for such use.

Methods:

We searched electronic databases and other sources for studies published through to Sept. 30, 2011. We included all randomized controlled trials comparing orally administered zinc with placebo or no treatment. Assessment for study inclusion, data extraction and risk-of-bias analyses were performed in duplicate. We conducted meta-analyses using a random-effects model.

Results:

We included 17 trials involving a total of 2121 participants. Compared with patients given placebo, those receiving zinc had a shorter duration of cold symptoms (mean difference −1.65 days, 95% confidence interval [CI] −2.50 to −0.81); however, heterogeneity was high (I2 = 95%). Zinc shortened the duration of cold symptoms in adults (mean difference −2.63, 95% CI −3.69 to −1.58), but no significant effect was seen among children (mean difference −0.26, 95% CI −0.78 to 0.25). Heterogeneity remained high in all subgroup analyses, including by age, dose of ionized zinc and zinc formulation. Any adverse event (risk ratio [RR] 1.24, 95% CI 1.05 to 1.46), bad taste (RR 1.65, 95% CI 1.27 to 2.16) and nausea (RR 1.64, 95% CI 1.19 to 2.27) were all more common in the zinc group than in the placebo group.

Interpretation:

The results of our meta-analysis showed that oral zinc formulations may shorten the duration of symptoms of the common cold. However, large high-quality trials are needed before definitive recommendations for clinical practice can be made. Adverse effects were common and should be a focus of future study, because a good safety and tolerance profile is essential when treating this generally mild illness.

The common cold is a frequent respiratory infection experienced 2 to 4 times a year by adults and up to 8 to 10 times a year by children.1–3 Colds can be caused by several viruses, of which rhinoviruses are the most common.4 Despite their benign nature, colds can lead to substantial morbidity, absenteeism and lost productivity.5–7

Zinc, which can inhibit rhinovirus replication and has activity against other respiratory viruses such as respiratory syncytial virus,8 is a potential treatment for the common cold. The exact mechanism of zinc’s activity on viruses remains uncertain. Zinc may also reduce the severity of cold symptoms by acting as an astringent on the trigeminal nerve.9,10

A recent meta-analysis of randomized controlled trials concluded that zinc was effective at reducing the duration and severity of common cold symptoms.11 However, there was considerable heterogeneity reported for the primary outcome (I2 = 93%), and subgroup analyses to explore between-study variations were not performed. The efficacy of zinc therefore remains uncertain, because it is unknown whether the variability among studies was due to methodologic diversity (i.e., risk of bias and therefore uncertainty in zinc’s efficacy) or differences in study populations or interventions (i.e., zinc dose and formulation).

We conducted a systematic review and meta-analysis to evaluate the efficacy and safety of zinc for the treatment of the common cold. 
We sought to improve upon previous systematic reviews11–17 by exploring the heterogeneity with subgroups identified a priori, identifying new trials by instituting a broader search and obtaining additional data from authors.
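The random-effects pooling and I2 heterogeneity statistics reported in the Results can be sketched with the DerSimonian–Laird method; the effect sizes and standard errors below are hypothetical, not data from the review:

```python
import math

def pool_random_effects(effects, ses):
    """DerSimonian-Laird random-effects pooling.

    effects: per-study effect estimates (e.g., mean differences in days)
    ses: their standard errors
    Returns (pooled effect, pooled SE, I^2 as a percentage).
    """
    w = [1 / se**2 for se in ses]                       # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # heterogeneity
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (se**2 + tau2) for se in ses]            # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, i2

# Three hypothetical trials of symptom duration (mean difference in days)
pooled, se, i2 = pool_random_effects([-2.1, -0.3, -1.8], [0.4, 0.3, 0.5])
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled {pooled:.2f} d, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I2 {i2:.0f}%")
```

A high I2 (such as the 95% reported for the primary outcome) inflates tau2 and widens the pooled confidence interval, which is why the subgroup analyses described above matter.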

18.

Background:

The role of atrial fibrillation in cognitive impairment and dementia, independent of stroke, is uncertain. We sought to determine the association of atrial fibrillation with cognitive and physical impairment in a large group of patients at high cardiovascular risk.

Methods:

We conducted a post hoc analysis of two randomized controlled trials involving 31 546 patients, the aims of which were to evaluate the efficacy of treatment with ramipril plus telmisartan (ONTARGET) or telmisartan alone (TRANSCEND) in reducing cardiovascular disease. We evaluated the cognitive function of participants at baseline and after two and five years using the Mini–Mental State Examination (MMSE). In addition, we recorded incident dementia, loss of independence in activities of daily living and admission to long-term care facilities. We used a Cox regression model adjusting for main confounders to determine the association between atrial fibrillation and our primary outcomes: a decrease of three or more points in MMSE score, incident dementia, loss of independence in performing activities of daily living and admission to long-term care.

Results:

We enrolled 31 506 participants for whom complete information on atrial fibrillation was available, 70.4% of whom were men. The mean age of participants was 66.5 years, and the mean baseline MMSE score was 27.7 (standard deviation 2.9) points. At baseline, 1016 participants (3.3%) had atrial fibrillation, with the condition developing in an additional 2052 participants (6.5%) during a median follow-up of 56 months. Atrial fibrillation was associated with an increased risk of cognitive decline (hazard ratio [HR] 1.14, 95% confidence interval [CI] 1.03–1.26), new dementia (HR 1.30, 95% CI 1.14–1.49), loss of independence in performing activities of daily living (HR 1.35, 95% CI 1.19–1.54) and admission to long-term care facilities (HR 1.53, 95% CI 1.31–1.79). Results were consistent among participants with and without stroke or receiving antihypertensive drugs.

Interpretation:

Cognitive and functional decline are important consequences of atrial fibrillation, even in the absence of overt stroke.

Atrial fibrillation is an important and modifiable cause of ischemic stroke, which may result in considerable physical and cognitive disability.1 In addition, atrial fibrillation is associated with an increased risk of covert cerebral infarction, which is reported in about one-quarter of patients with atrial fibrillation who undergo magnetic resonance imaging of the brain.2 Thus, atrial fibrillation may be an important determinant of cognitive and functional decline, even in the absence of clinical ischemic stroke. However, previous epidemiologic studies evaluating the association of atrial fibrillation with cognitive function have been inconsistent,3–13 and very few have evaluated its association with functional outcomes.14

A recent systematic review showed convincing evidence of an association between atrial fibrillation and dementia in patients with a history of stroke, but it concluded that there was considerable uncertainty about a link between atrial fibrillation and dementia in patients with no history of stroke.15 Large prospective cohort studies are required to determine whether a true association exists between atrial fibrillation and cognitive outcomes.

In this study, we sought to determine the prospective association between atrial fibrillation and cognitive decline, loss of independence in activities of daily living and admission to long-term care facilities, using data from the large group of patients included in the ONTARGET and TRANSCEND trials.16,17
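A quick way to read the hazard ratios reported above: a 95% confidence interval that excludes 1.0 indicates significance at the 5% level. A small sketch checking each reported outcome against that rule (the numbers are those quoted in the abstract):

```python
# Hazard ratios (HR, 95% CI lower, upper) reported for atrial fibrillation.
outcomes = {
    "cognitive decline": (1.14, 1.03, 1.26),
    "new dementia": (1.30, 1.14, 1.49),
    "loss of independence in ADLs": (1.35, 1.19, 1.54),
    "long-term care admission": (1.53, 1.31, 1.79),
}

def ci_excludes_null(lo: float, hi: float, null: float = 1.0) -> bool:
    """True when the confidence interval excludes the null value,
    i.e. the association is statistically significant at the 5% level."""
    return lo > null or hi < null

for name, (hr, lo, hi) in outcomes.items():
    print(f"{name}: HR {hr}, significant = {ci_excludes_null(lo, hi)}")
```

All four intervals lie entirely above 1.0, consistent with the abstract's conclusion that each outcome was significantly more frequent among patients with atrial fibrillation.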

19.

Background:

Evidence from observational studies has raised the possibility that statin treatment reduces the incidence of certain bacterial infections, particularly pneumonia. We analyzed data from a randomized controlled trial of rosuvastatin to examine this hypothesis.

Methods:

We analyzed data from the randomized, double-blind, placebo-controlled JUPITER trial (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin). In this trial, 17 802 healthy participants (men 50 years of age and older, women 60 years and older) with a low-density lipoprotein (LDL) cholesterol level below 130 mg/dL (3.4 mmol/L) and a high-sensitivity C-reactive protein level of 2.0 mg/L or greater were randomly assigned to receive either rosuvastatin or placebo. We evaluated the incidence of pneumonia on an intention-to-treat basis by reviewing reports of adverse events from the study investigators, who were unaware of the treatment assignments.

Results:

Among 17 802 trial participants followed for a median of 1.9 years, incident pneumonia was reported as an adverse event in 214 participants in the rosuvastatin group and 257 in the placebo group (hazard ratio [HR] 0.83, 95% confidence interval [CI] 0.69–1.00). In analyses restricted to events occurring before a cardiovascular event, pneumonia occurred in 203 participants given rosuvastatin and 250 given placebo (HR 0.81, 95% CI 0.67–0.97). Inclusion of recurrent pneumonia events did not modify this effect (HR 0.81, 95% CI 0.67–0.98), nor did adjustment for age, sex, smoking, metabolic syndrome, lipid levels and C-reactive protein level.

Interpretation:

Data from this randomized controlled trial support the hypothesis that statin treatment may modestly reduce the incidence of pneumonia. (ClinicalTrials.gov trial register no. NCT00239681.)

Randomized trials of statin treatment have consistently shown reductions in the incidence of cardiovascular events.1 In addition to these proven vascular effects, several observational studies have raised the possibility that statins reduce the incidence and severity of certain bacterial infections,2–5 particularly pneumonia.6–9 Mechanistic support for this hypothesis is provided in part by laboratory evidence that statins, in addition to substantially lowering low-density lipoprotein (LDL) cholesterol levels, have favourable effects on inflammation, apoptosis, antioxidant balance and endothelial function.10 However, a common confounder in these observational studies is that statin treatment may be a nonspecific marker of improved quality of care (the healthy-user effect).11,12 In addition, because infections such as pneumonia are a common complication of myocardial infarction and stroke, any beneficial effect of statin treatment on pneumonia and other infections reported in observational studies may have been due simply to a reduction in these vascular events.

We reviewed data from the recently completed JUPITER trial (Justification for the Use of Statins in Prevention: an Intervention Trial Evaluating Rosuvastatin), a randomized controlled trial involving more than 17 000 men and women randomly assigned to receive either rosuvastatin or placebo, to examine the possibility that statins may reduce the incidence of pneumonia.
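With 1:1 randomization and similar follow-up in both arms, the crude ratio of event counts approximates the hazard ratio. A back-of-envelope check (it ignores censoring and event timing, so it is not the trial's actual Cox estimate) reproduces the reported HRs from the counts above:

```python
# Incident pneumonia events reported in JUPITER (median 1.9 y follow-up,
# ~8 900 participants per arm after 1:1 randomization).
events_rosuvastatin = 214
events_placebo = 257

# Crude event-count ratio as a stand-in for the hazard ratio.
crude_hr = events_rosuvastatin / events_placebo
print(round(crude_hr, 2))  # → 0.83, matching the reported HR

# Same check for events restricted to before a cardiovascular event.
crude_hr_restricted = 203 / 250
print(round(crude_hr_restricted, 2))  # → 0.81, matching the reported HR
```

That the crude ratios agree with the adjusted estimates to two decimal places reflects the balance that randomization provides between arms.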

20.
Manea L, Gilbody S, McMillan D. CMAJ 2012;184(3):E191–E196

Background:

The brief Patient Health Questionnaire (PHQ-9) is commonly used to screen for depression with 10 often recommended as the cut-off score. We summarized the psychometric properties of the PHQ-9 across a range of studies and cut-off scores to select the optimal cut-off for detecting depression.

Methods:

We searched Embase, MEDLINE and PsycINFO from 1999 to August 2010 for studies that reported the diagnostic accuracy of the PHQ-9 in diagnosing major depressive disorders. We calculated summary sensitivity, specificity, likelihood ratios and diagnostic odds ratios for detecting major depressive disorder at different cut-off scores and in different settings. We used random-effects bivariate meta-analysis at cut-off scores between 7 and 15 to produce summary receiver operating characteristic curves.

Results:

We identified 18 validation studies (n = 7180) conducted in various clinical settings. Eleven studies provided details about the diagnostic properties of the questionnaire at more than one cut-off score (including 10), four studies reported a cut-off score of 10, and three studies reported cut-off scores other than 10. The pooled specificity results ranged from 0.73 (95% confidence interval [CI] 0.63–0.82) for a cut-off score of 7 to 0.96 (95% CI 0.94–0.97) for a cut-off score of 15. There was major variability in sensitivity for cut-off scores between 7 and 15. There were no substantial differences in the pooled sensitivity and specificity for a range of cut-off scores (8–11).

Interpretation:

The PHQ-9 was found to have acceptable diagnostic properties for detecting major depressive disorder at cut-off scores between 8 and 11. Authors of future validation studies should consistently report the outcomes for different cut-off scores.

Depressive disorders remain under-recognized in medical settings despite the major disability and costs associated with them. The use of short screening questionnaires may improve the recognition of depression in different medical settings.1 The depression module of the Patient Health Questionnaire (PHQ-9) has become increasingly popular in research and practice over the past decade.2 In its initial validation study, a score of 10 or higher had a sensitivity of 88% and a specificity of 88% for detecting major depressive disorders; thus, a score of 10 has been recommended as the cut-off for diagnosing this condition.3

In a recent review of the PHQ-9, Kroenke and colleagues argued against inflexible adherence to a single cut-off score.2 A recent analysis of the management of depression in general practice in the United Kingdom showed that the accuracy of predicting major depressive disorder could be improved by using 12 as the cut-off score.4

Given the widespread use of the PHQ-9 in screening for depression, and given that certain cut-off scores are being recommended as part of national screening strategies (based on initial validation studies, which might not be generalizable),4,5 we attempted to determine whether a cut-off of 10 is optimal for screening for depression. This question could not be answered by two previous systematic reviews6,7 because of the small number of primary studies available at the time. We also aimed to provide greater clarity about the proper use of the PHQ-9 given the many settings in which it is used.
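The likelihood ratios and diagnostic odds ratio that the review pools are simple functions of sensitivity and specificity. A sketch using the 88%/88% figures quoted above from the PHQ-9's initial validation study:

```python
def likelihood_ratios(sens: float, spec: float) -> tuple[float, float]:
    """Positive and negative likelihood ratios from sensitivity and
    specificity: LR+ = sens / (1 - spec), LR- = (1 - sens) / spec."""
    return sens / (1 - spec), (1 - sens) / spec

def diagnostic_odds_ratio(sens: float, spec: float) -> float:
    """DOR = LR+ / LR-: a single summary of discriminative ability."""
    lr_pos, lr_neg = likelihood_ratios(sens, spec)
    return lr_pos / lr_neg

# PHQ-9 at a cut-off of 10 in its initial validation: sens 0.88, spec 0.88.
lr_pos, lr_neg = likelihood_ratios(0.88, 0.88)
print(round(lr_pos, 2))   # → 7.33
print(round(lr_neg, 2))   # → 0.14
print(round(diagnostic_odds_ratio(0.88, 0.88), 1))  # → 53.8
```

An LR+ above 7 and an LR- near 0.14 make a positive or negative screen genuinely informative, which is why the cut-off chosen for the PHQ-9 matters in practice.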
