Similar Articles
1.

Background

The use of proton pump inhibitors has been associated with an increased risk of hip fracture. We sought to further explore the relation between duration of exposure to proton pump inhibitors and osteoporosis-related fractures.

Methods

We used administrative claims data to identify patients with a fracture of the hip, vertebra or wrist between April 1996 and March 2004. Cases were each matched with 3 controls based on age, sex and comorbidities. We calculated adjusted odds ratios (OR) for the risk of hip fracture and all osteoporosis-related fractures for durations of proton pump inhibitor exposure ranging from 1 or more years to more than 7 years.
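The adjusted ORs reported below come from a matched, covariate-adjusted model. For intuition only, the core odds-ratio arithmetic can be illustrated with a crude 2x2 calculation; the counts and the Wald interval in this Python sketch are hypothetical and are not the study's analysis:

```python
import numpy as np
from scipy import stats

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Crude odds ratio and Wald CI for a 2x2 table.

    a: exposed cases, b: unexposed cases,
    c: exposed controls, d: unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)    # SE of log(OR)
    z = stats.norm.ppf(1 - alpha / 2)          # 1.96 for a 95% CI
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, lo, hi

print(odds_ratio_ci(a=30, b=120, c=45, d=405))  # hypothetical counts
```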

Results

We matched 15 792 cases of osteoporosis-related fractures with 47 289 controls. We did not detect a significant association between the overall risk of an osteoporotic fracture and the use of proton pump inhibitors for durations of 6 years or less. However, exposure of 7 or more years was associated with an increased risk of an osteoporosis-related fracture (adjusted OR 1.92, 95% confidence interval [CI] 1.16–3.18, p = 0.011). We also found an increased risk of hip fracture after 5 or more years of exposure (adjusted OR 1.62, 95% CI 1.02–2.58, p = 0.04), with even higher risk after 7 or more years of exposure (adjusted OR 4.55, 95% CI 1.68–12.29, p = 0.002).

Interpretation

Use of proton pump inhibitors for 7 or more years is associated with a significantly increased risk of an osteoporosis-related fracture. There is an increased risk of hip fracture after 5 or more years of exposure. Further study is required to determine the clinical importance of this finding and to determine the value of osteoprotective medications for patients with long-term use of proton pump inhibitors.

Osteoporosis is a common condition throughout the developed world, affecting up to 16% of women and 7% of men aged 50 years and older.1 The presence of underlying osteoporosis is a major risk factor for the development of fractures of the hip, proximal femur, spinal vertebra and forearm. In 2000, the estimated number of people with fractures worldwide was 56 million, and about 9 million new osteoporotic fractures occur each year.2 In 1993/94, the number of hip fractures in Canada was 23 375.3 This number is predicted to increase to 88 124 by the year 2041, with a parallel increase in the number of days in hospital (465 000 patient-days in 1993/94 to 1.8 million in 2041).3 Moreover, the case-fatality rate for hip fractures can exceed 20%,4 and all osteoporosis-related fractures can lead to substantial long-term disability and decreased quality of life.5

Many risk factors for the development of osteoporosis-related fracture have been identified, including white ethnic background, low body mass index, physical inactivity and female sex.6–8 There are also a number of medication classes, including corticosteroids and selective serotonin reuptake inhibitors, whose use has been linked to higher rates of osteoporosis.9–11 Furthermore, any condition or drug that increases the risk of falls and injury also increases the risk of an osteoporosis-related fracture.12,13

One medication class that may affect bone mineral metabolism is proton pump inhibitors. This class of drugs inhibits the production and intragastric secretion of hydrochloric acid, which is believed to be an important mediator of calcium absorption in the small intestine.14 Recent studies have suggested that the use of proton pump inhibitors for 1 or more years is associated with hip fracture and other osteoporotic fractures; however, there are limited data on additional risk beyond 4 years of exposure.15,16

Because proton pump inhibitors are commonly prescribed to control and prevent symptoms of chronic unrelenting conditions, it is likely that many patients will use these medications for more than 4 years. Therefore, we used an administrative database to examine the effects of longer durations of proton pump inhibitor use on the development of osteoporosis-related fractures.

2.

Background

Medication-related visits to the emergency department are an important but poorly understood phenomenon. We sought to evaluate the frequency, severity and preventability of drug-related visits to the emergency department.

Methods

We performed a prospective observational study of randomly selected adults presenting to the emergency department over a 12-week period. Emergency department visits were identified as drug-related on the basis of assessment by a pharmacist research assistant and an emergency physician; discrepancies were adjudicated by 2 independent reviewers.

Results

Among the 1017 patients included in the study, the emergency department visit was identified as drug-related for 122 patients (12.0%, 95% confidence interval [CI] 10.1%–14.2%); of these, 83 visits (68.0%, 95% CI 59.0%–76.2%) were deemed preventable. Severity was classified as mild in 15.6% of the 122 cases, moderate in 74.6% and severe in 9.8%. The most common reasons for drug-related visits were adverse drug reactions (39.3%), nonadherence (27.9%) and use of the wrong or suboptimal drug (11.5%). The probability of admission was significantly higher among patients who had a drug-related visit than among those whose visit was not drug-related (OR 2.18, 95% CI 1.46–3.27, p < 0.001), and among those admitted, the median length of stay was longer (8.0 [interquartile range 23.5] v. 5.5 [interquartile range 10.0] days, p = 0.06).
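The 12.0% (95% CI 10.1%–14.2%) figure is a simple binomial proportion and can be approximately reproduced with statsmodels; the Wilson method here is an assumption, since the abstract does not name the interval the authors used:

```python
from statsmodels.stats.proportion import proportion_confint

drug_related, n = 122, 1017          # counts reported in the abstract
p = drug_related / n                 # 0.120 -> 12.0%
lo, hi = proportion_confint(drug_related, n, alpha=0.05, method="wilson")
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}%-{100*hi:.1f}%)")
```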

Interpretation

More than 1 in 9 emergency department visits are due to drug-related adverse events, a potentially preventable problem in our health care system.

Adverse drug-related events are unfavourable occurrences related to the use or misuse of medications.1 It has been estimated that such events account for 17 million emergency department visits and 8.7 million hospital admissions annually in the United States.2,3 Between 1995 and 2000, costs associated with adverse drug-related events rose from US$76.6 billion to over US$177.4 billion.3,4

Adverse drug-related events have recently been evaluated in ambulatory care settings and among patients admitted to hospital,5–9 and it has been estimated that 5%–25% of hospital admissions are drug-related.7,8 Unfortunately, emergency department visits are not reflected in most hospital studies, because patients seen in the emergency department for an adverse drug-related event are typically not admitted.10 In addition, most research evaluating drug-related visits to the emergency department has involved retrospective studies or analysis of administrative data.11–13 Retrospective studies may underestimate the incidence of drug-related visits because information may be missing or inaccurately documented.14 Finally, studies performed to date have used variable definitions of “drug-related events,”1,10 which limits comparative evaluation and generalizability.

Despite the burden of drug-related morbidity and mortality, prospective research characterizing drug-related visits to the emergency department has been limited.15–17 We sought to overcome some of the limitations of research in this area by using a prospective design and a comprehensive definition of adverse drug-related events. The purpose of this study was to evaluate the frequency, severity and preventability of drug-related visits to the emergency department of a large tertiary care hospital, to classify the visits by type of drug-related problem and to identify patient, prescriber, drug and system factors associated with these visits.

3.

Background

The presumed superiority of newer fluoroquinolones for the treatment of acute bacterial sinusitis is based on laboratory data but has not yet been established on clinical grounds.

Methods

We performed a meta-analysis of randomized controlled trials comparing the effectiveness and safety of fluoroquinolones and β-lactams in acute bacterial sinusitis.
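For intuition about how per-trial 2x2 outcomes are combined into a pooled OR, here is a minimal fixed-effect Mantel-Haenszel sketch; the table layout and counts are hypothetical, and the published analysis may have weighted trials differently:

```python
import numpy as np

def mantel_haenszel_or(tables):
    """Fixed-effect Mantel-Haenszel pooled odds ratio.

    tables: list of (a, b, c, d) counts per trial, where a/b are
    successes/failures on fluoroquinolone and c/d successes/failures
    on beta-lactam (hypothetical layout).
    """
    t = np.asarray(tables, dtype=float)
    a, b, c, d = t.T
    n = a + b + c + d
    return np.sum(a * d / n) / np.sum(b * c / n)

print(mantel_haenszel_or([(180, 30, 170, 40), (95, 15, 90, 20)]))
```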

Results

We identified 8 randomized controlled trials investigating the newer “respiratory” fluoroquinolones moxifloxacin, levofloxacin and gatifloxacin. In the primary effectiveness analysis involving 2133 intention-to-treat patients from 5 randomized controlled trials, the extent of clinical cure and improvement did not differ between fluoroquinolones and β-lactams (odds ratio [OR] 1.09, 95% confidence interval [CI] 0.85–1.39) at the test-of-cure assessment, which varied from 10 to 31 days after the start of treatment. Fluoroquinolones were associated with an increased chance of clinical success among the clinically evaluable patients in all of the randomized controlled trials (OR 1.29, 95% CI 1.03–1.63) and in 4 blinded randomized controlled trials (OR 1.45, 95% CI 1.05–2.00). There was no statistically significant difference between fluoroquinolones and amoxicillin–clavulanate (OR 1.24, 95% CI 0.93–1.65). Eradication or presumed eradication of the pathogens isolated before treatment was more likely with fluoroquinolone treatment than with β-lactam treatment (OR 2.11, 95% CI 1.09–4.08). In the primary safety analysis, adverse events did not differ between treatments (OR 1.17, 95% CI 0.86–1.59). However, more adverse events occurred with fluoroquinolone use than with β-lactam use in 2 blinded randomized controlled trials. The associations described here were generally consistent when we included 3 additional studies involving other fluoroquinolones (ciprofloxacin and sparfloxacin) in the analysis.

Interpretation

In the treatment of acute bacterial sinusitis, newer fluoroquinolones conferred no benefit over β-lactam antibiotics. The use of fluoroquinolones as first-line therapy cannot be endorsed.

Acute bacterial sinusitis (more accurately known as rhinosinusitis, given that the nasal mucosa is commonly involved1) is one of the most frequent health disorders;2 it has an adverse impact on patients' quality of life3 and accounts for nearly 3 million ambulatory care visits in the United States annually4 and substantial health care costs.5

Acute bacterial sinusitis typically follows an episode of viral upper respiratory tract illness.2 The diagnosis of bacterial disease in routine clinical practice is usually based on the presence of a constellation of clinical manifestations.1 The bacterial pathogens most commonly involved are Streptococcus pneumoniae, Haemophilus influenzae and, to a lesser degree, Moraxella catarrhalis.6,7 Over the years, these pathogens have acquired various degrees of resistance to many traditional antibiotics.8

The benefit of older antibiotics over placebo in the treatment of acute bacterial sinusitis appears limited, mostly because of the high success rate achieved with placebo.9,10 However, newer, third- and fourth-generation fluoroquinolones possess excellent in vitro activity against the most common respiratory pathogens,11 and for this reason these drugs are often designated as “respiratory.” Based on analysis of the available laboratory data, current guidelines give the newer fluoroquinolones the highest ranking, in terms of expected clinical effectiveness, among the antimicrobials used to treat acute bacterial sinusitis (although admittedly the difference is marginal).7

The presumed clinical advantage of the respiratory fluoroquinolones over other classes of antimicrobials has not been clearly demonstrated in comparative clinical trials or meta-analyses.9,10 We aimed to comprehensively reassess the role of fluoroquinolones in the treatment of acute bacterial sinusitis, in terms of effectiveness and safety, by performing a meta-analysis of relevant randomized controlled trials.

4.
5.

Background

Patients undergoing hip or knee replacement are at high risk of developing a postoperative venous thromboembolism even after discharge from hospital. We sought to identify hospital and patient characteristics associated with receiving thromboprophylaxis after discharge and to compare the risk of short-term mortality among those who did or did not receive thromboprophylaxis.

Methods

We conducted a retrospective cohort study using system-wide hospital discharge summary records, physician billing information, medication reimbursement claims and demographic records. We included patients aged 65 years and older who received a hip or knee replacement and who were discharged home after surgery.

Results

In total, we included 10 744 patients: 7058 who received a hip replacement and 3686 who received a knee replacement. The mean age was 75.4 (standard deviation [SD] 6.8) years, and 38% of the patients were men. In total, 2059 (19%) patients received thromboprophylaxis at discharge. Patients discharged from university teaching hospitals were less likely than those discharged from community hospitals to receive thromboprophylaxis after discharge (odds ratio [OR] 0.89, 95% confidence interval [CI] 0.80–1.00). Patients were less likely to receive thromboprophylaxis after discharge if they had a longer hospital stay (15–30 days v. 1–7 days, OR 0.69, 95% CI 0.59–0.81). Patients were more likely to receive thromboprophylaxis if they had hip (v. knee) replacement, osteoarthritis, heart failure, atrial fibrillation or hypertension, higher (v. lower) income or if they were treated at medium-volume hospitals (69–116 hip and knee replacements per year). In total, 223 patients (2%) died in the 3-month period after discharge. The risk of short-term mortality was lower among those who received thromboprophylaxis after discharge (hazard ratio [HR] 0.34, 95% CI 0.20–0.57).
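The mortality result is a hazard ratio from a survival analysis. A toy sketch of fitting such a model with the lifelines library (an assumed dependency) on invented follow-up data; the study's covariates and modelling details are not given in the abstract:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical discharge cohort: follow-up capped at 90 days.
df = pd.DataFrame({
    "days_to_death_or_censor":  [60, 45, 90, 12, 90, 90],
    "died":                     [1, 1, 0, 1, 0, 0],
    "prophylaxis_at_discharge": [1, 0, 1, 0, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_death_or_censor", event_col="died")
print(cph.hazard_ratios_["prophylaxis_at_discharge"])  # HR estimate
```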

Interpretation

Fewer than 1 in 5 elderly patients discharged home after hip- or knee-replacement surgery received postdischarge thromboprophylaxis. Those prescribed these medications had a lower risk of short-term mortality. The benefits of and barriers to thromboprophylaxis therapy after discharge in this population require further study.

Venous thromboembolism is a leading cause of mortality among patients in hospital.1,2 Major orthopedic surgery (e.g., hip or knee replacement) is associated with a high risk for postoperative venous thromboembolism.1,3,4 Because the clinical diagnosis of venous thromboembolism is unreliable and its first manifestation may be a life-threatening pulmonary embolism,5 it is recommended that patients undergoing hip or knee replacement receive routine thromboprophylaxis with anticoagulant therapy after surgery unless they have contraindications to anticoagulant therapy.1,3,5,6

Thromboprophylaxis is commonly administered for the entire hospital stay, which is usually between 4 and 14 days.7 Expert consensus guidelines recommend that patients undergoing hip or knee replacement receive thromboprophylaxis medications for at least 10 days after surgery.6 These guidelines also recommend extended thromboprophylaxis for up to 28–35 days after surgery for patients undergoing hip replacement.6 Although there is evidence that extended thromboprophylaxis after hospital discharge is effective for reducing the risk of venous thromboembolism among patients who undergo hip replacement,8 the benefit among patients who undergo knee replacement has not been established.6 Thromboprophylaxis after discharge is likely to most benefit patients at high risk for venous thromboembolism, such as those with cancer, heart failure or major respiratory disease.6–9 However, given that patients who undergo joint replacement are often elderly and have multiple comorbidities, the risks associated with extended thromboprophylaxis, particularly gastrointestinal bleeding and hemorrhagic strokes, may be substantial and may be relative contraindications for this therapy.10

Among patients discharged home after hip- or knee-replacement surgery, we sought to characterize the use of thromboprophylaxis after discharge and its consequences on the risk of short-term mortality.

6.
7.

Background

Studies of the prevalence of asthma among migrating populations may help in identifying environmental risk factors.

Methods

We analyzed data from Vancouver, Canada, and from Guangzhou, Beijing and Hong Kong, China, collected during phase 3 of the International Study of Asthma and Allergies in Childhood. We subdivided the Vancouver adolescents according to whether they were Chinese immigrants to Canada, Canadian-born Chinese or Canadian-born non-Chinese. We compared the prevalence of asthma and wheezing among Chinese adolescents born in Canada, Chinese adolescents who had immigrated to Canada and Chinese adolescents living in China.

Results

Of 7794 Chinese adolescents who met the inclusion criteria, 3058 were from Guangzhou, 2824 were from Beijing, and 1912 were from Hong Kong. Of 2235 adolescents in Vancouver, Canada, 475 were Chinese immigrants, 617 were Canadian-born Chinese, and 1143 were Canadian-born non-Chinese. The prevalence of current wheezing among boys ranged from 5.9% in Guangzhou to 11.2% in Canadian-born Chinese adolescents. For girls, the range was 4.3% in Guangzhou to 9.8% in Canadian-born Chinese adolescents. The prevalence of ever having had asthma ranged from 6.6% to 16.6% for boys and from 2.9% to 15.0% for girls. Prevalence gradients persisted after adjustment for other environmental variables (odds ratios for ever having had asthma among Canadian-born Chinese compared with native Chinese in Guangzhou: 2.72 [95% confidence interval 1.75–4.23] for boys and 5.50 [95% confidence interval 3.21–9.44] for girls; p < 0.001 for both). Among Chinese adolescents living in Vancouver, the prevalence of ever wheezing increased with duration of residence, from 14.5% among those living in Canada for less than 7 years to 20.9% among those living their entire life in Canada. The same pattern was observed for the prevalence of ever having had asthma, from 7.7% to 15.9%.

Interpretation

Asthma symptoms in Chinese adolescents were lowest among residents of mainland China, were greater for those in Hong Kong and those who had immigrated to Canada, and were highest among those born in Canada. These findings suggest that environmental factors and duration of exposure influence asthma prevalence.

The prevalence of asthma symptoms exhibits large geographic variations, even among genetically similar groups,1,2 which suggests that differences may reflect variation in environmental factors. Epidemiologic studies have demonstrated associations between asthma and exposure to household allergens,3 pets,4 environmental tobacco smoke5 and environmental pollution,6 as well as sex,7 obesity,8 number of siblings and birth order,9 and maternal education.10 Increasing “westernization” of environmental factors (such as changes in maternal diet, smaller family size, fewer infections during infancy, increased use of antibiotics and vaccination, less exposure to rural environments and improved sanitation) has been associated with an increased risk of childhood asthma.11 Conversely, the hygiene hypothesis proposed by Strachan in 1989 suggested that infections and contact with older siblings may reduce the risk of allergic diseases.12

Migration studies examining children of the same ethnic background living in different environments for part or all of their lives may help to identify factors relevant to the development of diseases and may explain some of the observed geographic variations in prevalence. In the International Study of Asthma and Allergies in Childhood, prevalence rates for asthma in Canada were among the highest in the world, whereas those in China were among the lowest.2 This difference could reflect genetic or environmental factors. China has been and continues to be a major source of international migration.13,14 Of immigrants in Vancouver, Canada, who landed between 1985 and 2001, half were born in East Asia, mainly Hong Kong and mainland China.15 Few studies on the prevalence of asthma among immigrants have been undertaken in Canada,16 and data for Chinese people living in Canada are not available.

We hypothesized that the prevalence of asthma would be highest among Canadian-born Chinese adolescents, lower among Chinese adolescents who had immigrated to Canada and lowest among Chinese adolescents living in China. We further hypothesized that, among Chinese immigrants to Canada, prevalence rates of asthma would relate to duration of residence in Canada.

8.
9.
Highly active antiretroviral therapy (HAART) can reduce human immunodeficiency virus type 1 (HIV-1) viremia to clinically undetectable levels. Despite this dramatic reduction, some virus is present in the blood. In addition, a long-lived latent reservoir for HIV-1 exists in resting memory CD4+ T cells. This reservoir is believed to be a source of the residual viremia and is the focus of eradication efforts. Here, we use two measures of population structure—analysis of molecular variance and the Slatkin-Maddison test—to demonstrate that the residual viremia is genetically distinct from proviruses in resting CD4+ T cells but that proviruses in resting and activated CD4+ T cells belong to a single population. Residual viremia is genetically distinct from proviruses in activated CD4+ T cells, monocytes, and unfractionated peripheral blood mononuclear cells. The finding that some of the residual viremia in patients on HAART stems from an unidentified cellular source other than CD4+ T cells has implications for eradication efforts.

Successful treatment of human immunodeficiency virus type 1 (HIV-1) infection with highly active antiretroviral therapy (HAART) reduces free virus in the blood to levels undetectable by the most sensitive clinical assays (18, 36). However, HIV-1 persists as a latent provirus in resting, memory CD4+ T lymphocytes (6, 9, 12, 16, 48) and perhaps in other cell types (45, 52). The latent reservoir in resting CD4+ T cells represents a barrier to eradication because of its long half-life (15, 37, 40-42) and because specifically targeting and purging this reservoir is inherently difficult (8, 25, 27).

In addition to the latent reservoir in resting CD4+ T cells, patients on HAART also have a low amount of free virus in the plasma, typically at levels below the limit of detection of current clinical assays (13, 19, 35, 37). Because free virus has a short half-life (20, 47), residual viremia is indicative of active virus production. The continued presence of free virus in the plasma of patients on HAART indicates either ongoing replication (10, 13, 17, 19), release of virus after reactivation of latently infected CD4+ T cells (22, 24, 31, 50), release from other cellular reservoirs (7, 45, 52), or some combination of these mechanisms. Finding the cellular source of residual viremia is important because it will identify the cells that are still capable of producing virus in patients on HAART, cells that must be targeted in any eradication effort.

Detailed analysis of this residual viremia has been hindered by technical challenges involved in working with very low concentrations of virus (13, 19, 35). Recently, new insights into the nature of residual viremia have been obtained through intensive patient sampling and enhanced ultrasensitive sequencing methods (1). In a subset of patients, most of the residual viremia consisted of a small number of viral clones (1, 46) produced by a cell type severely underrepresented in the peripheral circulation (1). These unique viral clones, termed predominant plasma clones (PPCs), persist unchanged for extended periods of time (1). The persistence of PPCs indicates that in some patients there may be another major cellular source of residual viremia (1). However, PPCs were observed in a small group of patients who started HAART with very low CD4 counts, and it has been unclear whether the PPC phenomenon extends beyond this group of patients. More importantly, it has been unclear whether the residual viremia generally consists of distinct virus populations produced by different cell types.

Since the HIV-1 infection in most patients is initially established by a single viral clone (23, 51), with subsequent diversification (29), the presence of genetically distinct populations of virus in a single individual can reflect entry of viruses into compartments where replication occurs with limited subsequent intercompartmental mixing (32). Sophisticated genetic tests can detect such population structure in a sample of viral sequences (4, 39, 49). Using two complementary tests of population structure (14, 43), we analyzed viral sequences from multiple sources within individual patients in order to determine whether a source other than circulating resting CD4+ T cells contributes to residual viremia and viral persistence. Our results have important clinical implications for understanding HIV-1 persistence and treatment failure and for improving eradication strategies, which are currently focusing only on the latent CD4+ T-cell reservoir.

10.

Background

Measurement of bone mineral density is the most common method of diagnosing and assessing osteoporosis. We sought to estimate the average rate of change in bone mineral density as a function of age among Canadians aged 25–85, stratified by sex and use of antiresorptive agents.

Methods

We examined a longitudinal cohort of 9423 participants. We measured the bone mineral density in the lumbar spine, total hip and femoral neck at baseline in 1995–1997, and at 3-year (participants aged 40–60 years only) and 5-year follow-up visits. We used the measurements to compute individual rates of change.
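The abstract says individual rates of change were computed from the serial measurements. One simple way to do this, shown below with invented scan values, is an ordinary least-squares slope expressed as percent of baseline per year; this is an assumption, since the exact estimator is not described:

```python
import numpy as np

def annualized_pct_change(times_years, bmd):
    """Per-participant rate of BMD change from serial scans.

    Fits a least-squares line through the measurements and expresses
    the slope as percent of the baseline value per year.
    """
    slope, _ = np.polyfit(times_years, bmd, 1)   # g/cm^2 per year
    return 100.0 * slope / bmd[0]

# Hypothetical participant: baseline, 3-year and 5-year total-hip scans.
print(annualized_pct_change(np.array([0.0, 3.0, 5.0]),
                            np.array([0.920, 0.905, 0.894])))
```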

Results

Bone loss in all 3 skeletal sites began among women at age 40–44. Bone loss was particularly rapid in the total hip and was greatest among women aged 50–54 who were transitioning from premenopause to postmenopause, with a change from baseline of –6.8% (95% confidence interval [CI] –7.5% to –4.9%) over 5 years. The rate of decline, particularly in the total hip, increased again among women older than 70 years. Bone loss in all 3 skeletal sites began at an earlier age (25–39) among men than among women. The rate of decline of bone density in the total hip was nearly constant among men 35 and older and then increased among men older than 65. Use of antiresorptive agents was associated with attenuated bone loss in both sexes among participants aged 50–79.

Interpretation

The period of accelerated loss of bone mineral density in the hip bones occurring among women and men older than 65 may be an important contributor to the increased incidence of hip fracture among patients in that age group. The extent of bone loss that we observed in both sexes indicates that, in the absence of additional risk factors or therapy, repeat testing of bone mineral density to diagnose osteoporosis could be delayed to every 5 years.

Low bone mineral density is one of the most important risk factors for fracture.1,2,3–7 Treatment with antiresorptive agents has been widely used for several decades, and the results of randomized controlled trials have shown that at least part of their efficacy is associated with their capacity to increase or stabilize bone density.4 Although clinical guidelines recommend measurement of bone density, among other important risk factors, when assessing a patient's risk for fracture,3,8,9 there is no international consensus on the optimal age at which to begin measurement, or on the frequency of measurement.10 The Canadian guidelines recommend it for patients aged 65 and older, even in the absence of risk factors or treatment, and suggest a frequency of every 2–3 years.8 Furthermore, it has been suggested that the rate of decline rather than a single measurement of bone density may better identify patients with an elevated risk for fracture.11 Consequently, determining changes in bone density over time may provide clues on the pathophysiology of fractures and provide more accurate estimates of the optimal timing for repeat measurement.

Previous studies of change in bone mineral density as a function of age have had a number of limitations. Many were cross-sectional; had small samples, limited age ranges or differing inclusion and exclusion criteria; and most excluded men.12–20 The third National Health and Nutrition Examination Survey,21 a large cross-sectional study based in the United States, included women and men aged 20 years and older but excluded only those who were pregnant or who had a fracture in both hips. It reported that, based on a single measurement of bone density in the hip, age-dependent bone loss in the hips begins early (20–40 years) and continues in both sexes throughout life. Cross-sectional data from the ongoing Canadian Multicentre Osteoporosis Study suggested that, although this finding may hold true for the femoral neck, which consists of both cortical and trabecular bone, it is not true for the largely trabecular lumbar spine.22 Furthermore, the use of cross-sectional data to estimate changes over time has fundamental limitations: the effect of age cannot be separated from the effect of birth cohort and survivorship, and estimates are based on between-group differences rather than changes in an individual participant.

The use of longitudinal data would allow examination of the rate of change of bone mineral density over time with and without antiresorptive therapy. We sought to assess the average rate of change in bone density as a function of age among Canadians aged 25–85, stratified by sex and use of antiresorptive agents.

11.

Background

A recent Cochrane meta-analysis did not confirm the benefits of fish and fish oil in the secondary prevention of cardiac death and myocardial infarction. We performed a meta-analysis of randomized controlled trials that examined the effect of fish-oil supplementation on ventricular fibrillation and ventricular tachycardia to determine the overall effect and to assess whether heterogeneity exists between trials.

Methods

We searched electronic databases (MEDLINE, EMBASE, The Cochrane Central Register of Controlled Trials, CINAHL) from inception to May 2007. We included randomized controlled trials of fish-oil supplementation on ventricular fibrillation or ventricular tachycardia in patients with implantable cardioverter defibrillators. The primary outcome was implantable cardioverter defibrillator discharge. We calculated the relative risk [RR] for outcomes at 1-year follow-up for each study. We used the DerSimonian and Laird random-effects method when there was significant heterogeneity between trials and the Mantel-Haenszel fixed-effects method when heterogeneity was negligible.
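A compact sketch of the DerSimonian and Laird estimator named here, operating on hypothetical per-trial log relative risks and standard errors (the actual trial-level inputs are not given in the abstract):

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Random-effects pooled relative risk (DerSimonian and Laird)."""
    log_rr, se = map(np.asarray, (log_rr, se))
    w = 1.0 / se**2                               # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)  # between-trial variance
    w_star = 1.0 / (se**2 + tau2)                 # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    return (np.exp(pooled),
            np.exp(pooled - 1.96 * se_pooled),
            np.exp(pooled + 1.96 * se_pooled))

# Hypothetical per-trial values: log RRs and their standard errors.
print(dersimonian_laird([-0.30, 0.21, 0.05], [0.14, 0.15, 0.20]))
```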

Results

We identified 3 trials of 1–2 years' duration. These trials included a total of 573 patients who received fish oil and 575 patients who received a control. Meta-analysis of data collected at 1 year showed no overall effect of fish oil on the relative risk of implantable cardioverter defibrillator discharge. There was significant heterogeneity between trials. The second largest study showed a significant benefit of fish oil (relative risk [RR] 0.74, 95% confidence interval [CI] 0.56–0.98). The smallest study showed an adverse tendency at 1 year (RR 1.23, 95% CI 0.92–1.65) and a significantly worse outcome at 2 years among patients with ventricular tachycardia at study entry (log-rank p = 0.007).

Conclusion

These data indicate that there is heterogeneity in the response of patients to fish-oil supplementation. Caution should be used when prescribing fish-oil supplementation for patients with ventricular tachycardia.

There is a public perception that fish and fish oil can be recommended uniformly for the prevention of coronary artery disease.1–3 However, the scientific evidence is divided4,5 and official agencies have called for more research.6

It is estimated that 0.5% of patients with coronary heart disease, 1% of patients with diabetes or hypertension and 2% of the general population at low risk of coronary heart disease take fish-oil supplements.7 In 2004, the price of fish oils overtook that of vegetable oils, and in 2006, the price rose to US$750 per ton.8 The value of fish oil as a nutraceutical in the European market was US$194 million in 2004, and it is anticipated that the price will continue to rise as availability declines.8 Canada is both a consumer and an exporter of fish oil, and it exported 15 000 tons in 2006.9

The scientific debate over the clinical value of fish oil is highlighted by a recent Cochrane review, which concluded that long-chain omega-3 fatty acids (eicosapentaenoic acid and docosahexaenoic acid) had no clear effect on total mortality, combined cardiovascular events or cancer.4 Furthermore, another recent meta-analysis10 only showed a significant positive association between fish-oil consumption and prevention of restenosis after coronary angioplasty in a select subgroup after excluding key negative papers.11 Finally, the antiarrhythmic effect, which is proposed to be the principal mechanism of their benefit in cardiovascular disease, has not been demonstrated clearly in clinical trials.12–14

We therefore performed a meta-analysis of randomized controlled trials that examined the effect of fish-oil supplementation in patients with implantable cardioverter defibrillators who are at risk of ventricular arrhythmia to determine the overall effect of fish oils. We also sought to investigate whether there was significant heterogeneity between trials.

12.

Background

Up to 50% of adverse events that occur in hospitals are preventable. Language barriers and disabilities that affect communication have been shown to decrease quality of care. We sought to assess whether communication problems are associated with an increased risk of preventable adverse events.

Methods

We randomly selected 20 general hospitals in the province of Quebec with at least 1500 annual admissions. Of the 145 672 admissions to the selected hospitals in 2000/01, we randomly selected and reviewed 2355 charts of patients aged 18 years or older. Reviewers abstracted patient characteristics, including communication problems, and details of hospital admission, and assessed the cause and preventability of identified adverse events. The primary outcome was adverse events.

Results

Of 217 adverse events, 63 (29%) were judged to be preventable, for an overall population rate of 2.7% (95% confidence interval [CI] 2.1%–3.4%). We found that patients with preventable adverse events were significantly more likely than those without such events to have a communication problem (odds ratio [OR] 3.00; 95% CI 1.43–6.27) or a psychiatric disorder (OR 2.35; 95% CI 1.09–5.05). Patients who were admitted urgently were significantly more likely than patients whose admissions were elective to experience an event (OR 1.64, 95% CI 1.07–2.52). Preventable adverse events were mainly due to drug errors (40%) or poor clinical management (32%). We found that patients with communication problems were more likely than patients without these problems to experience multiple preventable adverse events (46% v. 20%; p = 0.05).

Interpretation

Patients with communication problems appeared to be at highest risk for preventable adverse events. Interventions to reduce the risk for these patients need to be developed and evaluated.

Patient safety is a priority in modern health care systems. From 3% to 17% of hospital admissions result in an adverse event,1–8 and almost 50% of these events are considered to be preventable.3,9–12 An adverse event is an unintended injury or complication caused by delivery of clinical care rather than by the patient's condition. The occurrence of adverse events has been well documented; however, identifying modifiable risk factors that contribute to the occurrence of preventable adverse events is critical. Studies of preventable adverse events have focused on many factors, but researchers have only recently begun to evaluate the role of patient characteristics.2,9,12,13 Older patients and those with a greater number of health problems have been shown to be at increased risk for preventable adverse events.10,11 However, previous studies have repeatedly suggested the need to investigate more diverse, modifiable risk factors.3,6,7,10,11,14–16

Language barriers and disabilities that affect communication have been shown to decrease quality of care;16–20 however, their impact on preventable adverse events needs to be investigated. Patients with physical and sensory disabilities, such as deafness and blindness, have been shown to face considerable barriers when communicating with health care professionals.20–24 Communication disorders are estimated to affect 5%–10% of the general population,25 and in one study more than 15% of admissions to university hospitals involved patients with 1 or more disabilities severe enough to prevent almost any form of communication.26 In addition, patients with communication disabilities are already at increased risk for depression and other comorbidities.27–29 Determining whether they are at increased risk for preventable adverse events would permit risk stratification at the time of admission and targeted preventive strategies.

We sought to estimate the extent to which preventable adverse events that occurred in hospital could be predicted by conditions that affect a patient's ability to communicate.

13.

Background

The underuse of total joint arthroplasty in appropriate candidates is more than 3 times greater among women than among men. When surveyed, physicians report that the patient's sex has no effect on their decision-making; however, what occurs in clinical practice may be different. The purpose of our study was to determine whether patients' sex affects physicians' decisions to refer a patient for, or to perform, total knee arthroplasty.

Methods

Seventy-one physicians (38 family physicians and 33 orthopedic surgeons) in Ontario performed blinded assessments of 2 standardized patients (1 man and 1 woman) with moderate knee osteoarthritis who differed only by sex. The standardized patients recorded the physicians'' final recommendations about total knee arthroplasty. Four surgeons did not consent to the inclusion of their data. After detecting an overall main effect, we tested for an interaction with physician type (family physician v. orthopedic surgeon). We used a binary logistic regression analysis with a generalized estimating equation approach to assess the effect of patients'' sex on physicians'' recommendations for total knee arthroplasty.
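A minimal sketch of a binary-outcome GEE of the kind described, using statsmodels on invented paired data (one row per physician per standardized patient); the exchangeable working correlation and the simulated outcomes are assumptions, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_docs = 67  # 71 physicians assessed minus the 4 who did not consent

# Hypothetical long-format data: each physician saw both patients.
df = pd.DataFrame({
    "physician_id": np.repeat(np.arange(n_docs), 2),
    "male_patient": np.tile([1, 0], n_docs),    # 1 = male standardized patient
    "recommended":  rng.binomial(1, 0.5, 2 * n_docs),
})
df["intercept"] = 1.0

# Exchangeable correlation accounts for the paired assessments.
model = sm.GEE(df["recommended"], df[["intercept", "male_patient"]],
               groups=df["physician_id"],
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(np.exp(result.params["male_patient"]))    # OR for patient sex
```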

Results

In total, 42% of physicians recommended total knee arthroplasty to the male but not the female standardized patient, and 8% of physicians recommended total knee arthroplasty to the female but not the male standardized patient (odds ratio [OR] 4.2, 95% confidence interval [CI] 2.4–7.3, p < 0.001; risk ratio [RR] 2.1, 95% CI 1.5–2.8, p < 0.001). The odds of an orthopedic surgeon recommending total knee arthroplasty to a male patient were 22 times (95% CI 6.4–76.0, p < 0.001) those for a female patient. The odds of a family physician recommending total knee arthroplasty to a male patient were 2 times (95% CI 1.04–4.71, p = 0.04) those for a female patient.

Interpretation

Physicians were more likely to recommend total knee arthroplasty to a male patient than to a female patient, suggesting that gender bias may contribute to the sex-based disparity in the rates of use of total knee arthroplasty.

Disparity in the use of medical or surgical interventions based on patient characteristics, such as sex, ethnic background or socioeconomic status, is an important health care issue.1 Women are less likely than men to receive lipid-lowering medication after a myocardial infarction,2 receive kidney dialysis,3 be admitted to an intensive care unit,4 or undergo cardiac catheterization,5 renal transplantation6 or total joint arthroplasty.7 Although women's preferences for surgery or the information needed to make an informed decision may differ from those of men and explain sex-based differences in care,8,9 subtle or overt gender bias may inappropriately influence physicians' clinical decision-making.2,5,7 A more pronounced gender bias might be expected when the clinical decision involves an elective surgical procedure such as total joint arthroplasty.

Total hip and knee arthroplasty is the definitive treatment for relieving pain and restoring function in people with moderate to severe osteoarthritis for whom medical therapy has failed.10 Although age-adjusted rates of total joint arthroplasty are higher among women than among men,11 based on a population-based epidemiologic survey, underuse of arthroplasty is 3 times greater in women.7 In prior opinion surveys, more than 93% of referring physicians and orthopedic surgeons have reported that patients' sex has no effect on their decision to refer a patient for, or perform, total knee arthroplasty.12,13 However, there may be a difference between what is reported in a survey and what occurs in clinical practice. The purpose of our study was to determine whether physicians would provide the same recommendation about total knee arthroplasty to a male and a female standardized patient presenting to their offices with identical clinical scenarios that differed only by sex.

14.

Background

Rosiglitazone and pioglitazone may increase the incidence of fractures. We aimed to determine systematically the risk of fractures associated with thiazolidinedione therapy and to evaluate the effect of the therapy on bone density.

Methods

We searched MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), other trial registries and product information sheets through June 2008. We selected long-term (≥ 1 year) randomized controlled trials involving patients with type 2 diabetes and controlled observational studies that described the risk of fractures or changes in bone density with thiazolidinediones. We calculated pooled odds ratios (ORs) for fractures and the weighted mean difference in bone density.
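The pooled "weighted mean difference" mentioned here is an inverse-variance average. A minimal fixed-effect sketch with invented per-trial values, since the review's actual trial-level inputs are not given in the abstract:

```python
import numpy as np

def pooled_wmd(diffs, ses):
    """Fixed-effect inverse-variance pooled weighted mean difference."""
    diffs, ses = map(np.asarray, (diffs, ses))
    w = 1.0 / ses**2
    wmd = np.sum(w * diffs) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return wmd, wmd - 1.96 * se, wmd + 1.96 * se

# Hypothetical per-trial % changes in lumbar-spine BMD and their SEs.
print(pooled_wmd([-1.4, -0.9], [0.6, 0.7]))
```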

Results

We analyzed data from 10 randomized controlled trials involving 13 715 participants and from 2 observational studies involving 31 679 participants. Rosiglitazone and pioglitazone were associated with a significantly increased risk of fractures overall in the 10 randomized controlled trials (OR 1.45, 95% confidence interval [CI] 1.18–1.79; p < 0.001). Five randomized controlled trials showed a significantly increased risk of fractures among women (OR 2.23, 95% CI 1.65–3.01; p < 0.001) but not among men (OR 1.00, 95% CI 0.73–1.39; p = 0.98). The 2 observational studies demonstrated an increased risk of fractures associated with rosiglitazone and pioglitazone. Bone mineral density in women exposed to thiazolidinediones was significantly reduced at the lumbar spine (weighted mean difference –1.11%, 95% CI –2.08% to –0.14%; p = 0.02) and hip (weighted mean difference –1.24%, 95% CI –2.34% to –0.67%; p < 0.001) in 2 randomized controlled trials.

Interpretation

Long-term thiazolidinedione use doubles the risk of fractures among women with type 2 diabetes, without a significant increase in the risk of fractures among men with type 2 diabetes.

Recent systematic reviews have focused on the adverse cardiovascular effects of the thiazolidinediones rosiglitazone and pioglitazone.1–5 In late 2006, the risk of fractures with the use of rosiglitazone was raised in a footnote of the report of A Diabetes Outcome and Progression Trial (ADOPT).6 The manufacturers of rosiglitazone7 and pioglitazone followed this up by issuing warning letters about the risk of fractures.7–9

Women with type 2 diabetes are at an increased risk of nonvertebral fractures,10 with a near doubling in the risk of hip fractures.11 Any additional risk from thiazolidinedione therapy could have a considerable impact. Our primary objective was to determine systematically the relative and absolute risks of fractures with long-term thiazolidinedione therapy for type 2 diabetes. We also reviewed the effect of thiazolidinedione therapy on bone mineral density to ascertain its biological plausibility.

15.
To better understand the influence of environmental conditions on the adsorption of extracellular chromosomal DNA and its availability for natural transformation, the amount and conformation of adsorbed DNA were monitored under different conditions in parallel with transformation assays using the soil bacterium Azotobacter vinelandii. DNA adsorption was monitored using the technique of quartz crystal microbalance with dissipation (QCM-D). Both silica and natural organic matter (NOM) surfaces were evaluated in solutions containing either 100 mM NaCl or 1 mM CaCl2. The QCM-D data suggest that DNA adsorbed to silica surfaces has a more compact and rigid conformation in Ca2+ solution than in Na+ solution and that the reverse is true when DNA is adsorbed to NOM surfaces. While the amounts of DNA adsorbed on a silica surface were similar for Ca2+ and Na+ solutions, the amount of DNA adsorbed on an NOM-coated surface was higher in Ca2+ solution than in Na+ solution. Transformation frequencies for dissolved DNA and DNA adsorbed to silica and to NOM were 6 × 10−5, 5 × 10−5, and 2.5 × 10−4, respectively. For NOM-coated surfaces, transformation frequencies from individual experiments were 2- to 50-fold higher in the presence of Ca2+ than in the presence of Na+. The results suggest that groundwater hardness (i.e., Ca2+ concentration) will affect the amount of extracellular DNA adsorbed to the soil surface but that neither adsorption nor changes in the conformation of the adsorbed DNA will have a strong effect on the frequency of natural transformation of A. vinelandii.

Horizontal gene transfer contributes to microbial evolution and provides mechanisms for the spread of both antimicrobial resistance genes and genetically engineered DNA. While most studies on horizontal gene transfer focus on conjugation, recent reviews on extracellular DNA (16, 27) document the need to consider also natural transformation. The amount of extracellular DNA in the soil is on the order of hundreds of ng/g of dry soil (27). Extracellular DNA adsorbs to many common soil constituents, including sand, clay, and natural organic matter (NOM) (16, 27), and once adsorbed may persist for days or years (16).

As first reported by Lorenz et al. (18), adsorbed DNA is available for natural transformation and therefore represents a potential environmental reservoir facilitating horizontal gene transfer. DNA adsorbed on sand surfaces has been successfully transferred to both Gram-positive and Gram-negative soil bacteria, including Bacillus subtilis (18), Pseudomonas stutzeri (19), and Acinetobacter calcoaceticus (4). Other studies have focused on transformation with DNA adsorbed to other surfaces (for example, clay minerals [15], humic acids [6], and intact soils [25, 31]). In most previous studies, adsorbed DNA transformed at a lower frequency than dissolved DNA (see, for example, references 4 and 11). However, higher transformation frequencies for adsorbed DNA than for dissolved DNA have been reported in two studies (18, 19). In addition, Demaneche et al. were unable to detect Pseudomonas fluorescens transformants in a variety of liquid media despite successful transformations in sterile soil columns (8).

Several studies have evaluated the influence of nutrients (24, 34) and soil (8, 11, 23, 31) on transformation efficiency. Less information is available on the influence of adsorbed DNA conformation on transformation efficiency. Pietramellara et al. speculated that the decrease in transformation rates they observed upon repeated wetting and drying cycles of adsorbed DNA was due to conformational changes (28). Cai et al. also speculated that differences in the conformation of adsorbed DNA could be responsible for lower transformation efficiencies for DNA bound to kaolinite and inorganic clays (2), based on their previous work characterizing adsorption to different surfaces (3). Detailed characterizations of the conformation of adsorbed DNA only recently became feasible, and the influence of the conformation of adsorbed DNA on transformation frequencies has not, to our knowledge, been systematically investigated.

Characterization of the mass and conformation of DNA adsorbed on different surfaces can be accomplished using quartz crystal microbalance with dissipation (QCM-D) (7, 22). QCM-D measurements are based on the shift in frequency (ΔF) and the decay in vibrating energy (ΔD) that occur as molecules adsorb to piezoelectric sensors (29, 33). Viscosity, elastic shear modulus, and effective thickness of the adsorbed material can be determined by fitting the frequency and dissipation data to the viscoelastic Voigt model (7). Based on Nguyen and colleagues' previous work with plasmid DNA adsorption on silica and NOM-coated surfaces, increasing electrolyte concentrations and the presence of divalent cations favor DNA adsorption (20-22). In addition, inner-sphere complexation by Ca2+ with the DNA phosphate backbone allows the formation of DNA-adsorbed layers that are more compact than the DNA-adsorbed layer formed by charge shielding in solution with a high Na+ concentration (20-22).

The objective of this study was to investigate both the adsorption of chromosomal DNA to representative soil particle surfaces and the effect of such adsorption on natural transformation. We used QCM-D to characterize the conformation of adsorbed DNA on silica and NOM surfaces under two different solution chemistries (100 mM Na+ and 1 mM Ca2+). The influences of adsorption and of differences in the conformation of the adsorbed DNA on transformation frequencies were tested in a common soil bacterium, Azotobacter vinelandii. A. vinelandii is naturally competent (26) but had not been previously reported to be transformed with adsorbed DNA.

16.

Background

Whether to continue oral anticoagulant therapy beyond 6 months after an “unprovoked” venous thromboembolism is controversial. We sought to determine clinical predictors to identify patients who are at low risk of recurrent venous thromboembolism who could safely discontinue oral anticoagulants.

Methods

In a multicentre prospective cohort study, 646 participants with a first, unprovoked major venous thromboembolism were enrolled over a 4-year period. Of these, 600 participants completed a mean 18-month follow-up in September 2006. We collected data for 69 potential predictors of recurrent venous thromboembolism while patients were taking oral anticoagulation therapy (5–7 months after initiation). During follow-up after discontinuing oral anticoagulation therapy, all episodes of suspected recurrent venous thromboembolism were independently adjudicated. We performed a multivariable analysis of predictor variables (p < 0.10) with high interobserver reliability to derive a clinical decision rule.

Results

We identified 91 confirmed episodes of recurrent venous thromboembolism during follow-up after discontinuing oral anticoagulation therapy (annual risk 9.3%, 95% CI 7.7%–11.3%). Men had a 13.7% (95% CI 10.8%–17.0%) annual risk. There was no combination of clinical predictors that satisfied our criteria for identifying a low-risk subgroup of men. Fifty-two percent of women had 0 or 1 of the following characteristics: hyperpigmentation, edema or redness of either leg; D-dimer ≥ 250 μg/L while taking warfarin; body mass index ≥ 30 kg/m2; or age ≥ 65 years. These women had an annual risk of 1.6% (95% CI 0.3%–4.6%). Women who had 2 or more of these findings had an annual risk of 14.1% (95% CI 10.9%–17.3%).
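Annual risks like the 9.3% figure are events divided by person-time. A sketch with an exact Poisson interval; the roughly 980 person-years used here is back-calculated from the reported 91 events and 9.3% annual risk, and is therefore an approximation, not a figure from the study:

```python
from scipy.stats import chi2

def exact_poisson_rate_ci(events, person_years, alpha=0.05):
    """Annual event rate with an exact (Garwood) Poisson CI."""
    rate = events / person_years
    lo = chi2.ppf(alpha / 2, 2 * events) / (2 * person_years) if events else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * person_years)
    return rate, lo, hi

print(exact_poisson_rate_ci(91, 980))  # ~0.093 per person-year
```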

Interpretation

Women with 0 or 1 risk factor may safely discontinue oral anticoagulant therapy after 6 months of therapy following a first unprovoked venous thromboembolism. This criterion does not apply to men. (http://Clinicaltrials.gov trial register number NCT00261014)

Venous thromboembolism is a common, potentially fatal, yet treatable, condition. The risk of a recurrent venous thromboembolic event after 3–6 months of oral anticoagulant therapy varies. Some groups of patients (e.g., those who had a venous thromboembolism after surgery) have a very low annual risk of recurrence (< 1%),1 and they can safely discontinue anticoagulant therapy.2 However, among patients with an unprovoked thromboembolism who discontinue anticoagulation therapy after 3–6 months, the risk of a recurrence in the first year is 5%–27%.3–6 In the second year, the risk is estimated to be 5%,3 and it is estimated to be 2%–3.8% for each subsequent year.5,7 The case-fatality rate for recurrent venous thromboembolism is between 5% and 13%.8,9 Oral anticoagulation therapy is very effective for reducing the risk of recurrence during therapy (> 90% relative risk [RR] reduction);3,4,10,11 however, this benefit is lost after therapy is discontinued.3,10,11 The risk of major bleeding with ongoing oral anticoagulation therapy among venous thromboembolism patients is 0.9%–3.0% per year,3,4,6,12 with an estimated case-fatality rate of 13%.13

Given that the long-term risk of fatal hemorrhage appears to balance the risk of fatal recurrent pulmonary embolism among patients with an unprovoked venous thromboembolism, clinicians are unsure if continuing oral anticoagulation therapy beyond 6 months is necessary.2,14 Identifying subgroups of patients with an annual risk of less than 3% will help clinicians decide which patients can safely discontinue anticoagulant therapy.

We sought to determine the clinical predictors or combinations of predictors that identify patients with an annual risk of venous thromboembolism of less than 3% after taking an oral anticoagulant for 5–7 months after a first unprovoked event.

17.

Background

People aged 65 years or more represent a growing group of emergency department users. We investigated whether characteristics of primary care (accessibility and continuity) are associated with emergency department use by elderly people in both urban and rural areas.

Methods

We conducted a cross-sectional study using information for a random sample of 95 173 people aged 65 years or more drawn from provincial administrative databases in Quebec for 2000 and 2001. We obtained data on the patients' age, sex, comorbidity, rate of emergency department use (number of days on which a visit was made to an emergency department per 1000 days at risk [i.e., alive and not in hospital] during the 2-year study period), use of hospital and ambulatory physician services, residence (urban v. rural), socioeconomic status, access (physician:population ratio, presence of a primary physician) and continuity of primary care.
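Rate ratios of this kind are commonly estimated with a Poisson model whose offset is the log of time at risk. A sketch on simulated data; the variable names and the minimal adjustment set are hypothetical, not the study's actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "visits":        rng.poisson(1.0, n),        # ED visit-days
    "days_at_risk":  rng.integers(300, 731, n),  # alive and out of hospital
    "no_primary_md": rng.integers(0, 2, n),
    "age":           rng.integers(65, 95, n),
})
X = sm.add_constant(df[["no_primary_md", "age"]])

# Rate model: log E[visits] = X*beta + log(days_at_risk)
fit = sm.GLM(df["visits"], X, family=sm.families.Poisson(),
             exposure=df["days_at_risk"]).fit()
print(np.exp(fit.params["no_primary_md"]))       # adjusted rate ratio
```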

Results

After adjusting for age, sex and comorbidity, we found that an increased rate of emergency department use was associated with lack of a primary physician (adjusted rate ratio [RR] 1.45, 95% confidence interval [CI] 1.41–1.49) and low or medium (v. high) levels of continuity of care with a primary physician (adjusted RR 1.46, 95% CI 1.44–1.48, and 1.27, 95% CI 1.25–1.29, respectively). Other significant predictors of increased use of emergency department services were residence in a rural area, low socioeconomic status and residence in a region with a higher physician:population ratio. Among the patients who had a primary physician, continuity of care had a stronger protective effect in urban than in rural areas.

Interpretation

Having a primary physician and greater continuity of care with this physician are factors associated with decreased emergency department use by elderly people, particularly those living in urban areas.

Canada is reforming its health care system, with primary care as a major focus.1 The population of Canadians aged 65 years or older is expected to double by 2026,2 and already accounts for the largest share of total health care expenditures.3 Thus, it is important to evaluate primary care services in this population. Because the emergency department often acts as a safety net for patients receiving inadequate primary care,4 emergency department use may be an important indicator of the adequacy of primary care services.

The main determinants of emergency department use by elderly people are the severity and the nature of the medical needs of the patient (overall and specific comorbidities).5 After adjustment for need, increased access to and continuity of primary care may also be associated with lower emergency department use.5 However, most studies that investigated the impact of access and continuity of primary care were carried out in the United States, where the health care system is fundamentally different from Canada's.5–8 Furthermore, most of these studies used self-reported measures of access and continuity of primary care.5,7,9

We sought to identify determinants of emergency department use in a population-based sample of elderly people in Quebec, with particular focus on measures of access to and continuity of primary care. Access was defined by 2 measures: (a) presence of a primary physician and (b) physician:population ratio. Relational continuity was defined as the proportion of primary care visits with the primary physician.10,11 Finally, because primary care services in Quebec are organized differently in urban and rural areas,12 we also compared the association between emergency department use and continuity of care for urban and rural areas.

18.
19.
20.

Background

Concern has been raised about the efficacy of antidepressant therapy for major depression in adults. We undertook a systematic review of published and unpublished clinical trial data to determine the effectiveness and acceptability of paroxetine.

Methods

We searched the Cochrane Collaboration Depression, Anxiety and Neurosis Controlled Trials Register, the Cochrane Central Register of Controlled Trials, the GlaxoSmithKline Clinical Trial Register, MEDLINE and EMBASE up to December 2006. Published and unpublished randomized trials comparing paroxetine with placebo in adults with major depression were eligible for inclusion. We selected the proportion of patients who left a study early for any reason as the primary outcome measure because it represents a hard measure of treatment effectiveness and acceptability.

Results

We included in our review 29 published and 11 unpublished clinical trials, with a total of 3704 patients who received paroxetine and 2687 who received placebo. There was no difference between paroxetine and placebo in terms of the proportion of patients who left the study early for any reason (random-effect relative risk [RR] 0.99, 99% confidence interval [CI] 0.88–1.11). Paroxetine was more effective than placebo, with fewer patients who did not experience improvement in symptoms of at least 50% (random-effect RR 0.83, 99% CI 0.77–0.90). Significantly more patients in the paroxetine group than in the placebo group left their respective studies because of side effects (random-effect RR 1.77, 95% CI 1.44–2.18) or experienced suicidal tendencies (odds ratio 2.55, 95% CI 1.17–5.54).

Interpretation

Among adults with moderate to severe major depression in the clinical trials we reviewed, paroxetine was not superior to placebo in terms of overall treatment effectiveness and acceptability. These results were not biased by selective inclusion of published studies.

Although most treatment guidelines recommend one of the selective serotonin reuptake inhibitors in the pharmacologic treatment of moderate to severe depression in adults,1,2 concerns have been raised in recent years about the efficacy of these drugs in alleviating symptoms of depression.3,4

First, in trials of antidepressants, the choice of the outcome of interest is often problematic. Whereas in other fields of medicine the definition of outcome measures may be relatively straightforward, efficacy in the treatment of depression may be an elusive concept, typically measured by rating scales. The use of rating scales as outcome measures has often been questioned by physicians, who seldom use them to define patients' improvement under clinical circumstances.5 In addition, improvement in symptoms is typically documented as the difference between baseline and post-treatment scores. Although from a practical viewpoint this approach seems reasonable, in that it allows physicians to make a reasoned judgment in terms of proportions of patients (and not in terms of means and standard deviations), it may systematically magnify the effect of new drugs relative to placebo.3 As a consequence of this criticism, the field of mental health has seen a renewal of interest in “hard measures” of treatment effectiveness.6–8 Hard measures include suicide attempts, treatment switching, hospital admissions, job loss or dropping out of the trial itself. Hard outcomes may also have a role in re-analyses of clinical trial data, where they may offer a “down-to-earth” evaluation of effectiveness and acceptability.

A second concern is that the modest differences between active antidepressants and placebo, as calculated in systematic reviews of clinical trial data,9–12 might be explained in part by the selective inclusion of specific subsets of studies, such as those submitted to regulatory authorities by drug companies, those that were eventually published or those supported by the manufacturers of selective serotonin reuptake inhibitors.13

In the present systematic review, we used a hard measure to determine the effectiveness and acceptability of paroxetine, a commonly prescribed antidepressant belonging to the group of selective serotonin reuptake inhibitors. Paroxetine was chosen for 3 reasons: first, it is one of the most frequently prescribed antidepressant drugs both in primary and secondary care (as indicated by a search of the US National Center for Health Statistics database, www2.cdc.gov/drugs/); second, GlaxoSmithKline, the company that launched paroxetine, has recently adopted and implemented a disclosure policy to provide easy access to all published and unpublished data from clinical trials that it has sponsored; and third, the effectiveness and acceptability of paroxetine are particularly relevant in view of recent concerns about its safety profile, especially in terms of suicidal tendencies among pediatric and adult patients with major depression.14–17
