Similar Articles
1.

Background

The use of proton pump inhibitors has been associated with an increased risk of hip fracture. We sought to further explore the relation between duration of exposure to proton pump inhibitors and osteoporosis-related fractures.

Methods

We used administrative claims data to identify patients with a fracture of the hip, vertebra or wrist between April 1996 and March 2004. Cases were each matched with 3 controls based on age, sex and comorbidities. We calculated adjusted odds ratios (OR) for the risk of hip fracture and all osteoporosis-related fractures for durations of proton pump inhibitor exposure ranging from 1 or more years to more than 7 years.
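The adjusted odds ratios in a matched design like this come from conditional logistic regression, but the basic quantity can be sketched from a 2×2 exposure table with a Woolf (log-normal) confidence interval. A minimal illustration; the counts below are hypothetical, not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-normal) 95% CI.

    a: exposed cases, b: unexposed cases,
    c: exposed controls, d: unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40 of 200 cases exposed vs. 80 of 600 controls
or_, lo, hi = odds_ratio_ci(40, 160, 80, 520)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The study's published ORs additionally adjust for the matching factors and comorbidities, which this sketch does not attempt.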

Results

We matched 15 792 cases of osteoporosis-related fractures with 47 289 controls. We did not detect a significant association between the overall risk of an osteoporotic fracture and the use of proton pump inhibitors for durations of 6 years or less. However, exposure of 7 or more years was associated with increased risk of an osteoporosis-related fracture (adjusted OR 1.92, 95% confidence interval [CI] 1.16–3.18, p = 0.011). We also found an increased risk of hip fracture after 5 or more years of exposure (adjusted OR 1.62, 95% CI 1.02–2.58, p = 0.04), with even higher risk after 7 or more years of exposure (adjusted OR 4.55, 95% CI 1.68–12.29, p = 0.002).

Interpretation

Use of proton pump inhibitors for 7 or more years is associated with a significantly increased risk of an osteoporosis-related fracture. There is an increased risk of hip fracture after 5 or more years of exposure. Further study is required to determine the clinical importance of this finding and to determine the value of osteoprotective medications for patients with long-term use of proton pump inhibitors.

Osteoporosis is a common condition throughout the developed world, affecting up to 16% of women and 7% of men aged 50 years and older.1 The presence of underlying osteoporosis is a major risk factor for the development of fractures of the hip, proximal femur, spinal vertebra and forearm. In 2000, the estimated number of people with fractures worldwide was 56 million, and about 9 million new osteoporotic fractures occur each year.2 In 1993/94, the number of hip fractures in Canada was 23 375.3 This number is predicted to increase to 88 124 by the year 2041, with a parallel increase in the number of days in hospital (465 000 patient-days in 1993/94 to 1.8 million in 2041).3 Moreover, the case-fatality rate for hip fractures can exceed 20%,4 and all osteoporosis-related fractures can lead to substantial long-term disability and decreased quality of life.5

Many risk factors for the development of osteoporosis-related fracture have been identified, including white ethnic background, low body mass index, physical inactivity and female sex.6–8 There are also a number of medication classes, including corticosteroids and selective serotonin reuptake inhibitors, whose use has been linked to higher rates of osteoporosis.9–11 Furthermore, any condition or drug that increases the risk of falls and injury also increases the risk of an osteoporosis-related fracture.12,13

One medication class that may affect bone mineral metabolism is proton pump inhibitors. This class of drugs inhibits the production and intragastric secretion of hydrochloric acid, which is believed to be an important mediator of calcium absorption in the small intestine.14 Recent studies have suggested that the use of proton pump inhibitors for 1 or more years is associated with hip fracture and other osteoporotic fractures; however, there are limited data on additional risk beyond 4 years of exposure.15,16

Because proton pump inhibitors are commonly prescribed to control and prevent symptoms of chronic, unrelenting conditions, it is likely that many patients will use these medications for more than 4 years. We therefore used an administrative database to examine the effects of longer durations of proton pump inhibitor use on the development of osteoporosis-related fractures.

2.

Background

Treatment of osteoarthritis with oral NSAID therapy provides pain relief but carries a substantial risk of adverse effects. Topical NSAID therapy offers an alternative to oral treatment, with the potential for a reduced risk of side effects. The objective of this trial was to assess the safety and efficacy of a topical diclofenac solution in relieving the symptoms of primary osteoarthritis of the knee.

Methods

We identified 248 men and women from southern Ontario with primary osteoarthritis of the knee and at least moderate pain. The patients were randomly assigned to apply 1 of 3 solutions to their painful knee for 4 weeks: a topical diclofenac solution (1.5% wt/wt diclofenac sodium in a carrier containing dimethyl sulfoxide [DMSO]); a vehicle-control solution (the carrier containing DMSO but no diclofenac); or a placebo solution (a modified carrier with a token amount of DMSO for blinding purposes but no diclofenac). The primary efficacy end point was pain relief, measured by the Western Ontario and McMaster Universities (WOMAC) LK3.0 Osteoarthritis Index pain subscale. Secondary end points were improved physical function and reduced stiffness (measured by the WOMAC subscales), reduced pain on walking and patient global assessment (PGA). Safety was evaluated with clinical and laboratory assessments.

Results

In the intent-to-treat group the mean change (and 95% confidence interval [CI]) in pain score from baseline to final assessment was significantly greater for the patients who applied the topical diclofenac solution (–3.9 [–4.8 to –2.9]) than for those who applied the vehicle-control solution (–2.5 [–3.3 to –1.7]; p = 0.023) or the placebo solution (–2.5 [–3.3 to –1.7]; p = 0.016). For the secondary variables the topical diclofenac solution also revealed superiority to the vehicle-control and placebo solutions, leading to mean changes (and 95% CIs) of –11.6 (–14.7 to –8.4; p = 0.002 and 0.014, respectively) in physical function, –1.5 (–1.9 to –1.1; p = 0.015 and 0.002, respectively) in stiffness and –0.8 (–1.1 to –0.6; p = 0.003 and 0.015, respectively) in pain on walking. The PGA scores were significantly better for the patients who applied the topical diclofenac solution than for those who applied the other 2 solutions (p = 0.039 and 0.025, respectively). The topical diclofenac solution caused some skin irritation, mostly minor local skin dryness, in 30 (36%) of the 84 patients, but this led to discontinuation of treatment in only 5 (6%) of the cases. The incidence of gastrointestinal events did not differ between the treatment groups. No serious gastrointestinal or renal adverse events were reported or detected by means of laboratory testing.

Interpretation

This topical diclofenac solution can provide safe, site-specific treatment for osteoarthritic pain, with only minor local skin irritation and minimal systemic side effects.

Osteoarthritis is a degenerative joint disease affecting articular cartilage and underlying bone, commonly of the knee.1 Current treatment includes the oral use of NSAIDs, either nonselective or cyclooxygenase-2 (COX-2)-selective. These agents carry a substantial risk of clinically significant adverse effects, particularly on the gastrointestinal2,3 and renal systems.4 Although the incidence of gastrointestinal complications has been reported to be lower with COX-2-selective NSAIDs than with nonselective NSAIDs,5,6,7 the former have been linked to adverse renal effects8 and an increased risk of cardiovascular complications.9

The need for safer treatment of osteoarthritis has led to research into the topical use of NSAIDs.10,11,12 Recent reviews of the few published placebo-controlled studies suggest that topical NSAID therapy can relieve pain13,14,15 with few gastrointestinal side effects.16 Current practice guidelines advocate the use of topical therapy, including NSAIDs, in the management of osteoarthritis.17,18,19 A diclofenac solution containing the absorption enhancer dimethyl sulfoxide (DMSO) was developed for site-specific topical application. The objective of this study was to demonstrate that applying this solution to a painful knee with primary osteoarthritis could provide symptom relief with minimal systemic side effects.

3.
4.

Background

Up to 50% of adverse events that occur in hospitals are preventable. Language barriers and disabilities that affect communication have been shown to decrease quality of care. We sought to assess whether communication problems are associated with an increased risk of preventable adverse events.

Methods

We randomly selected 20 general hospitals in the province of Quebec with at least 1500 annual admissions. Of the 145 672 admissions to the selected hospitals in 2000/01, we randomly selected and reviewed 2355 charts of patients aged 18 years or older. Reviewers abstracted patient characteristics, including communication problems, and details of hospital admission, and assessed the cause and preventability of identified adverse events. The primary outcome was adverse events.

Results

Of 217 adverse events, 63 (29%) were judged to be preventable, for an overall population rate of 2.7% (95% confidence interval [CI] 2.1%–3.4%). We found that patients with preventable adverse events were significantly more likely than those without such events to have a communication problem (odds ratio [OR] 3.00; 95% CI 1.43–6.27) or a psychiatric disorder (OR 2.35; 95% CI 1.09–5.05). Patients who were admitted urgently were significantly more likely than patients whose admissions were elective to experience an event (OR 1.64, 95% CI 1.07–2.52). Preventable adverse events were mainly due to drug errors (40%) or poor clinical management (32%). We found that patients with communication problems were more likely than patients without these problems to experience multiple preventable adverse events (46% v. 20%; p = 0.05).

Interpretation

Patients with communication problems appeared to be at highest risk for preventable adverse events. Interventions to reduce the risk for these patients need to be developed and evaluated.

Patient safety is a priority in modern health care systems. From 3% to 17% of hospital admissions result in an adverse event,1–8 and almost 50% of these events are considered to be preventable.3,9–12 An adverse event is an unintended injury or complication caused by delivery of clinical care rather than by the patient's condition. The occurrence of adverse events has been well documented; however, identifying modifiable risk factors that contribute to the occurrence of preventable adverse events is critical. Studies of preventable adverse events have focused on many factors, but researchers have only recently begun to evaluate the role of patient characteristics.2,9,12,13 Older patients and those with a greater number of health problems have been shown to be at increased risk for preventable adverse events.10,11 However, previous studies have repeatedly suggested the need to investigate more diverse, modifiable risk factors.3,6,7,10,11,14–16

Language barriers and disabilities that affect communication have been shown to decrease quality of care;16–20 however, their impact on preventable adverse events needs to be investigated. Patients with physical and sensory disabilities, such as deafness and blindness, have been shown to face considerable barriers when communicating with health care professionals.20–24 Communication disorders are estimated to affect 5%–10% of the general population,25 and in one study more than 15% of admissions to university hospitals involved patients with 1 or more disabilities severe enough to prevent almost any form of communication.26 In addition, patients with communication disabilities are already at increased risk for depression and other comorbidities.27–29 Determining whether they are at increased risk for preventable adverse events would permit risk stratification at the time of admission and targeted preventive strategies.

We sought to estimate the extent to which preventable adverse events that occurred in hospital could be predicted by conditions that affect a patient's ability to communicate.

5.

Background

Whether to continue oral anticoagulant therapy beyond 6 months after an “unprovoked” venous thromboembolism is controversial. We sought to determine clinical predictors to identify patients who are at low risk of recurrent venous thromboembolism who could safely discontinue oral anticoagulants.

Methods

In a multicentre prospective cohort study, 646 participants with a first, unprovoked major venous thromboembolism were enrolled over a 4-year period. Of these, 600 participants completed a mean 18-month follow-up in September 2006. We collected data for 69 potential predictors of recurrent venous thromboembolism while patients were taking oral anticoagulation therapy (5–7 months after initiation). During follow-up after discontinuing oral anticoagulation therapy, all episodes of suspected recurrent venous thromboembolism were independently adjudicated. We performed a multivariable analysis of predictor variables (p < 0.10) with high interobserver reliability to derive a clinical decision rule.

Results

We identified 91 confirmed episodes of recurrent venous thromboembolism during follow-up after discontinuing oral anticoagulation therapy (annual risk 9.3%, 95% CI 7.7%–11.3%). Men had a 13.7% (95% CI 10.8%–17.0%) annual risk. There was no combination of clinical predictors that satisfied our criteria for identifying a low-risk subgroup of men. Fifty-two percent of women had 0 or 1 of the following characteristics: hyperpigmentation, edema or redness of either leg; D-dimer ≥ 250 μg/L while taking warfarin; body mass index ≥ 30 kg/m2; or age ≥ 65 years. These women had an annual risk of 1.6% (95% CI 0.3%–4.6%). Women who had 2 or more of these findings had an annual risk of 14.1% (95% CI 10.9%–17.3%).
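The derived rule simply counts four findings in women. A small sketch of how such a rule would be applied at the point of care (function and variable names are my own, not from the study):

```python
def low_risk_woman(leg_hyperpigmentation_edema_or_redness: bool,
                   d_dimer_ug_per_l: float,
                   bmi_kg_per_m2: float,
                   age_years: int) -> bool:
    """Count the four findings described in the abstract; women with
    0 or 1 finding fell in the low-risk subgroup (1.6% annual risk).
    The abstract states the rule does not apply to men."""
    findings = sum([
        leg_hyperpigmentation_edema_or_redness,
        d_dimer_ug_per_l >= 250,   # D-dimer measured while taking warfarin
        bmi_kg_per_m2 >= 30,
        age_years >= 65,
    ])
    return findings <= 1

# One finding (age >= 65) -> still low risk under the rule
print(low_risk_woman(False, 180, 27, 70))
```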

Interpretation

Women with 0 or 1 risk factor may safely discontinue oral anticoagulant therapy after 6 months of therapy following a first unprovoked venous thromboembolism. This criterion does not apply to men. (http://Clinicaltrials.gov trial register number NCT00261014)

Venous thromboembolism is a common, potentially fatal, yet treatable, condition. The risk of a recurrent venous thromboembolic event after 3–6 months of oral anticoagulant therapy varies. Some groups of patients (e.g., those who had a venous thromboembolism after surgery) have a very low annual risk of recurrence (< 1%),1 and they can safely discontinue anticoagulant therapy.2 However, among patients with an unprovoked thromboembolism who discontinue anticoagulation therapy after 3–6 months, the risk of a recurrence in the first year is 5%–27%.3–6 In the second year, the risk is estimated to be 5%,3 and it is estimated to be 2%–3.8% for each subsequent year.5,7 The case-fatality rate for recurrent venous thromboembolism is between 5% and 13%.8,9 Oral anticoagulation therapy is very effective for reducing the risk of recurrence during therapy (> 90% relative risk [RR] reduction);3,4,10,11 however, this benefit is lost after therapy is discontinued.3,10,11 The risk of major bleeding with ongoing oral anticoagulation therapy among patients with venous thromboembolism is 0.9%–3.0% per year,3,4,6,12 with an estimated case-fatality rate of 13%.13

Given that the long-term risk of fatal hemorrhage appears to balance the risk of fatal recurrent pulmonary embolism among patients with an unprovoked venous thromboembolism, clinicians are unsure if continuing oral anticoagulation therapy beyond 6 months is necessary.2,14 Identifying subgroups of patients with an annual risk of less than 3% will help clinicians decide which patients can safely discontinue anticoagulant therapy.

We sought to determine the clinical predictors or combinations of predictors that identify patients with an annual risk of venous thromboembolism of less than 3% after taking an oral anticoagulant for 5–7 months after a first unprovoked event.

6.

Background

Postoperative delirium after elective surgery is frequent and potentially serious. We sought to determine whether the use of statin medications was associated with a higher risk of postoperative delirium than other medications that do not alter microvascular autoregulation.

Methods

We conducted a retrospective cohort analysis of 284 158 consecutive patients in Ontario aged 65 years and older who were admitted for elective surgery. We identified exposure to statins from outpatient pharmacy records before admission. We identified delirium by examining hospital records after surgery.

Results

About 7% (n = 19 501) of the patients were taking statins. Overall, 3195 patients experienced postoperative delirium; the rate was significantly higher among patients taking statins (14 per 1000) than among those not taking statins (11 per 1000) (odds ratio [OR] 1.30, 95% confidence interval [CI] 1.15–1.47, p < 0.001). The increased risk of postoperative delirium persisted after we adjusted for multiple demographic, medical and surgical factors (OR 1.28, 95% CI 1.12–1.46) and exceeded the increased risk of delirium associated with prolonging surgery by 30 minutes (OR 1.20, 95% CI 1.19–1.21). The relative risk associated with statin use was somewhat higher among patients who had noncardiac surgery than among those who had cardiac surgery (adjusted OR 1.33, 95% CI 1.16–1.53), and extended to more complicated cases of delirium. We did not observe an increased risk of delirium with 20 other cardiac or noncardiac medications.
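As a rough consistency check, an unadjusted odds ratio can be approximated directly from the reported per-1000 delirium rates. This back-of-envelope figure ignores the exact patient counts, so it lands near but not exactly on the published unadjusted OR of 1.30:

```python
# Delirium rates from the abstract: 14 per 1000 among statin users
# vs. 11 per 1000 among non-users.
p_statin, p_none = 14 / 1000, 11 / 1000

# Convert each probability to odds, then take the ratio.
odds_statin = p_statin / (1 - p_statin)
odds_none = p_none / (1 - p_none)
or_approx = odds_statin / odds_none

print(round(or_approx, 2))  # -> 1.28, close to the reported 1.30
```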

Interpretation

The use of statins is associated with an increased risk of postoperative delirium among elderly patients undergoing elective surgery.

Delirium is an acute change in mental status that is worrisome to patients and families, especially after elective surgery. This condition may contribute to delays in extubation, a prolonged need for intensive care, increased risk of nosocomial infections and about a 1-week rise in total length of stay in hospital for the average patient.1,2 Delirium also disrupts many specific aspects of care, including the administration of medications, treatment of wounds, physiotherapy, nutrition, hygiene, discharge planning and dignity.3 The management of delirium is awkward and may lead to a cascade of nonspecific testing and sedation, with an average net increase in hospital costs of $2500 per patient.4 In some cases, the delirium never completely disappears, and the patient is left with a degree of permanent disability.5

The causes of postoperative delirium are not well understood. Hypoglycemia, hypoxemia and hypotension are all possible and correctable, but they rarely have an immediate resolution.6 Medical imaging studies typically do not show specific changes; however, they may show markers of prior stroke or other lesions. One underlying factor may be cerebral ischemia secondary to inadequate perfusion. Altered cerebral perfusion may result in altered metabolism, an increased predisposition to drug toxicity or other factors during anesthesia and surgery.7 Cerebral ischemia may also explain commonly observed risk factors for postoperative delirium, including advanced age, baseline cognitive dysfunction and the failure of drug antagonists, major tranquilizers or modern volatile anesthetics to prevent postoperative delirium.8,9,10

Statins have pleiotropic properties that alter the tone of smooth muscle in small blood vessels. Experiments on endothelial cells indicate that these changes are mediated by expression of endothelial nitric oxide synthase that is unrelated to cholesterol levels or vascular disease.11 In turn, activity of endothelial nitric oxide synthase contributes to arteriolar vasodilation by relaxing the surrounding smooth-muscle cells, thereby shifting the distribution of blood flow in the microvasculature of the brain. This can compromise individual neurons even if aggregate blood flow is maintained.12 These effects can be beneficial for reducing the size of stroke or other long-term neurologic disorders; however, altered cerebral blood flow autoregulation might predispose patients to delirium after anesthesia.13–15

We sought to determine whether the use of statins was associated with postoperative delirium among elderly patients undergoing elective surgery.

7.

Background

Clinical trials have shown the benefits of statins after acute myocardial infarction (AMI). However, it is unclear whether different statins exert a similar effect in reducing the incidence of recurrent AMI and death when used in clinical practice.

Methods

We conducted a retrospective cohort study (1997–2002) to compare 5 statins using data from medical administrative databases in 3 provinces (Quebec, Ontario and British Columbia). We included patients aged 65 years and over who were discharged alive after their first AMI-related hospital stay and who began statin treatment within 90 days after discharge. The primary end point was the combined outcome of recurrent AMI or death from any cause. The secondary end point was death from any cause. Adjusted hazard ratios (HRs) for each statin compared with atorvastatin as the reference drug were estimated using Cox proportional hazards regression analysis.
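A Cox model adjusts for covariates and handles censoring, but the crude quantity its hazard ratio generalizes is a ratio of event rates per person-time. A minimal sketch with invented counts (not study data), using a log-normal confidence interval:

```python
import math

def rate_ratio_ci(events_a, persontime_a, events_b, persontime_b, z=1.96):
    """Crude incidence-rate ratio with a log-normal 95% CI.

    A Cox proportional-hazards model refines this by adjusting for
    covariates and censoring, but the hazard ratio is read analogously.
    """
    rr = (events_a / persontime_a) / (events_b / persontime_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Invented example: 50 events over 1000 person-years vs. 40 over 1000
rr, lo, hi = rate_ratio_ci(50, 1000, 40, 1000)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI spanning 1.0, as here, corresponds to the kind of null comparative result the study reports for each statin against atorvastatin.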

Results

A total of 18 637 patients were prescribed atorvastatin (n = 6420), pravastatin (n = 4480), simvastatin (n = 5518), lovastatin (n = 1736) or fluvastatin (n = 483). Users of different statins showed similar baseline characteristics and patterns of statin use. The adjusted HRs (and 95% confidence intervals) for the combined outcome of AMI or death showed that each statin had similar effects when compared with atorvastatin: pravastatin 1.00 (0.90–1.11), simvastatin 1.01 (0.91–1.12), lovastatin 1.09 (0.95–1.24) and fluvastatin 1.01 (0.80–1.27). The results did not change when death alone was the end point, nor did they change after adjustment for initial daily dose or after censoring of patients who switched or stopped the initial statin treatment.

Interpretation

Our results suggest that, under current usage, statins are equally effective for secondary prevention in elderly patients after AMI.

Randomized controlled trials (RCTs) have shown that the use of statins after acute myocardial infarction (AMI) is effective in reducing the incidence of both fatal and nonfatal cardiovascular events.1,2,3,4,5,6,7,8 Although these trials have significantly influenced post-AMI treatment,9,10,11,12 it remains unclear whether all statins are equally effective in preventing recurrent AMI and death. Drugs in the same class are generally thought to be therapeutically equivalent because of similar mechanisms of action (class effect).13,14,15 However, in the absence of comparative data, this assumption requires evaluation. Statins differ in multiple characteristics, including liver and renal metabolism, half-life, effect on other serum lipid components, bioavailability and potency.16,17,18,19 These differences could potentially influence the extent to which the drugs are beneficial. Despite limited evidence in support of a differential benefit of statins for secondary prevention, preferential prescribing already occurs in practice and cannot be fully explained by the existing evidence or guidelines.20 Comparative data on statins are thus required to inform health care decision-making.

A number of RCTs have directly compared statins using surrogate end points, such as lipid reduction,21,22,23 markers of hemostasis and inflammation24,25,26 or reduction in the number of atherosclerotic plaques.27 However, the extent to which these results can be extrapolated to clinically relevant outcomes remains to be established. The newly released PROVE IT–TIMI 22 trial28 was the first trial to compare 2 statins for cardiovascular prevention. The study showed that atorvastatin used at a maximal dose of 80 mg (intensive therapy) was better than pravastatin at a dose of 40 mg (standard therapy) in decreasing the incidence of cardiovascular events and procedures. The study was, however, conducted to show the benefit associated with increased treatment intensity. It did not compare the drugs by milligram-equivalent doses or by cholesterol-lowering equivalent doses. Moreover, no difference was detected when death alone or the combined outcome of death or AMI was evaluated. Other than the PROVE IT–TIMI 22 trial, few data are currently available from RCTs that compare statins for cardiovascular prevention.29

We conducted a population-based study to examine the relative effectiveness of different statins for long-term secondary prevention after AMI. We used retrospective cohorts of elderly patients prescribed statins after AMI in 3 provinces. Five statins were studied: atorvastatin, pravastatin, simvastatin, lovastatin and fluvastatin. The newest statin, rosuvastatin, was not available during the study period and was not considered in this study.

8.

Background

Canadian First Nations people have unique cultural, socioeconomic and health-related factors that may affect fracture rates. We sought to determine the overall and site-specific fracture rates of First Nations people compared with non-First Nations people.

Methods

We studied fracture rates among First Nations people aged 20 years and older (n = 32 692) using the Manitoba administrative health database (1987–1999). We used federal and provincial sources to identify ethnicity, and we randomly matched each First Nations person with 3 people of the same sex and year of birth who did not meet this definition of First Nations ethnicity (n = 98 076). We used a provincial database of hospital separations and physician billing claims to calculate standardized incidence ratios (SIRs) and 95% confidence intervals (CIs) for each fracture type based on 5-year age strata.
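A standardized incidence ratio is the number of observed events divided by the number expected from the comparison population's stratum-specific rates. A common stdlib-only sketch pairs it with Byar's approximation to the Poisson confidence interval; the counts below are illustrative, not the study's:

```python
import math

def sir_byar_ci(observed, expected, z=1.96):
    """Standardized incidence ratio (observed/expected) with an
    approximate 95% CI using Byar's approximation to the Poisson
    distribution for the observed count."""
    sir = observed / expected
    o = observed
    lo = o * (1 - 1/(9*o) - z/(3*math.sqrt(o)))**3 / expected
    hi = (o + 1) * (1 - 1/(9*(o+1)) + z/(3*math.sqrt(o+1)))**3 / expected
    return sir, lo, hi

# Illustrative: 50 fractures observed where 25 were expected
sir, lo, hi = sir_byar_ci(50, 25)
print(f"SIR {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The expected count in the study would be built by applying the matched comparison group's age- and sex-specific rates to the First Nations cohort's person-time.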

Results

First Nations people had significantly higher rates of any fracture (age- and sex-adjusted SIR 2.23, 95% CI 2.18–2.29). Hip fractures (SIR 1.88, 95% CI 1.61–2.14), wrist fractures (SIR 3.01, 95% CI 2.63–3.42) and spine fractures (SIR 1.93, 95% CI 1.79–2.20) occurred predominantly in older people and women. In contrast, craniofacial fractures (SIR 5.07, 95% CI 4.74–5.42) were predominant in men and younger adults.

Interpretation

First Nations people are a previously unidentified group at high risk for fracture.

Most of the epidemiologic data describing fractures have been derived from white populations,1 although it is known that there is ethnic variation in the epidemiology of fractures.2,3,4 Canadian First Nations people are known to suffer from a heavy burden of medical and social problems that may affect fracture rates.5 To date, however, there have been no satisfactory studies of fracture rates among North American Aboriginal groups. We sought to determine the overall and site-specific fracture rates of First Nations people compared with non-First Nations people in Manitoba.

9.
Background

A pregnant woman's psychological health is a significant predictor of postpartum outcomes. The Antenatal Psychosocial Health Assessment (ALPHA) form incorporates 15 risk factors associated with poor postpartum outcomes of woman abuse, child abuse, postpartum depression and couple dysfunction. We sought to determine whether health care providers using the ALPHA form detected more antenatal psychosocial concerns among pregnant women than providers practising usual prenatal care.

Methods

A randomized controlled trial was conducted in 4 communities in Ontario. Family physicians, obstetricians and midwives who see at least 10 prenatal patients a year enrolled 5 eligible women each. Providers in the intervention group attended an educational workshop on using the ALPHA form and completed the form with enrolled women. The control group provided usual care. After the women delivered, both groups of providers identified concerns related to the 15 risk factors on the ALPHA form for each patient and rated the level of concern. The primary outcome was the number of psychosocial concerns identified. Results were controlled for clustering.

Results

There were 21 (44%) providers randomly assigned to the ALPHA group and 27 (56%) to the control group. A total of 227 patients participated: 98 (43%) in the ALPHA group and 129 (57%) in the control group. ALPHA group providers were more likely than control group providers to identify psychosocial concerns (odds ratio [OR] 1.8, 95% confidence interval [CI] 1.1–3.0; p = 0.02) and to rate the level of concern as "high" (OR 4.8, 95% CI 1.1–20.2; p = 0.03). ALPHA group providers were also more likely to detect concerns related to family violence (OR 4.8, 95% CI 1.9–12.3; p = 0.001).

Interpretation

Using the ALPHA form helped health care providers detect more psychosocial risk factors for poor postpartum outcomes, especially those related to family violence. It is a useful prenatal tool, identifying women who would benefit from additional support and interventions.

The psychosocial health of a pregnant woman and her family is a significant predictor of intrapartum, newborn and postpartum outcomes.1,2,3,4 A critical review of the literature has identified an association between antenatal psychosocial risk factors and the poor postpartum outcomes of woman abuse, child abuse, postpartum depression and couple dysfunction.5

Clinicians have indicated that a practical tool to help them systematically collect and record prenatal psychosocial information would be helpful.6 Although specific and often well-validated tools are available to predict or detect child abuse, woman abuse or depression,7,8,9,10 clinicians are unlikely to use them because of time constraints.1 Other forms aid in collecting more comprehensive antenatal psychosocial data,11,12,13,14 but they are not evidence based, and were developed to predict obstetric or newborn rather than psychosocial outcomes.

In contrast, the Antenatal Psychosocial Health Assessment (ALPHA) form (Appendix 1) was designed to identify antenatal psychosocial risk factors for poor postnatal psychosocial outcomes. It incorporates 15 risk factors found through critical literature review5 to be associated with woman abuse, child abuse, postpartum depression and couple dysfunction.15 These risk factors are grouped intuitively by topic, with suggested questions, into 4 categories: family factors, maternal factors, substance use and family violence. The ALPHA form has been field-tested by obstetricians, family physicians, midwives and nurses,15,16 who have found using it to be feasible and useful.15 Pregnant women appreciate and feel comfortable with the psychosocial enquiry.15 The ALPHA form was developed as a screening tool to help providers systematically identify areas of psychosocial concern. Once feasibility was established,15 the next step was to determine whether using it in regular practice would increase the number of concerns identified.

We sought to determine whether health care providers using the ALPHA form detected more antenatal psychosocial concerns in their pregnant patients than clinicians practising usual prenatal care. A secondary objective was to determine women's and providers' satisfaction with the ALPHA form.

10.

Background

Medication-related visits to the emergency department are an important but poorly understood phenomenon. We sought to evaluate the frequency, severity and preventability of drug-related visits to the emergency department.

Methods

We performed a prospective observational study of randomly selected adults presenting to the emergency department over a 12-week period. Emergency department visits were identified as drug-related on the basis of assessment by a pharmacist research assistant and an emergency physician; discrepancies were adjudicated by 2 independent reviewers.

Results

Among the 1017 patients included in the study, the emergency department visit was identified as drug-related for 122 patients (12.0%, 95% confidence interval [CI] 10.1%–14.2%); of these, 83 visits (68.0%, 95% CI 59.0%–76.2%) were deemed preventable. Severity was classified as mild in 15.6% of the 122 cases, moderate in 74.6% and severe in 9.8%. The most common reasons for drug-related visits were adverse drug reactions (39.3%), nonadherence (27.9%) and use of the wrong or suboptimal drug (11.5%). The probability of admission was significantly higher among patients who had a drug-related visit than among those whose visit was not drug-related (OR 2.18, 95% CI 1.46–3.27, p < 0.001), and among those admitted, the median length of stay was longer (8.0 [interquartile range 23.5] v. 5.5 [interquartile range 10.0] days, p = 0.06).
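The 12.0% figure and its interval can be approximately reproduced from the raw counts with a Wilson score interval. The abstract does not state which interval method the authors used, so the endpoints may differ slightly from the published ones:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# 122 drug-related visits among 1017 patients
lo, hi = wilson_ci(122, 1017)
print(f"{122/1017:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```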

Interpretation

More than 1 in 9 emergency department visits are due to drug-related adverse events, a potentially preventable problem in our health care system. Adverse drug-related events are unfavourable occurrences related to the use or misuse of medications.1 It has been estimated that such events account for 17 million emergency department visits and 8.7 million hospital admissions annually in the United States.2,3 Between 1995 and 2000, costs associated with adverse drug-related events rose from US$76.6 billion to over US$177.4 billion.3,4 Adverse drug-related events have recently been evaluated in ambulatory care settings and among patients admitted to hospital,5–9 and it has been estimated that 5%–25% of hospital admissions are drug-related.7,8 Unfortunately, emergency department visits are not reflected in most hospital studies, because patients seen in the emergency department for an adverse drug-related event are typically not admitted.10 In addition, most research evaluating drug-related visits to the emergency department has involved retrospective studies or analysis of administrative data.11–13 Retrospective studies may underestimate the incidence of drug-related visits because information may be missing or inaccurately documented.14 Finally, studies performed to date have used variable definitions of “drug-related events,”1,10 which limits comparative evaluation and generalizability. Despite the burden of drug-related morbidity and mortality, prospective research characterizing drug-related visits to the emergency department has been limited.15–17 We sought to overcome some of the limitations of research in this area by using a prospective design and a comprehensive definition of adverse drug-related events. The purpose of this study was to evaluate the frequency, severity and preventability of drug-related visits to the emergency department of a large tertiary care hospital, to classify the visits by type of drug-related problem and to identify patient, prescriber, drug and system factors associated with these visits.

11.

Background

Patients undergoing hip or knee replacement are at high risk of developing a postoperative venous thromboembolism even after discharge from hospital. We sought to identify hospital and patient characteristics associated with receiving thromboprophylaxis after discharge and to compare the risk of short-term mortality among those who did or did not receive thromboprophylaxis.

Methods

We conducted a retrospective cohort study using system-wide hospital discharge summary records, physician billing information, medication reimbursement claims and demographic records. We included patients aged 65 years and older who received a hip or knee replacement and who were discharged home after surgery.

Results

In total we included 10 744 patients: 7058 who received a hip replacement and 3686 who received a knee replacement. The mean age was 75.4 (standard deviation [SD] 6.8) years, and 38% of patients were men. In total, 2059 (19%) patients received thromboprophylaxis at discharge. Patients discharged from university teaching hospitals were less likely than those discharged from community hospitals to receive thromboprophylaxis after discharge (odds ratio [OR] 0.89, 95% confidence interval [CI] 0.80–1.00). Patients were less likely to receive thromboprophylaxis after discharge if they had a longer hospital stay (15–30 days v. 1–7 days, OR 0.69, 95% CI 0.59–0.81). Patients were more likely to receive thromboprophylaxis if they had hip (v. knee) replacement, osteoarthritis, heart failure, atrial fibrillation or hypertension, higher (v. lower) income or if they were treated at medium-volume hospitals (69–116 hip and knee replacements per year). In total, 223 patients (2%) died in the 3-month period after discharge. The risk of short-term mortality was lower among those who received thromboprophylaxis after discharge (hazard ratio [HR] 0.34, 95% CI 0.20–0.57).

Interpretation

Fewer than 1 in 5 elderly patients discharged home after hip- or knee-replacement surgery received postdischarge thromboprophylaxis. Those prescribed these medications had a lower risk of short-term mortality. The benefits of and barriers to thromboprophylaxis therapy after discharge in this population require further study. Venous thromboembolism is a leading cause of mortality among patients in hospital.1,2 Major orthopedic surgery (e.g., hip or knee replacement) is associated with a high risk for postoperative venous thromboembolism.1,3,4 Because the clinical diagnosis of venous thromboembolism is unreliable and its first manifestation may be a life-threatening pulmonary embolism,5 it is recommended that patients undergoing hip or knee replacement receive routine thromboprophylaxis with anticoagulant therapy after surgery unless they have contraindications to anticoagulant therapy.1,3,5,6 Thromboprophylaxis is commonly administered for the entire hospital stay, which is usually between 4 and 14 days.7 Expert consensus guidelines recommend that patients undergoing hip or knee replacement receive thromboprophylaxis medications for at least 10 days after surgery.6 These guidelines also recommend extended thromboprophylaxis for up to 28–35 days after surgery for patients undergoing hip replacement.6 Although there is evidence that extended thromboprophylaxis after hospital discharge is effective for reducing the risk of venous thromboembolism among patients who undergo hip replacement,8 the benefit among patients who undergo knee replacement has not been established.6 Thromboprophylaxis after discharge is likely to most benefit patients at high risk for venous thromboembolism, such as those with cancer, heart failure or major respiratory disease.6–9 However, given that patients who undergo joint replacement are often elderly and have multiple comorbidities, the risks associated with extended thromboprophylaxis, particularly gastrointestinal bleeding and hemorrhagic strokes, may be substantial and may be relative contraindications for this therapy.10 Among patients discharged home after hip- or knee-replacement surgery, we sought to characterize the use of thromboprophylaxis after discharge and its consequences on risk of short-term mortality.

12.
An extracellular β-fructofuranosidase from the yeast Xanthophyllomyces dendrorhous was characterized biochemically, molecularly, and phylogenetically. This enzyme is a glycoprotein with an estimated molecular mass of 160 kDa, of which the N-linked carbohydrate accounts for 60% of the total mass. It displays optimum activity at pH 5.0 to 6.5, and its thermophilicity (with maximum activity at 65 to 70°C) and thermostability (with a T50 in the range 66 to 71°C) are higher than those exhibited by most yeast invertases. The enzyme was able to hydrolyze fructosyl-β-(2→1)-linked carbohydrates such as sucrose, 1-kestose, or nystose, although its catalytic efficiency, defined by the kcat/Km ratio, indicates that it hydrolyzes sucrose approximately 4.2 times more efficiently than 1-kestose. Unlike other microbial β-fructofuranosidases, the enzyme from X. dendrorhous produces neokestose as the main transglycosylation product, a potentially novel bifidogenic trisaccharide. Using a 41% (wt/vol) sucrose solution, the maximum fructooligosaccharide concentration reached was 65.9 g liter−1. In addition, we isolated and sequenced the X. dendrorhous β-fructofuranosidase gene (Xd-INV), showing that it encodes a putative mature polypeptide of 595 amino acids and that it shares significant identity with other fungal, yeast, and plant β-fructofuranosidases, all members of family 32 of the glycosyl-hydrolases. We demonstrate that the Xd-INV could functionally complement the suc2 mutation of Saccharomyces cerevisiae and, finally, a structural model of the new enzyme based on the homologous invertase from Arabidopsis thaliana has also been obtained. The basidiomycetous yeast Xanthophyllomyces dendrorhous (formerly Phaffia rhodozyma) produces astaxanthin (3,3′-dihydroxy-β,β-carotene-4,4′-dione [17, 25]).
Different industries have displayed great interest in this carotenoid pigment due to its attractive red-orange color and antioxidant properties, which has intensified the molecular and genetic study of this yeast. As a result, several genes involved in the astaxanthin biosynthetic pathway have been cloned and/or characterized, as well as some other genes such as those encoding actin (60), glyceraldehyde-3-phosphate dehydrogenase (56), endo-β-1,3-glucanase, and aspartic protease (4). In terms of the use of carbon sources, a β-amylase (9) and an α-glucosidase (33) with glucosyltransferase activity (12), as well as a yeast cell-associated invertase (41), have also been reported. Invertases or β-fructofuranosidases (EC 3.2.1.26) catalyze the release of β-fructose from the nonreducing termini of various β-d-fructofuranoside substrates. Yeast β-fructofuranosidases have been widely studied, including those of Saccharomyces cerevisiae (11, 14, 45, 46), Schizosaccharomyces pombe (36), Pichia anomala (40, 49), Candida utilis (5, 8), and Schwanniomyces occidentalis (2). They generally exhibit strong similarities where sequences are available, and they have been classified within family 32 of the glycosyl-hydrolases (GH) on the basis of their amino acid sequences. The catalytic mechanism proposed for the S. cerevisiae enzyme implies that an aspartate close to the N terminus (Asp-23) acts as a nucleophile, and a glutamate (Glu-204) acts as the acid/base catalyst (46). In addition, the three-dimensional structures of some enzymes in this family have been resolved, such as that of an exoinulinase from Aspergillus niger (var.
awamori; 37) and the invertase from Arabidopsis thaliana (55). As well as hydrolyzing sucrose, β-fructofuranosidases from microorganisms may also catalyze the synthesis of short-chain fructooligosaccharides (FOS), in which one to three fructosyl moieties are linked to the sucrose skeleton by different glycosidic bonds depending on the source of the enzyme (3, 52). FOS are one of the most promising ingredients for functional foods since they act as prebiotics (44), and they exert a beneficial effect on human health, participating in the prevention of cardiovascular diseases, colon cancer, or osteoporosis (28). Currently, Aspergillus fructosyltransferase is the main industrial producer of FOS (15, 52), producing a mixture of FOS with an inulin-type structure, containing β-(2→1)-linked fructose-oligomers (1F-FOS: 1-kestose, nystose, or 1F-fructofuranosylnystose). However, there is certain interest in the development of novel molecules that may have better prebiotic and physiological properties. In this context, β-(2→6)-linked FOS, where this link exists between two fructose units (6F-FOS: 6-kestose) or between fructose and the glucosyl moiety (6G-FOS: neokestose, neonystose, and neofructofuranosylnystose), may have enhanced prebiotic properties compared to commercial FOS (29, 34, 54). The enzymatic synthesis of 6-kestose and other related β-(2→6)-linked fructosyl oligomers has already been reported in yeasts such as S. cerevisiae (11) or Schwanniomyces occidentalis (2) and in fungi such as Thermoascus aurantiacus (26) or Sporotrichum thermophile (27). However, the production of FOS included in the 6G-FOS series has not been widely reported in microorganisms, probably because they are not generally produced (2, 15) or because they represent only a minor biosynthetic product (e.g., with baker's yeast invertase) (11). Most research into neo-FOS production has been carried out with Penicillium citrinum cells (19, 31, 32, 39).
In this context, neokestose is the main transglycosylation product accumulated by whole X. dendrorhous cells from sucrose (30), although the enzyme responsible for this reaction remained uncharacterized. Here, we describe the molecular, phylogenetic, and biochemical characterization of an extracellular β-fructofuranosidase from X. dendrorhous. Kinetic studies of its hydrolytic activity were performed using different substrates, and we investigated its fructosyltransferase capacity. The functionality of the gene analyzed was verified through its heterologous expression, and a structural model of this enzyme based on the homologous invertase from A. thaliana has also been obtained.

13.

Background

Quinolone-resistant Neisseria gonorrhoeae has swiftly emerged in Canada. We sought to determine its prevalence in the province of Ontario and to investigate risk factors for quinolone-resistant N. gonorrhoeae infection in a Canadian setting.

Methods

We used records from the Public Health Laboratory of the Ontario Agency for Health Protection and Promotion in Toronto, Ontario, and the National Microbiology Laboratory in Winnipeg, Manitoba, to generate epidemic curves for N. gonorrhoeae infection. We extracted limited demographic data from 2006 quinolone-resistant N. gonorrhoeae isolates and from a random sample of quinolone-susceptible isolates. We also extracted minimum inhibitory concentrations for commonly tested antibiotics.

Results

Between 2002 and 2006, the number of N. gonorrhoeae infections detected by culture decreased by 26% and the number of cases detected by nucleic acid amplification testing increased 6-fold. The proportion of N. gonorrhoeae isolates with resistance to quinolones increased from 4% to 28% over the same period. Analysis of 695 quinolone-resistant N. gonorrhoeae isolates and 688 quinolone-susceptible control isolates from 2006 showed a higher proportion of men (odds ratio [OR] 3.1, 95% confidence interval [CI] 2.3–4.1) and patients over 30 years of age (OR 3.1, 95% CI 2.4–3.8) in the quinolone-resistant group. The proportion of men who have sex with men appeared to be relatively similar in both groups (OR 1.4, 95% CI 1.1–1.8). Quinolone-resistant strains were more resistant to penicillin (p < 0.001), tetracycline (p < 0.001) and erythromycin (p < 0.001). All isolates were susceptible to cefixime, ceftriaxone, azithromycin and spectinomycin.

Interpretation

During 2006 in Ontario, 28% of N. gonorrhoeae isolates were resistant to quinolones. Infections in heterosexual men appear to have contributed significantly to the quinolone resistance rate. Medical practitioners should be aware of the widespread prevalence of quinolone-resistant N. gonorrhoeae and avoid quinolone use for empiric therapy. After declining for a number of years, Neisseria gonorrhoeae infections are once more on the rise in Canada. Between 1997 and 2007, reported incidence of the disease more than doubled, from 15 to 35 cases per 100 000.1 To address the emergence of quinolone-resistant N. gonorrhoeae strains, the empiric treatment regimens for N. gonorrhoeae infection were recently revised in the 2006 Canadian Guidelines on Sexually Transmitted Infections.2,3 Quinolones are no longer recommended for empiric therapy for N. gonorrhoeae infection.3 In Canada, quinolone resistance in N. gonorrhoeae isolates increased from an estimated 2% in 2001 to 16% in 2005.4 Demographic risk factors for quinolone-resistant N. gonorrhoeae infection have not been studied. American studies have associated quinolone-resistant N. gonorrhoeae infection with men who have sex with men,5,6 antibiotic use,5,7 age above 35 years,5 HIV infection5 and travel to Asia.6 Public health data from the provinces of Quebec8 and Alberta2 have also suggested an association between quinolone-resistant infection and men who have sex with men. In this study we generated epidemic curves for N. gonorrhoeae and quinolone-resistant N. gonorrhoeae infection in the province of Ontario. We also investigated demographic risk factors for quinolone-resistant N. gonorrhoeae infection.
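Several of the abstracts above summarize case–control comparisons as odds ratios with 95% confidence intervals (e.g., OR 3.1, 95% CI 2.3–4.1 for men among quinolone-resistant isolates). As a minimal sketch of how such a figure is computed from a 2 × 2 table, the snippet below uses the standard Woolf (log-normal) approximation; the cell counts are hypothetical, for illustration only, and are not taken from any study listed here.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf-approximation 95% CI.

    a, b = exposed / unexposed cases; c, d = exposed / unexposed controls.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 60 of 100 cases and 30 of 100 controls were exposed.
or_, lo, hi = odds_ratio(60, 40, 30, 70)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # OR 3.5 (95% CI 1.9-6.3)
```

A CI whose lower bound exceeds 1 (as here) corresponds to a statistically significant positive association at the 5% level, which is how the significance statements in these abstracts should be read.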

14.

Background

The presumed superiority of newer fluoroquinolones for the treatment of acute bacterial sinusitis is based on laboratory data but has not yet been established on clinical grounds.

Methods

We performed a meta-analysis of randomized controlled trials comparing the effectiveness and safety of fluoroquinolones and β-lactams in acute bacterial sinusitis.

Results

We identified 8 randomized controlled trials investigating the newer “respiratory” fluoroquinolones moxifloxacin, levofloxacin and gatifloxacin. In the primary effectiveness analysis involving 2133 intention-to-treat patients from 5 randomized controlled trials, the extent of clinical cure and improvement did not differ between fluoroquinolones and β-lactams (odds ratio [OR] 1.09, 95% confidence interval [CI] 0.85–1.39) at the test-of-cure assessment, which varied from 10 to 31 days after the start of treatment. Fluoroquinolones were associated with an increased chance of clinical success among the clinically evaluable patients in all of the randomized controlled trials (OR 1.29, 95% CI 1.03–1.63) and in 4 blinded randomized controlled trials (OR 1.45, 95% CI 1.05–2.00). There was no statistically significant difference between fluoroquinolones and amoxicillin–clavulanate (OR 1.24, 95% CI 0.93–1.65). Eradication or presumed eradication of the pathogens isolated before treatment was more likely with fluoroquinolone treatment than with β-lactam treatment (OR 2.11, 95% CI 1.09–4.08). In the primary safety analysis, adverse events did not differ between treatments (OR 1.17, 95% CI 0.86–1.59). However, more adverse events occurred with fluoroquinolone use than with β-lactam use in 2 blinded randomized controlled trials. The associations described here were generally consistent when we included 3 additional studies involving other fluoroquinolones (ciprofloxacin and sparfloxacin) in the analysis.

Interpretation

In the treatment of acute bacterial sinusitis, newer fluoroquinolones conferred no benefit over β-lactam antibiotics. The use of fluoroquinolones as first-line therapy cannot be endorsed. Acute bacterial sinusitis (more accurately known as rhinosinusitis, given that the nasal mucosa is commonly involved1) is one of the most frequent health disorders;2 it has an adverse impact on patients' quality of life3 and accounts for nearly 3 million ambulatory care visits in the United States annually4 and substantial health care costs.5 Acute bacterial sinusitis typically follows an episode of viral upper respiratory tract illness.2 The diagnosis of bacterial disease in routine clinical practice is usually based on the presence of a constellation of clinical manifestations.1 The bacterial pathogens most commonly involved are Streptococcus pneumoniae, Haemophilus influenzae and, to a lesser degree, Moraxella catarrhalis.6,7 Over the years, these pathogens have acquired various degrees of resistance to many traditional antibiotics.8 The benefit of older antibiotics over placebo in the treatment of acute bacterial sinusitis appears limited, mostly because of the high success rate achieved with placebo.9,10 However, newer, third- and fourth-generation fluoroquinolones possess excellent in vitro activity against the most common respiratory pathogens,11 and for this reason these drugs are often designated as “respiratory.” Based on analysis of the available laboratory data, current guidelines give the newer fluoroquinolones the highest ranking, in terms of expected clinical effectiveness, among the antimicrobials used to treat acute bacterial sinusitis (although admittedly the difference is marginal).7 The presumed clinical advantage of the respiratory fluoroquinolones over other classes of antimicrobials has not been clearly demonstrated in comparative clinical trials or meta-analyses.9,10 We aimed to comprehensively reassess the role of fluoroquinolones in the treatment of acute bacterial sinusitis, in terms of effectiveness and safety, by performing a meta-analysis of relevant randomized controlled trials.

15.

Background

Although octogenarians are being referred for coronary artery bypass grafting (CABG) with increasing frequency, contemporary outcomes have not been well described. We examined data from 4 Canadian centres to determine outcomes of CABG in this age group.

Methods

Data for the years 1996 to 2001 were examined in a comparison of octogenarians with patients less than 80 years of age. Logistic regression analysis was used to adjust for preoperative factors and to generate adjusted rates of mortality and postoperative stroke.

Results

A total of 15 070 consecutive patients underwent isolated CABG during the study period. Overall, 725 (4.8%) were 80 years of age or older, the proportion increasing from 3.8% in 1996 to 6.2% in 2001 (p for linear trend = 0.03). The crude rate of death was higher among the octogenarians (9.2% v. 3.8%; p < 0.001), as was the rate of stroke (4.7% v. 1.6%, p < 0.001). The octogenarians had a significantly greater burden of comorbid conditions and more urgent presentation at surgery. After adjustment, the octogenarians remained at greater risk for in-hospital death (odds ratio [OR] 2.64, 95% confidence interval [CI] 1.95–3.57) and stroke (OR 3.25, 95% CI 2.15–4.93). Mortality declined over time for both age groups (p for linear trend < 0.001 for both groups), but the incidence of postoperative stroke did not change (p for linear trend = 0.61 [age < 80 years] and 0.08 [age ≥ 80 years]). Octogenarians who underwent elective surgery had crude and adjusted rates of death (OR 1.31, 95% CI 0.60–2.90) and stroke (OR 1.59, 95% CI 0.57–4.44) that were higher than but not significantly different from those for non-octogenarians who underwent elective surgery.

Interpretation

In this study, rates of death and stroke were higher among octogenarians, although the adjusted differences in mortality over time were decreasing. The rate of adverse outcomes in association with elective surgery was similar for older and younger patients. The population is rapidly aging, and an increasing number of octogenarians are being referred for coronary artery bypass grafting (CABG).1,2 Previous single-centre reports from Canada3,4,5 and from abroad1,2,6,7,8 have concluded that elderly patients undergoing cardiac surgery have worse outcomes than younger patients. In addition, these studies have reported higher costs and slower recovery for octogenarians undergoing CABG, a finding that has generated debate over the appropriate use of health care resources.1,5,7,9 It has become increasingly clear that the results of CABG among octogenarians, although worse than among younger patients, are better than for percutaneous coronary interventions or medical therapy alone when the extent of the patient's coronary disease is such that revascularization with CABG is indicated.10,11 Similarly, the superior results of percutaneous coronary intervention relative to medical therapy in elderly patients with coronary disease will likely continue to increase the total number of octogenarians undergoing coronary angiography, which in turn will probably increase the number of patients being referred for CABG.10,12 Contemporary outcomes for octogenarians undergoing CABG in Canada have not been well described. If we are to have an informed debate and determine appropriate policy, it is important for these outcomes to be known. We aimed to describe the characteristics and outcomes of patients 80 years of age and older undergoing CABG in Canada and to compare their outcomes with those of younger patients. In addition, we examined changes in results over time.

16.

Background

Patients taking oral anticoagulant therapy balance the risks of hemorrhage and thromboembolism. We sought to determine the association between anticoagulation intensity and the risk of hemorrhagic and thromboembolic events. We also sought to determine how under- or overanticoagulation would influence patient outcomes.

Methods

We reviewed the MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials and CINAHL databases to identify studies involving patients taking anticoagulants that reported person-years of observation and the number of hemorrhages or thromboemboli in 3 or more discrete ranges of international normalized ratios. We estimated the overall relative and absolute risks of events specific to anticoagulation intensity.

Results

We included 19 studies. The risk of hemorrhage increased significantly at high international normalized ratios. Compared with the therapeutic ratio of 2–3, the relative risks (RR) of hemorrhage (with 95% confidence intervals [CIs]) were 2.7 (1.8–3.9; p < 0.01) at a ratio of 3–5 and 21.8 (12.1–39.4; p < 0.01) at a ratio greater than 5. The risk of thromboemboli increased significantly at ratios less than 2, with a relative risk of 3.5 (95% CI 2.8–4.4; p < 0.01). The risk of hemorrhagic or thromboembolic events was lower at ratios of 3–5 (RR 1.8, 95% CI 1.2–2.6) than at ratios of less than 2 (RR 2.4, 95% CI 1.9–3.1; p = 0.10). We found that a ratio of 2–3 had the lowest absolute risk (AR) of events (AR 4.3%/yr, 95% CI 3.0%–6.3%).

Conclusions

The risks of hemorrhage and thromboemboli are minimized at international normalized ratios of 2–3. Ratios that are moderately higher than this therapeutic range appear safe and more effective than subtherapeutic ratios. Oral anticoagulant therapy is essential for the treatment and prevention of many thromboembolic disorders. Since anticoagulants can cause serious adverse events,1–3 physicians monitor the international normalized ratios of patients taking these drugs to ensure that their ratios fall within a target range. An international normalized ratio of 2–3 is the most common target range. Results of previous studies revealed an increased risk of bleeding among patients whose ratios exceeded 4, an increased risk of stroke among patients whose ratios were 1.5–2 and a decreased risk of stroke at a ratio of 2.4.4,5 However, the evidence supporting the range of 2–3 has some deficiencies. We sought to determine whether the risk of hemorrhagic and thromboembolic events is minimized at an international normalized ratio of 2–3 among patients taking anticoagulants. In addition, it has been observed that patients spend more time with a ratio below 2 than above 3.6,7 The impact of such systematic underanticoagulation on patient outcomes is unknown. We sought to determine the effect of under- or overanticoagulation on the risk of thromboemboli and hemorrhage.
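The relative risks in this meta-analysis are rate ratios estimated from event counts observed over person-years of follow-up within each international normalized ratio band. A minimal sketch of that calculation using the usual log-normal confidence interval is below; the event counts and person-years are hypothetical, chosen only to illustrate the arithmetic, and are not the study's data.

```python
import math

def rate_ratio(events_a, py_a, events_b, py_b, z=1.96):
    """Incidence-rate ratio (group a vs. group b) with a 95% CI.

    Log-normal approximation: SE of log(RR) = sqrt(1/events_a + 1/events_b).
    """
    rr = (events_a / py_a) / (events_b / py_b)
    se = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 30 hemorrhages over 500 person-years at high INR
# vs. 20 hemorrhages over 8000 person-years in the therapeutic range.
rr, lo, hi = rate_ratio(30, 500, 20, 8000)
print(f"RR {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # RR 24.0 (95% CI 13.6-42.3)
```

Pooling such band-specific rates across studies (as done here for 3 or more discrete INR ranges) then allows the absolute and relative risks to be compared across anticoagulation intensities.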

17.
18.

Background

Ethnic disparities in access to health care and health outcomes are well documented. It is unclear whether similar differences exist between Aboriginal and non-Aboriginal people with chronic kidney disease in Canada. We determined whether access to care differed between status Aboriginal people (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease.

Methods

We identified 106 511 non-Aboriginal and 1182 Aboriginal patients with chronic kidney disease (estimated glomerular filtration rate less than 60 mL/min/1.73 m²). We compared outcomes, including hospital admissions, that may have been preventable with appropriate outpatient care (ambulatory-care–sensitive conditions) as well as use of specialist services, including visits to nephrologists and general internists.

Results

Aboriginal people were almost twice as likely as non-Aboriginal people to be admitted to hospital for an ambulatory-care–sensitive condition (rate ratio 1.77, 95% confidence interval [CI] 1.46–2.13). Aboriginal people with severe chronic kidney disease (estimated glomerular filtration rate < 30 mL/min/1.73 m²) were 43% less likely than non-Aboriginal people with severe chronic kidney disease to visit a nephrologist (hazard ratio 0.57, 95% CI 0.39–0.83). There was no difference in the likelihood of visiting a general internist (hazard ratio 1.00, 95% CI 0.83–1.21).

Interpretation

Increased rates of hospital admissions for ambulatory-care–sensitive conditions and a reduced likelihood of nephrology visits suggest potential inequities in care among status Aboriginal people with chronic kidney disease. The extent to which this may contribute to the higher rate of kidney failure in this population requires further exploration. Ethnic disparities in access to health care are well documented;1,2 however, the majority of studies include black and Hispanic populations in the United States. The poorer health status and increased mortality among Aboriginal populations than among non-Aboriginal populations,3,4 particularly among those with chronic medical conditions,5,6 raise the question as to whether there is differential access to health care and management of chronic medical conditions in this population. The prevalence of end-stage renal disease, which commonly results from chronic kidney disease, is about twice as common among Aboriginal people as it is among non-Aboriginal people.7,8 Given that the progression of chronic kidney disease can be delayed by appropriate therapeutic interventions9,10 and that delayed referral to specialist care is associated with increased mortality,11,12 issues such as access to health care may be particularly important in the Aboriginal population. Although previous studies have suggested that there is decreased access to primary and specialist care in the Aboriginal population,13–15 these studies are limited by the inclusion of patients from a single geographically isolated region,13 the use of survey data,14 and the inability to differentiate between different types of specialists and reasons for the visit.15 In addition to physician visits, admission to hospital for ambulatory-care–sensitive conditions (conditions that, if managed effectively in an outpatient setting, do not typically result in admission to hospital) has been used as a measure of access to appropriate outpatient care.16,17 Thus, admission to hospital for an ambulatory-care–sensitive condition reflects a potentially preventable complication resulting from inadequate access to care. Our objective was to determine whether access to health care differs between status Aboriginal people (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease. We assessed differences in care by 2 measures: admission to hospital for an ambulatory-care–sensitive condition related to chronic kidney disease; and receipt of nephrology care for severe chronic kidney disease as recommended by clinical practice guidelines.18

19.
Phenoxyalkanoic acid (PAA) herbicides are widely used in agriculture. Biotic degradation of such herbicides occurs in soils and is initiated by α-ketoglutarate- and Fe²⁺-dependent dioxygenases encoded by tfdA-like genes (i.e., tfdA and tfdAα). Novel primers and quantitative kinetic PCR (qPCR) assays were developed to analyze the diversity and abundance of tfdA-like genes in soil. Five primer sets targeting tfdA-like genes were designed and evaluated. Primer sets 3 to 5 specifically amplified tfdA-like genes from soil, and a total of 437 sequences were retrieved. Coverages of gene libraries were 62 to 100%, up to 122 genotypes were detected, and up to 389 genotypes were predicted to occur in the gene libraries as indicated by the richness estimator Chao1. Phylogenetic analysis of in silico-translated tfdA-like genes indicated that soil tfdA-like genes were related to those of group 2 and 3 Bradyrhizobium spp., Sphingomonas spp., and uncultured soil bacteria. Soil-derived tfdA-like genes were assigned to 11 clusters, 4 of which were composed of novel sequences from this study, indicating that soil harbors novel and diverse tfdA-like genes. Correlation analysis of 16S rRNA and tfdA-like gene similarity indicated that any two bacteria with D > 20% of group 2 tfdA-like gene-derived protein sequences belong to different species. Thus, data indicate that the soil analyzed harbors at least 48 novel bacterial species containing group 2 tfdA-like genes. Novel qPCR assays were established to quantify such new tfdA-like genes. Copy numbers of tfdA-like genes were 1.0 × 10⁶ to 65 × 10⁶ per gram (dry weight) soil in four different soils, indicating that hitherto-unknown, diverse tfdA-like genes are abundant in soils. Phenoxyalkanoic acid (PAA) herbicides such as MCPA (4-chloro-2-methyl-phenoxyacetic acid) and 2,4-D (2,4-dichlorophenoxyacetic acid) are widely used to control broad-leaf weeds in agricultural as well as nonagricultural areas (19, 77).
Degradation occurs primarily under oxic conditions in soil, and microorganisms play a key role in the degradation of such herbicides (62, 64). Although both MCPA and 2,4-D are degraded relatively rapidly in soil (32, 45), they are potential groundwater contaminants (10, 56, 70), accentuating the importance of PAA herbicide-degrading bacteria in soils (e.g., references 3, 5, 6, 20, 41, 59, and 78).

Degradation can occur cometabolically or be associated with energy conservation (15, 54). The first step in the degradation of 2,4-D and MCPA is catalyzed by the product of cadAB or tfdA-like genes (29, 30, 35, 67), an α-ketoglutarate (α-KG)- and Fe2+-dependent dioxygenase. TfdA removes the acetate side chain of 2,4-D and MCPA to produce 2,4-dichlorophenol and 4-chloro-2-methylphenol, respectively, and glyoxylate, while oxidizing α-ketoglutarate to CO2 and succinate (16, 17).

Organisms capable of PAA herbicide degradation are phylogenetically diverse and belong to the Alpha-, Beta-, and Gammaproteobacteria and the Bacteroidetes/Chlorobi group (e.g., references 2, 14, 29-34, 39, 60, 68, and 71). These bacteria harbor tfdA-like genes (i.e., tfdA or tfdAα) and are categorized into three groups on an evolutionary and physiological basis (34). The first group consists of beta- and gammaproteobacteria and can be further divided into three distinct classes based on their tfdA genes (30, 46). Class I tfdA genes are closely related to those of Cupriavidus necator JMP134 (formerly Ralstonia eutropha). Class II tfdA genes consist of those of Burkholderia sp. strain RASC and a few strains that are 76% identical to class I tfdA genes. Class III tfdA genes are 77% identical to class I and 80% identical to class II tfdA genes and are linked to MCPA degradation in soil (3). The second group consists of alphaproteobacteria closely related to Bradyrhizobium spp., with tfdAα genes having 60% identity to tfdA of group 1 (18, 29, 34). 
The third group also harbors tfdAα genes and consists of Sphingomonas spp. within the alphaproteobacteria (30).

Diverse PAA herbicide degraders of all three groups have been identified in soil by cultivation-dependent studies (32, 34, 41, 78). Besides CadAB, TfdA and certain TfdAα proteins catalyze the conversion of PAA herbicides (29, 30, 35). All groups of tfdA-like genes are potentially linked to the degradation of PAA herbicides, although alternative primary functions of group 2 and 3 TfdAs have been proposed (30, 35). However, recent cultivation-independent studies have focused on 16S rRNA genes or solely on group 1 tfdA sequences in soil (e.g., references 3-5, 13, and 41). Whether group 2 and 3 tfdA-like genes are also quantitatively linked to the degradation of PAA herbicides in soils is unknown; tools that target a broad range of tfdA-like genes are needed to resolve this issue. Primers used in previous studies to assess the diversity of tfdA-like sequences were based on alignments of approximately 50% or less of the sequences available to date (3, 20, 29, 32, 39, 47, 58, 73). Primers specifically targeting all major groups of tfdA-like genes, to assess and quantify a broad diversity of potential PAA degraders in soil, have been unavailable. Thus, the objectives of this study were (i) to develop primers specific for all three groups of tfdA-like genes, (ii) to establish quantitative kinetic PCR (qPCR) assays based on such primers for different soil samples, and (iii) to assess the diversity and abundance of tfdA-like genes in soil.
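The richness estimates mentioned above rely on the classic Chao1 estimator, which extrapolates total genotype richness from how many genotypes were seen exactly once or twice in a clone library. A minimal sketch (the standard textbook formula applied to made-up abundance counts, not the study's data):

```python
def chao1(abundances):
    """Classic Chao1 richness estimate from genotype abundance counts.

    S_chao1 = S_obs + F1^2 / (2 * F2), where F1 and F2 are the numbers of
    genotypes observed exactly once (singletons) and exactly twice (doubletons).
    """
    s_obs = sum(1 for a in abundances if a > 0)   # observed genotypes
    f1 = sum(1 for a in abundances if a == 1)     # singletons
    f2 = sum(1 for a in abundances if a == 2)     # doubletons
    if f2 == 0:
        # Bias-corrected variant avoids division by zero when no doubletons exist
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 * f1 / (2 * f2)

# Hypothetical clone-library counts: 3 singletons, 1 doubleton, 2 abundant genotypes
counts = [1, 1, 1, 2, 5, 9]
print(chao1(counts))  # 6 observed + 3^2 / (2 * 1) = 10.5
```

Because the estimate is driven entirely by singletons and doubletons, libraries with many rare genotypes (as in the soils described above) yield Chao1 values far exceeding the observed count.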

20.

Background

Assessing marijuana's impact on intelligence quotient (IQ) has been hampered by a lack of evaluation of subjects before they begin to use this substance. Using data from a group of young people whom we have been following since birth, we examined IQ scores before, during and after cessation of regular marijuana use to determine any impact of the drug on this measure of cognitive function.

Methods

We determined marijuana use for seventy 17- to 20-year-olds through self-reporting and urinalysis. IQ difference scores were calculated by subtracting each person's IQ score at 9–12 years (before initiation of drug use) from his or her score at 17–20 years. We then compared the IQ difference scores of current heavy users (at least 5 joints per week), current light users (fewer than 5 joints per week), former users (who had not smoked regularly for at least 3 months) and non-users (who had never smoked more than once per week and had not smoked in the past 2 weeks).
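The difference-score analysis above can be sketched as a small grouping computation. This is a hypothetical illustration only: the subject tuples, thresholds, and field names are invented to mirror the categories described, not the study's actual data.

```python
# Sketch of the IQ difference-score grouping: IQ at ages 17-20 minus IQ at
# ages 9-12, bucketed by marijuana-use category. All numbers are hypothetical.
from statistics import mean

# (iq_at_9_to_12, iq_at_17_to_20, joints_per_week, former_user)
subjects = [
    (105, 100, 7, False),  # current heavy user (>= 5 joints/week)
    (110, 116, 2, False),  # current light user (< 5 joints/week)
    (98,  102, 0, True),   # former user (no regular use for >= 3 months)
    (102, 105, 0, False),  # non-user
]

def category(joints_per_week, former_user):
    """Assign each subject to one of the four study groups."""
    if former_user:
        return "former"
    if joints_per_week >= 5:
        return "heavy"
    if joints_per_week > 0:
        return "light"
    return "non-user"

groups = {}
for iq_child, iq_adult, joints, former in subjects:
    groups.setdefault(category(joints, former), []).append(iq_adult - iq_child)

for name, diffs in groups.items():
    print(name, mean(diffs))
```

With these toy numbers, only the heavy-use group shows a negative mean difference score, mirroring the pattern reported in the Results.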

Results

Current marijuana use was significantly correlated (p < 0.05), in a dose-related fashion, with a decline in IQ over the ages studied. The comparison of IQ difference scores showed an average decrease of 4.1 points in current heavy users (p < 0.05), compared with gains in IQ points for current light users (5.8), former users (3.5) and non-users (2.6).

Interpretation

Current marijuana use had a negative effect on global IQ score only in subjects who smoked 5 or more joints per week. A negative effect was not observed among subjects who had previously been heavy users but were no longer using the substance. We conclude that marijuana does not have a long-term negative impact on global intelligence. Whether the absence of a residual marijuana effect would also be evident in more specific cognitive domains such as memory and attention remains to be ascertained.

Marijuana produces well-documented, acute cognitive changes that last for several hours after the drug has been ingested.1,2,3 Whether it produces cognitive dysfunction beyond this period of acute intoxication is much more difficult to establish. Approaches to investigating long-lasting effects include clinical assessment of long-term users,4,5,6 observations of subcultures in countries where long-term daily use of cannabis has been the cultural norm for decades7,8,9 and marijuana administration studies in which subjects with a history of use ranging from infrequent to extensive are given the drug in controlled laboratory settings after various periods of abstinence.10,11,12 As discussed in several reviews of the literature,1,13,14 the findings have been equivocal.

Most studies that examined heavy marijuana users for possible cognitive dysfunction lasting beyond the acute intoxication period assessed subjects after an abstinence period of only a day or two.10,12,15,16 The fact that cannabinoid metabolites have been detected in the urine of long-term marijuana users after weeks or even months of abstinence17,18,19 compromises the interpretation of these studies. 
To account for potential pre-existing differences between users and non-users, studies have typically matched the comparison group with the user group in terms of non-marijuana variables.6,20 Suggestions for improving study designs13,14 have emphasized both the need for comparison groups to be as similar as possible to the drug-using group and the need for a prolonged abstinence period. The most desirable procedure would involve a longitudinal, prospective design in which cognitive measures were available for all non-using and using subjects before and after marijuana consumption had been initiated by the users.15

The Ottawa Prenatal Prospective Study (OPPS), underway since 1978, satisfies these criteria. The study permits both within-subject and between-subject comparisons among relatively low-risk non-users and users before, during and after quitting regular marijuana use. The primary objective of the OPPS is the neuropsychologic assessment of children exposed prenatally to marijuana or cigarettes. Women who did and did not use marijuana and cigarettes volunteered to participate during their pregnancy, and their children, now between the ages of 17 and 20 years, have been assessed since birth. Details of the recruitment of the largely middle-class families, the assessment procedures and the findings for the children from birth to adolescence have been summarized elsewhere.21,22

The objectives of the current study were as follows: to determine whether current, regular marijuana use is predictive of a decline in IQ from pre-usage levels; to determine whether a differential effect on IQ occurs with heavy versus light current, regular use; and to determine whether any IQ effects persist after subjects cease using marijuana for at least 3 months.
