Similar articles
20 similar articles retrieved (search time: 31 ms)
1.
2.

Background

Treatment of osteoarthritis with oral NSAID therapy provides pain relief but carries a substantial risk of adverse effects. Topical NSAID therapy offers an alternative to oral treatment, with the potential for a reduced risk of side effects. The objective of this trial was to assess the safety and efficacy of a topical diclofenac solution in relieving the symptoms of primary osteoarthritis of the knee.

Methods

We identified 248 men and women from southern Ontario with primary osteoarthritis of the knee and at least moderate pain. The patients were randomly assigned to apply 1 of 3 solutions to their painful knee for 4 weeks: a topical diclofenac solution (1.5% wt/wt diclofenac sodium in a carrier containing dimethyl sulfoxide [DMSO]); a vehicle-control solution (the carrier containing DMSO but no diclofenac); and a placebo solution (a modified carrier with a token amount of DMSO for blinding purposes but no diclofenac). The primary efficacy end point was pain relief, measured by the Western Ontario and McMaster Universities (WOMAC) LK3.0 Osteoarthritis Index pain subscale. Secondary end points were improved physical function and reduced stiffness (measured by the WOMAC subscales), reduced pain on walking and patient global assessment (PGA). Safety was evaluated with clinical and laboratory assessments.

Results

In the intent-to-treat group the mean change (and 95% confidence interval [CI]) in pain score from baseline to final assessment was significantly greater for the patients who applied the topical diclofenac solution (–3.9 [– 4.8 to –2.9]) than for those who applied the vehicle-control solution (–2.5 [– 3.3 to –1.7]; p = 0.023) or the placebo solution (–2.5 [–3.3 to –1.7]; p = 0.016). For the secondary variables the topical diclofenac solution also revealed superiority to the vehicle-control and placebo solutions, leading to mean changes (and 95% CIs) of –11.6 (–14.7 to –8.4; p = 0.002 and 0.014, respectively) in physical function, –1.5 (–1.9 to –1.1; p = 0.015 and 0.002, respectively) in stiffness and –0.8 (–1.1 to –0.6; p = 0.003 and 0.015, respectively) in pain on walking. The PGA scores were significantly better for the patients who applied the topical diclofenac solution than for those who applied the other 2 solutions (p = 0.039 and 0.025, respectively). The topical diclofenac solution caused some skin irritation, mostly minor local skin dryness, in 30 (36%) of the 84 patients, but this led to discontinuation of treatment in only 5 (6%) of the cases. The incidence of gastrointestinal events did not differ between the treatment groups. No serious gastrointestinal or renal adverse events were reported or detected by means of laboratory testing.
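
The efficacy results above are within-group mean changes with t-based 95% confidence intervals. As a rough illustration of that calculation only (the scores below are simulated placeholders, not trial data), one way to compute such an interval in Python is:

```python
# Illustrative sketch: mean change from baseline with a t-based 95% CI, the form of
# estimate reported for the WOMAC pain subscale. All values here are invented.
import numpy as np
from scipy import stats

def mean_change_ci(baseline, final, alpha=0.05):
    """Within-group mean change (final - baseline) with a t-based confidence interval."""
    change = np.asarray(final) - np.asarray(baseline)
    mean = change.mean()
    half_width = stats.t.ppf(1 - alpha / 2, df=change.size - 1) * stats.sem(change)
    return mean, (mean - half_width, mean + half_width)

rng = np.random.default_rng(0)
baseline = rng.normal(12, 3, size=84)               # placeholder WOMAC pain scores
final = baseline + rng.normal(-3.9, 4.0, size=84)   # placeholder post-treatment scores
print(mean_change_ci(baseline, final))
```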

Interpretation

This topical diclofenac solution can provide safe, site-specific treatment for osteoarthritic pain, with only minor local skin irritation and minimal systemic side effects.

Osteoarthritis is a degenerative joint disease affecting articular cartilage and underlying bone, commonly of the knee.1 Current treatment includes the oral use of NSAIDs, either nonselective or cyclooxygenase-2 (COX-2)-selective. These agents carry a substantial risk of clinically significant adverse effects, particularly on the gastrointestinal2,3 and renal systems.4 Although the incidence of gastrointestinal complications has been reported to be lower with COX-2-selective NSAIDs than with nonselective NSAIDs,5,6,7 the former have been linked to adverse renal effects8 and an increased risk of cardiovascular complications.9
The need for safer treatment of osteoarthritis has led to research into the topical use of NSAIDs.10,11,12 Recent reviews of the few published placebo-controlled studies suggest that topical NSAID therapy can relieve pain13,14,15 with few gastrointestinal side effects.16 Current practice guidelines advocate the use of topical therapy, including NSAIDs, in the management of osteoarthritis.17,18,19 A diclofenac solution containing the absorption enhancer dimethyl sulfoxide (DMSO) was developed for site-specific topical application. The objective of this study was to demonstrate that applying this solution to a painful knee with primary osteoarthritis could provide symptom relief with minimal systemic side effects.

3.

Background

Up to 50% of adverse events that occur in hospitals are preventable. Language barriers and disabilities that affect communication have been shown to decrease quality of care. We sought to assess whether communication problems are associated with an increased risk of preventable adverse events.

Methods

We randomly selected 20 general hospitals in the province of Quebec with at least 1500 annual admissions. Of the 145 672 admissions to the selected hospitals in 2000/01, we randomly selected and reviewed 2355 charts of patients aged 18 years or older. Reviewers abstracted patient characteristics, including communication problems, and details of hospital admission, and assessed the cause and preventability of identified adverse events. The primary outcome was adverse events.

Results

Of 217 adverse events, 63 (29%) were judged to be preventable, for an overall population rate of 2.7% (95% confidence interval [CI] 2.1%–3.4%). We found that patients with preventable adverse events were significantly more likely than those without such events to have a communication problem (odds ratio [OR] 3.00; 95% CI 1.43–6.27) or a psychiatric disorder (OR 2.35; 95% CI 1.09–5.05). Patients who were admitted urgently were significantly more likely than patients whose admissions were elective to experience an event (OR 1.64, 95% CI 1.07–2.52). Preventable adverse events were mainly due to drug errors (40%) or poor clinical management (32%). We found that patients with communication problems were more likely than patients without these problems to experience multiple preventable adverse events (46% v. 20%; p = 0.05).
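
The associations above are expressed as odds ratios with 95% confidence intervals. A minimal sketch of the underlying arithmetic for an unadjusted odds ratio (the 2 × 2 counts are invented, not the study's data) might look like this:

```python
# Unadjusted odds ratio with a Wald (log-normal) 95% CI from a 2x2 table,
# the kind of summary reported above (e.g., OR 3.00, 95% CI 1.43-6.27).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = exposed with/without event; c, d = unexposed with/without event."""
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log)
    upper = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, (lower, upper)

print(odds_ratio_ci(15, 120, 48, 2172))  # hypothetical counts for illustration
```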

Interpretation

Patients with communication problems appeared to be at highest risk for preventable adverse events. Interventions to reduce the risk for these patients need to be developed and evaluated.

Patient safety is a priority in modern health care systems. From 3% to 17% of hospital admissions result in an adverse event,1–8 and almost 50% of these events are considered to be preventable.3,9–12 An adverse event is an unintended injury or complication caused by delivery of clinical care rather than by the patient's condition. The occurrence of adverse events has been well documented; however, identifying modifiable risk factors that contribute to the occurrence of preventable adverse events is critical. Studies of preventable adverse events have focused on many factors, but researchers have only recently begun to evaluate the role of patient characteristics.2,9,12,13 Older patients and those with a greater number of health problems have been shown to be at increased risk for preventable adverse events.10,11 However, previous studies have repeatedly suggested the need to investigate more diverse, modifiable risk factors.3,6,7,10,11,14–16
Language barriers and disabilities that affect communication have been shown to decrease quality of care;16–20 however, their impact on preventable adverse events needs to be investigated. Patients with physical and sensory disabilities, such as deafness and blindness, have been shown to face considerable barriers when communicating with health care professionals.20–24 Communication disorders are estimated to affect 5%–10% of the general population,25 and in one study more than 15% of admissions to university hospitals involved patients with 1 or more disabilities severe enough to prevent almost any form of communication.26 In addition, patients with communication disabilities are already at increased risk for depression and other comorbidities.27–29 Determining whether they are at increased risk for preventable adverse events would permit risk stratification at the time of admission and targeted preventive strategies.
We sought to estimate the extent to which preventable adverse events that occurred in hospital could be predicted by conditions that affect a patient's ability to communicate.

4.

Background

Elevated waist circumference and body mass index (BMI), both traditional measures of obesity, are accepted risk factors for type 2 diabetes mellitus. Girls who are obese experience earlier onset of puberty and possibly greater breast development. We sought to evaluate whether a woman's breast size in late adolescence is associated with an increased risk of type 2 diabetes mellitus in adulthood.

Methods

In conjunction with the ongoing Nurses' Health Study II, which began to study risk factors for breast cancer among women in 1989, we conducted a prospective cohort study involving 92 106 of the participants. We assessed the risk of type 2 diabetes mellitus in relation to self-reported bra cup size at age 20, categorized as ≤ A, B, C and ≥ D cups.

Results

The mean age of participants at baseline was 38.1 years. A total of 1844 new cases of type 2 diabetes mellitus arose at a mean age of 44.9 years during 886 443 person-years of follow-up. Relative to bra cup size ≤ A, the respective age-adjusted hazard ratios (and 95% confidence intervals [CIs]) were 2.30 (1.99–2.66) for B cup, 4.32 (3.71–5.04) for C cup and 4.99 (4.12–6.05) for ≥ D cup. Upon further adjustments for age at menarche, parity, physical activity, smoking status, diet, multivitamin use, family history of diabetes mellitus, BMI at age 18 and current BMI, the corresponding hazard ratios (and 95% CIs) were 1.37 (1.18–1.59) for B cup, 1.80 (1.53- 2.11) for C cup and 1.64 (1.34–2.01) for ≥ D cup. The addition of waist circumference to this model minimally changed the hazard ratios (and 95% CIs): 1.32 (1.14–1.53) for B cup, 1.71 (1.46–2.01) for C cup and 1.58 (1.29–1.94) for ≥ D cup.
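
The hazard ratios above come from Cox proportional hazards models with bra cup size entered as a categorical exposure (reference ≤ A cup). A hedged sketch of that kind of model, using the lifelines library on simulated data with invented column names (not the study's code or dataset), is shown below:

```python
# Sketch of an age-adjusted Cox proportional hazards model with a categorical exposure
# coded as indicator columns; the omitted category (cup <= A) is the reference.
# All data are simulated placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
cup = rng.integers(0, 4, n)                       # 0: <=A (reference), 1: B, 2: C, 3: >=D
df = pd.DataFrame({
    "age": rng.normal(38, 4, n).round(1),
    "cup_B": (cup == 1).astype(float),            # indicator coding; <=A is the
    "cup_C": (cup == 2).astype(float),            # omitted reference category
    "cup_D_plus": (cup == 3).astype(float),
    "followup_years": rng.exponential(10, n).round(2),
    "diabetes": rng.integers(0, 2, n),            # 1 = incident type 2 diabetes
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="diabetes")
cph.print_summary()   # hazard ratios appear as exp(coef)
```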

Interpretation

A large bra cup size at age 20 may be a predictor of type 2 diabetes mellitus in middle-aged women. Whether this relation is independent of traditional indicators of obesity remains to be determined.

Obesity is an established risk factor for type 2 diabetes mellitus.1,2 Affected individuals show signs of insulin resistance and hyperinsulinemia, a process that may begin in childhood.3,4 Pre-adolescent obesity is also an important predictor of age of onset of breast development in young women, and of breast size after puberty.5,6 Premature onset of puberty is preceded by childhood insulin resistance, hyperinsulinemia and hyperandrogenemia,7 which may persist after puberty8 and continue into early adulthood.9
Although an elevated body mass index (BMI)10,11 and central adiposity12 are established risk factors for insulin resistance and the onset of type 2 diabetes mellitus, little is known about the contribution of extra-abdominal adipose tissue, including breast tissue, about 60% of which is fatty tissue, to this process.13,14 We hypothesized that a woman's breast size in late adolescence reflects her predisposition to insulin resistance and type 2 diabetes mellitus that is both additive to, and independent of, BMI. We explored this hypothesis in conjunction with the Nurses' Health Study II by relating bra cup size, a proxy for breast size, to the onset of type 2 diabetes mellitus.

5.

Background

Tools for early identification of workers with back pain who are at high risk of adverse occupational outcome would help concentrate clinical attention on the patients who need it most, while helping reduce unnecessary interventions (and costs) among the others. This study was conducted to develop and validate clinical rules to predict the 2-year work disability status of people consulting for nonspecific back pain in primary care settings.

Methods

This was a 2-year prospective cohort study conducted in 7 primary care settings in the Quebec City area. The study enrolled 1007 workers (participation rate 68.4% of potential participants expected to be eligible) aged 18–64 years who consulted for nonspecific back pain associated with at least 1 day's absence from work. The majority (86%) completed 5 telephone interviews documenting a large array of variables. Clinical information was abstracted from the medical files. The outcome measure was “return to work in good health” at 2 years, a variable that combined patients' occupational status, functional limitations and recurrences of work absence. Predictive models of 2-year outcome were developed with a recursive partitioning approach on a 40% random sample of our study subjects, then validated on the rest.
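
As a rough analogue of the derivation-and-validation strategy described above (a recursive partitioning model built on a 40% random sample and checked on the remainder), the sketch below fits a classification tree on simulated data and reports the negative predictive value; the predictors, sample and software are placeholders, not the study's:

```python
# Derivation/validation sketch: fit a classification tree (recursive partitioning) on a
# 40% random sample and evaluate negative predictive value on the remaining 60%.
# Features and outcomes are randomly generated for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n = 1000
X = rng.integers(0, 2, size=(n, 7))      # 7 binary baseline predictors (hypothetical)
y = rng.integers(0, 2, size=n)           # 1 = adverse occupational outcome at 2 years

X_derive, X_validate, y_derive, y_validate = train_test_split(
    X, y, train_size=0.40, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_derive, y_derive)
pred = tree.predict(X_validate)

tn, fp, fn, tp = confusion_matrix(y_validate, pred).ravel()
npv = tn / (tn + fn) if (tn + fn) else float("nan")   # predicted low risk who truly did well
print(f"negative predictive value: {npv:.2f}")
```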

Results

The best predictive model included 7 baseline variables (patient's recovery expectations, radiating pain, previous back surgery, pain intensity, frequent change of position because of back pain, irritability and bad temper, and difficulty sleeping) and was particularly efficient at identifying patients with no adverse occupational outcome (negative predictive value 78%–94%).

Interpretation

A clinical prediction rule accurately identified a large proportion of workers with back pain consulting in a primary care setting who were at a low risk of an adverse occupational outcome.

Since the 1950s, back pain has taken on the proportions of a veritable epidemic, counting now among the 5 most frequent reasons for visits to physicians' offices in North America1,2,3 and ranking sixth among health problems generating the highest direct medical costs.4 Because of its high incidence and associated expense, effective intervention for back pain has great potential for improving population health and for freeing up extensive societal resources.
So-called red flags to identify pain that is specific (i.e., pain in the back originating from tumours, fractures, infections, cauda equina syndrome, visceral pain and systemic disease)5 account for about 3% of all cases of back pain.6 The overwhelming majority of back-pain problems are thus nonspecific. One important feature of nonspecific back pain among workers is that a small proportion of cases (< 10%) accounts for most of the costs (> 70%).7,8,9,10,11,12,13,14 This fact has led investigators to focus on the early identification of patients who are at higher risk of disability, so that specialized interventions can be provided earlier, whereas other patients can be expected to recover with conservative care.9,15,16,17,18,19,20,21,22,23,24,25 Although this goal has become much sought-after in back-pain research, most available studies in this area have 3 methodological problems:
  • Potential predictors are often limited to administrative or clinical data, whereas it is clear that back pain is a multidimensional health problem.
  • The outcome variable is most often a 1-point dichotomous measure of return to work, time off work or duration of compensation, although some authors have warned against the use of first return to work as a measure of recovery. Baldwin and colleagues,26 for instance, point out that first return to work is frequently followed by recurrences of work absence.
  • Most published prediction rules developed for back pain have not been successfully validated on any additional samples of patients.
Our study aimed to build a simple predictive tool that could be used by primary care physicians to identify workers with nonspecific back pain who are at higher risk of long-term adverse occupational outcomes, and then to validate this tool on a fresh sample of subjects.

6.

Background

Although repeat induced abortion is common, data concerning characteristics of women undergoing this procedure are lacking. We conducted this study to identify the characteristics, including history of physical abuse by a male partner and history of sexual abuse, of women who present for repeat induced abortion.

Methods

We surveyed a consecutive series of women presenting for initial or repeat pregnancy termination to a regional provider of abortion services for a wide geographic area in southwestern Ontario between August 1998 and May 1999. Self-reported demographic characteristics, attitudes and practices regarding contraception, history of relationship violence, history of sexual abuse or coercion, and related variables were assessed as potential correlates of repeat induced abortion. We used χ2 tests for linear trend to examine characteristics of women undergoing a first, second, or third or subsequent abortion. We analyzed significant correlates of repeat abortion using stepwise multivariate multinomial logistic regression to identify factors uniquely associated with repeat abortion.
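
For the modelling step described above, a multinomial logistic regression with the first-abortion group as the reference outcome can be sketched as follows; the variables and data are simulated stand-ins, not the survey dataset:

```python
# Multinomial logistic regression sketch: outcome is number of prior abortions
# (0 = first, 1 = second, 2 = third or subsequent); exp(coef) gives adjusted odds
# ratios relative to the first-abortion reference group. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1100
X = pd.DataFrame({
    "age": rng.normal(27, 6, n),
    "oc_use_at_conception": rng.integers(0, 2, n),
    "history_physical_abuse": rng.integers(0, 2, n),
})
y = rng.integers(0, 3, n)

model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(model.params)   # one column per non-reference outcome category
print(odds_ratios)
```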

Results

Of the 1221 women approached, 1145 (93.8%) consented to participate. Data regarding first versus repeat abortion were available for 1127 women. A total of 68.2%, 23.1% and 8.7% of the women were seeking a first, second, or third or subsequent abortion respectively. Adjusted odds ratios for undergoing repeat versus a first abortion increased significantly with increased age (second abortion: 1.08, 95% confidence interval [CI] 1.04–1.09; third or subsequent abortion: 1.11, 95% CI 1.07–1.15), oral contraceptive use at the time of conception (second abortion: 2.17, 95% CI 1.52–3.09; third or subsequent abortion: 2.60, 95% CI 1.51–4.46), history of physical abuse by a male partner (second abortion: 2.04, 95% CI 1.39–3.01; third or subsequent abortion: 2.78, 95% CI 1.62–4.79), history of sexual abuse or violence (second abortion: 1.58, 95% CI 1.11–2.25; third or subsequent abortion: 2.53, 95% CI 1.50–4.28), history of sexually transmitted disease (second abortion: 1.50, 95% CI 0.98–2.29; third or subsequent abortion: 2.26, 95% CI 1.28–4.02) and being born outside Canada (second abortion: 1.83, 95% CI 1.19–2.79; third or subsequent abortion: 1.75, 95% CI 0.90–3.41).

Interpretation

Among other factors, a history of physical or sexual abuse was associated with repeat induced abortion. Presentation for repeat abortion may be an important indication to screen for a current or past history of relationship violence and sexual abuse.

Repeat pregnancy termination procedures are common in Canada (where 35.5% of all induced abortions are repeat procedures)1,2 and the United States (where 48% of induced abortions are repeat procedures).3,4,5,6,7 Rates of repeat induced abortion increased in both countries for an initial period after abortion was legalized, as a result of an increase in the number of women who had access to a first, and consequently to repeat, legal induced abortion.1,6,8,9 At present, rates of initial and repeat abortion in Canada and the United States appear to be stabilizing.2,7
Research concerning characteristics of women who undergo repeat induced abortions has been limited in scope. In a literature search we identified fewer than 20 studies in this area published over the past 3 decades. However, available research has shown several consistent findings. Women undergoing repeat abortions are more likely than those undergoing a first abortion to report using a method of contraception at the time of conception.7,8,10,11 In addition, women seeking repeat abortions report more challenging family situations than women seeking initial abortions: they are more likely to be separated, divorced, widowed or living in a common-law marriage, and to report difficulties with their male partner.1,5,8,11,12 They also are older,7,13 have more children1,5,13 and are more often non-white7,11,13 than women seeking initial abortions.
There is little evidence to suggest that women seeking repeat abortion are using pregnancy termination as a method of birth control.1,5,6,8,11 Evidence also does not indicate that women seeking repeat abortion are psychologically maladjusted.8,13
Our literature review showed that many studies of repeat abortion are 20 to 30 years old and are based on data collected when abortion was a newly legalized procedure.5,11 Furthermore, in studies of correlates of repeat abortion the investigators did not examine a range of personality characteristics that are known to influence women's reproductive health outcomes,14,15 including attitudes about sexuality,14 health locus of control,16,17 degree of social integration,16 attitudes about contraception18,19 and history of sexual or physical abuse.20,21,22 The objective of the current study was to identify characteristics of women who undergo repeat induced abortion.

7.

Background

Clinical trials have shown the benefits of statins after acute myocardial infarction (AMI). However, it is unclear whether different statins exert a similar effect in reducing the incidence of recurrent AMI and death when used in clinical practice.

Methods

We conducted a retrospective cohort study (1997–2002) to compare 5 statins using data from medical administrative databases in 3 provinces (Quebec, Ontario and British Columbia). We included patients aged 65 years and over who were discharged alive after their first AMI-related hospital stay and who began statin treatment within 90 days after discharge. The primary end point was the combined outcome of recurrent AMI or death from any cause. The secondary end point was death from any cause. Adjusted hazard ratios (HRs) for each statin compared with atorvastatin as the reference drug were estimated using Cox proportional hazards regression analysis.
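
The comparison above hinges on coding the statin variable with atorvastatin as the reference category, so that each hazard ratio is interpreted relative to atorvastatin. A hedged illustration of that coding in a Cox model (simulated data, not the administrative databases used in the study):

```python
# Reference-category coding of a multi-level drug variable for a Cox model:
# atorvastatin is the omitted dummy, so exp(coef) for each remaining statin column
# is its hazard ratio versus atorvastatin. Data are simulated for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
statins = ["atorvastatin", "pravastatin", "simvastatin", "lovastatin", "fluvastatin"]
statin = pd.Categorical(rng.choice(statins, n), categories=statins)

df = pd.get_dummies(pd.DataFrame({"statin": statin}), drop_first=True).astype(float)
df["age"] = rng.normal(75, 6, n)
df["followup_years"] = rng.exponential(3, n)
df["ami_or_death"] = rng.integers(0, 2, n)       # 1 = recurrent AMI or death

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="ami_or_death")
cph.print_summary()   # exp(coef) for each statin column is its HR vs. atorvastatin
```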

Results

A total of 18 637 patients were prescribed atorvastatin (n = 6420), pravastatin (n = 4480), simvastatin (n = 5518), lovastatin (n = 1736) or fluvastatin (n = 483). Users of different statins showed similar baseline characteristics and patterns of statin use. The adjusted HRs (and 95% confidence intervals) for the combined outcome of AMI or death showed that each statin had similar effects when compared with atorvastatin: pravastatin 1.00 (0.90–1.11), simvastatin 1.01 (0.91– 1.12), lovastatin 1.09 (0.95–1.24) and fluvastatin 1.01 (0.80– 1.27). The results did not change when death alone was the end point, nor did they change after adjustment for initial daily dose or after censoring of patients who switched or stopped the initial statin treatment.

Interpretation

Our results suggest that, under current usage, statins are equally effective for secondary prevention in elderly patients after AMI.

Randomized controlled trials (RCTs) have shown that the use of statins after acute myocardial infarction (AMI) is effective in reducing the incidence of both fatal and nonfatal cardiovascular events.1,2,3,4,5,6,7,8 Although these trials have significantly influenced post-AMI treatment,9,10,11,12 it remains unclear whether all statins are equally effective in preventing recurrent AMI and death. Drugs in the same class are generally thought to be therapeutically equivalent because of similar mechanisms of action (class effect).13,14,15 However, in the absence of comparative data, this assumption requires evaluation. Statins differ in multiple characteristics, including liver and renal metabolism, half-life, effect on other serum lipid components, bioavailability and potency.16,17,18,19 These differences could potentially influence the extent to which the drugs are beneficial. Despite limited evidence in support of a differential benefit of statins for secondary prevention, preferential prescribing already occurs in practice and cannot be fully explained by the existing evidence or guidelines.20 Comparative data of statins are thus required to inform health care decision-making.
A number of RCTs have directly compared statins using surrogate end points, such as lipid reduction,21,22,23 markers of hemostasis and inflammation24,25,26 or reduction in number of atherosclerotic plaques.27 However, the extent to which these results can be extrapolated to clinically relevant outcomes remains to be established. The newly released PROVE IT–TIMI 22 trial28 was the first trial to compare 2 statins for cardiovascular prevention. The study showed that atorvastatin used at a maximal dose of 80 mg (intensive therapy) was better than pravastatin at a dose of 40 mg (standard therapy) in decreasing the incidence of cardiovascular events and procedures. The study was, however, conducted to show the benefit associated with increased treatment intensity. It did not compare the drugs by milligram-equivalent doses or by cholesterol-lowering equivalent doses. Moreover, no difference was detected when death alone or the combined outcome of death or AMI was evaluated. Other than the PROVE IT–TIMI 22 trial, few data are currently available from RCTs that compare statins for cardiovascular prevention.29
We conducted a population-based study to examine the relative effectiveness of different statins for long-term secondary prevention after AMI. We used retrospective cohorts of elderly patients prescribed statins after AMI in 3 provinces. Five statins were studied: atorvastatin, pravastatin, simvastatin, lovastatin and fluvastatin. The newest statin, rosuvastatin, was not available during the study period and was not considered in this study.

8.

Background

Vitamin D is required for normal bone growth and mineralization. We sought to determine whether vitamin D deficiency at birth is associated with bone mineral content (BMC) of Canadian infants.

Methods

We measured plasma 25-hydroxyvitamin D [25(OH)D] as an indicator of vitamin D status in 50 healthy mothers and their newborn term infants. In the infants, anthropometry and lumbar, femur and whole-body BMC were measured within 15 days of delivery. Mothers completed a 24-hour recall and 3-day food and supplement record. We categorized the vitamin D status of mothers and infants as deficient or adequate and then compared infant bone mass in these groups using nonpaired t tests. Maternal and infant variables known to be related to bone mass were tested for their relation to BMC using backward stepwise regression analysis.
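
A minimal sketch of backward stepwise elimination of the kind described above (drop the least significant predictor until all remaining p-values clear a threshold) is given below; the variable names, threshold and data are assumptions for illustration, not the study's analysis:

```python
# Backward elimination on an ordinary least-squares model: repeatedly refit and drop
# the predictor with the largest p-value until every remaining p-value is below the
# threshold. Variables and data are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(y, X, threshold=0.05):
    X = X.copy()
    while True:
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        pvalues = fit.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] <= threshold or X.shape[1] == 1:
            return fit
        X = X.drop(columns=worst)

rng = np.random.default_rng(5)
n = 50
X = pd.DataFrame({
    "gestational_age_wk": rng.normal(39, 1.2, n),
    "birth_weight_g": rng.normal(3400, 450, n),
    "infant_25OHD_nmol_L": rng.normal(45, 20, n),
    "maternal_age": rng.normal(31, 5, n),
})
whole_body_bmc = 60 + 0.01 * X["birth_weight_g"] + rng.normal(0, 5, n)
print(backward_eliminate(whole_body_bmc, X).summary())
```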

Results

Twenty-three (46%) of the mothers and 18 (36%) of the infants had a plasma 25(OH)D concentration consistent with deficiency. Infants who were vitamin D deficient were larger at birth and follow-up. Absolute lumbar spine, femur and whole-body BMC were not different between infants with adequate vitamin D and those who were deficient, despite larger body size in the latter group. In the regression analysis, higher whole-body BMC was associated with greater gestational age and weight at birth as well as higher infant plasma 25(OH)D.

Conclusion

A high rate of vitamin D deficiency was observed among women and their newborn infants. Among infants, vitamin D deficiency was associated with greater weight and length but lower bone mass relative to body weight. Whether a return to normal vitamin D status, achieved through supplements or fortified infant formula, can reset the trajectory for acquisition of BMC requires investigation.

In northern countries, endogenous synthesis of vitamin D is thought to be limited to the months of April through September.1 During the winter months, dietary or supplemental vitamin D intake at values similar to the recommended intake of 200 IU/day (5 μg/day) is not enough to prevent vitamin D deficiency in young women.2 Vitamin D deficiency is well documented among Canadian women3,4,5,6,7 and young children4,8,9,10,11 and has been reported at levels as high as 76% of women and 43% of children (3–24 months) in northern Manitoba4 and 48.4%–88.6% of Aboriginal women and 15.1%–63.5% of non-Aboriginal women in the Inuvik zone of the former Northwest Territories.3 Vitamin D dependent rickets in children and osteomalacia in adults are the most commonly reported features of deficiency.12 We sought to determine whether maternal or infant vitamin D deficiency at birth is associated with BMC of Canadian infants.

9.

Background

Ethnic disparities in access to health care and health outcomes are well documented. It is unclear whether similar differences exist between Aboriginal and non-Aboriginal people with chronic kidney disease in Canada. We determined whether access to care differed between status Aboriginal people (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease.

Methods

We identified 106 511 non-Aboriginal and 1182 Aboriginal patients with chronic kidney disease (estimated glomerular filtration rate less than 60 mL/min/1.73 m2). We compared outcomes, including hospital admissions, that may have been preventable with appropriate outpatient care (ambulatory-care–sensitive conditions) as well as use of specialist services, including visits to nephrologists and general internists.

Results

Aboriginal people were almost twice as likely as non-Aboriginal people to be admitted to hospital for an ambulatory-care–sensitive condition (rate ratio 1.77, 95% confidence interval [CI] 1.46–2.13). Aboriginal people with severe chronic kidney disease (estimated glomerular filtration rate < 30 mL/min/1.73 m2) were 43% less likely than non-Aboriginal people with severe chronic kidney disease to visit a nephrologist (hazard ratio 0.57, 95% CI 0.39–0.83). There was no difference in the likelihood of visiting a general internist (hazard ratio 1.00, 95% CI 0.83–1.21).
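
The rate ratio quoted above can be checked with the usual log-normal approximation for its confidence interval. The event counts and person-time below are invented purely to show the arithmetic:

```python
# Rate ratio with a log-normal 95% CI, the statistic reported above
# (rate ratio 1.77, 95% CI 1.46-2.13). All counts and person-time are hypothetical.
import math

def rate_ratio_ci(events1, persontime1, events0, persontime0, z=1.96):
    rate_ratio = (events1 / persontime1) / (events0 / persontime0)
    se_log = math.sqrt(1 / events1 + 1 / events0)
    return rate_ratio, (rate_ratio * math.exp(-z * se_log),
                        rate_ratio * math.exp(z * se_log))

print(rate_ratio_ci(events1=120, persontime1=3500, events0=2100, persontime0=108000))
```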

Interpretation

Increased rates of hospital admissions for ambulatory-care–sensitive conditions and a reduced likelihood of nephrology visits suggest potential inequities in care among status Aboriginal people with chronic kidney disease. The extent to which this may contribute to the higher rate of kidney failure in this population requires further exploration.Ethnic disparities in access to health care are well documented;1,2 however, the majority of studies include black and Hispanic populations in the United States. The poorer health status and increased mortality among Aboriginal populations than among non-Aboriginal populations,3,4 particularly among those with chronic medical conditions,5,6 raise the question as to whether there is differential access to health care and management of chronic medical conditions in this population.The prevalence of end-stage renal disease, which commonly results from chronic kidney disease, is about twice as common among Aboriginal people as it is among non-Aboriginal people.7,8 Given that the progression of chronic kidney disease can be delayed by appropriate therapeutic interventions9,10 and that delayed referral to specialist care is associated with increased mortality,11,12 issues such as access to health care may be particularly important in the Aboriginal population. Although previous studies have suggested that there is decreased access to primary and specialist care in the Aboriginal population,13–15 these studies are limited by the inclusion of patients from a single geographically isolated region,13 the use of survey data,14 and the inability to differentiate between different types of specialists and reasons for the visit.15In addition to physician visits, admission to hospital for ambulatory-care–sensitive conditions (conditions that, if managed effectively in an outpatient setting, do not typically result in admission to hospital) has been used as a measure of access to appropriate outpatient care.16,17 Thus, admission to hospital for an ambulatory-care–sensitive condition reflects a potentially preventable complication resulting from inadequate access to care. Our objective was to determine whether access to health care differs between status Aboriginal (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease. We assess differences in care by 2 measures: admission to hospital for an ambulatory-care–sensitive condition related to chronic kidney disease; and receipt of nephrology care for severe chronic kidney disease as recommended by clinical practice guidelines.18  相似文献   

10.
11.

Background

The number of births attended by individual family physicians who practice intrapartum care varies. We wanted to determine if the practice–volume relations that have been shown in other fields of medical practice also exist in maternity care practice by family doctors.

Methods

For the period April 1997 to August 1998, we analyzed all singleton births at a major maternity teaching hospital for which the family physician was the responsible physician. Physicians were grouped into 3 categories on the basis of the number of births they attended each year: fewer than 12, 12 to 24, and 25 or more. Physicians with a low volume of deliveries (72 physicians, 549 births), those with a medium volume of deliveries (34 physicians, 871 births) and those with a high volume of deliveries (46 physicians, 3024 births) were compared in terms of maternal and newborn outcomes. The main outcome measures were maternal morbidity, 5-minute Apgar score and admission of the baby to the neonatal intensive care unit or special care unit. Secondary outcomes were obstetric procedures and consultation patterns.

Results

There was no difference among the 3 volume cohorts in terms of rates of maternal complications of delivery, 5-minute Apgar scores of less than 7 or admissions to the neonatal intensive care unit or the special care unit, either before or after adjustment for parity, pregnancy-induced hypertension, diabetes, ethnicity, lone parent status, maternal age, gestational age, newborn birth weight and newborn head circumference at birth. High- and medium-volume family physicians consulted with obstetricians less often than low-volume family physicians (adjusted odds ratio [OR] 0.586 [95% confidence interval (CI) 0.479–0.718] and 0.739 [95% CI 0.583–0.935] respectively). High- and medium-volume family physicians transferred the delivery to an obstetrician less often than low-volume family physicians (adjusted OR 0.668 [95% CI 0.542–0.823] and 0.776 [95% CI 0.607–0.992] respectively). Inductions were performed by medium-volume family physicians more often than by low-volume family physicians (adjusted OR 1.437 [95% CI 1.036–1.992]).
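
The adjusted odds ratios above are typically obtained by exponentiating logistic regression coefficients and their confidence limits. A small sketch of that step on simulated data (covariates and outcome are placeholders, not the hospital dataset):

```python
# Adjusted odds ratios with 95% CIs from a logistic regression: exponentiate the
# coefficients and their confidence limits. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 4000
X = pd.DataFrame({
    "high_volume": rng.integers(0, 2, n),    # 1 = physician attends >= 25 births/year
    "parity": rng.integers(0, 4, n),
    "maternal_age": rng.normal(30, 5, n),
})
consulted_obstetrician = rng.integers(0, 2, n)

fit = sm.Logit(consulted_obstetrician, sm.add_constant(X)).fit(disp=False)
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["adjusted OR", "2.5%", "97.5%"]
print(or_table)
```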

Interpretation

Family physicians'' delivery volumes were not associated with adverse outcomes for mothers or newborns. Low-volume family physicians referred patients and transferred deliveries to obstetricians more frequently than high- or medium-volume family physicians. Further research is needed to validate these findings in smaller facilities, both urban and rural.More than 20 years ago, Luft and associates1 conducted one of the earliest volume–outcome studies. Since then, many studies addressing the relation between volume of procedures and patient outcomes have been published.2,3 In some of these studies, either the hospital size or the physician procedural volume was used as a surrogate for physician expertise. Among studies analyzing hospital volumes and outcomes, better outcomes have been associated with higher patient volumes in some instances4,5,6,7 but not others.3,8,9 Some studies of individual provider volume have shown a positive relation between volume and outcomes,10,11 whereas others have shown no relation or inconsistent results.3,12 Finally, a few studies analyzing both hospital volume and provider volume have reported a positive volume–outcome relation.13,14Criticism levelled at the methods used in volume–outcome studies have addressed the lack of adjustment for case mix, different cutoff points for volume categories and retrospective design.3 Other factors that have an effect on patient outcomes but that have not been included in previous volume analyses include health maintenance organization status, physician certification and years since graduation, and patient socioeconomic status, age and ethnicity. Furthermore, most of the studies on volume have covered surgical or oncology specialities.The few studies that have been done on volume and outcome in maternity care have shown variable effects. Rural health care is often associated with lower volumes of obstetric procedures. However, no differences in maternal or newborn outcomes have been shown in some comparisons of births in urban and rural locations.15,16,17,18 Other studies have shown poorer maternal and newborn outcomes in low-volume hospitals, neonatal intensive care units (NICUs) and rural locations.19,20,21,22 Conversely, higher volume (hospitals with more than 1000 deliveries per year) has been associated with more maternal lacerations or complications.23When the health care provider has been the unit of analysis, a relation between volume and maternal or newborn outcome has been demonstrated in at least one study24 but not in others.25,26 Low volume has been defined as 20 to 24 deliveries per year.24,26 Hass and colleagues24 reported an adjusted odds ratio (OR) of 1.4 for low birth weight for infants delivered by low-volume non-board-certified physicians relative to high-volume non-board-certified physicians; the adjusted OR was 1.56 for low-volume board-certified physicians relative to high-volume board-certified physicians (98.7% of whom were obstetricians).Possible explanations for the differences among studies include differences in health care delivery systems, insurance coverage, experience and training of providers, maternal risk factors, triage or transfer of high-risk cases, choice of outcome measures, and changes over time in access to care, quality assurance and standard of living. 
Relations have been reported between maternal or newborn outcomes and smoking, maternal history of low birth weight (for previous pregnancies), pregnancy–induced hypertension, diabetes, prepregnancy weight, gestational weight gain, maternal height and age, multiple gestation, previous vaginal birth after cesarean section, history of previous delivery problems, parity, large-for-date fetus, ethnicity and fetal sex.25,27,28,29 Few studies of the relation between volume of births and obstetric outcome have been able to control for these potentially confounding variables and adjust for maternal risk factors.Our database of detailed accounts of births in one hospital setting allowed us to examine this issue more rigorously. We posed 2 research questions: Is there a relation between the volume of deliveries attended by individual family physicians and maternal and newborn outcomes? If there are differences in outcomes, are they related to different physician practice styles and consultation patterns?  相似文献   

12.
13.
The structural precursor polyprotein, Gag, encoded by all retroviruses, including the human immunodeficiency virus type 1 (HIV-1), is necessary and sufficient for the assembly and release of particles that morphologically resemble immature virus particles. Previous studies have shown that the addition of Ca2+ to cells expressing Gag enhances virus particle production. However, no specific cellular factor has been implicated as mediator of Ca2+ provision. The inositol (1,4,5)-triphosphate receptor (IP3R) gates intracellular Ca2+ stores. Following activation by binding of its ligand, IP3, it releases Ca2+ from the stores. We demonstrate here that IP3R function is required for efficient release of HIV-1 virus particles. Depletion of IP3R by small interfering RNA, sequestration of its activating ligand by expression of a mutated fragment of IP3R that binds IP3 with very high affinity, or blocking formation of the ligand by inhibiting phospholipase C-mediated hydrolysis of the precursor, phosphatidylinositol-4,5-biphosphate, inhibited Gag particle release. These disruptions, as well as interference with ligand-receptor interaction using antibody targeted to the ligand-binding site on IP3R, blocked plasma membrane accumulation of Gag. These findings identify IP3R as a new determinant in HIV-1 trafficking during Gag assembly and introduce IP3R-regulated Ca2+ signaling as a potential novel cofactor in viral particle release.Assembly of the human immunodeficiency virus (HIV) is determined by a single gene that encodes a structural polyprotein precursor, Gag (71), and may occur at the plasma membrane or within late endosomes/multivesicular bodies (LE/MVB) (7, 48, 58; reviewed in reference 9). Irrespective of where assembly occurs, the assembled particle is released from the plasma membrane of the host cell. Release of Gag as virus-like particles (VLPs) requires the C-terminal p6 region of the protein (18, 19), which contains binding sites for Alix (60, 68) and Tsg101 (17, 37, 38, 41, 67, 68). Efficient release of virus particles requires Gag interaction with Alix and Tsg101. Alix and Tsg101 normally function to sort cargo proteins to LE/MVB for lysosomal degradation (5, 15, 29, 52). Previous studies have shown that addition of ionomycin, a calcium ionophore, and CaCl2 to the culture medium of cells expressing Gag or virus enhances particle production (20, 48). This is an intriguing observation, given the well-documented positive role for Ca2+ in exocytotic events (33, 56). It is unclear which cellular factors might regulate calcium availability for the virus release process.Local and global elevations in the cytosolic Ca2+ level are achieved by ion release from intracellular stores and by influx from the extracellular milieu (reviewed in reference 3). The major intracellular Ca2+ store is the endoplasmic reticulum (ER); stores also exist in MVB and the nucleus. Ca2+ release is regulated by transmembrane channels on the Ca2+ store membrane that are formed by tetramers of inositol (1,4,5)-triphosphate receptor (IP3R) proteins (reviewed in references 39, 47, and 66). The bulk of IP3R channels mediate release of Ca2+ from the ER, the emptying of which signals Ca2+ influx (39, 51, 57, 66). The few IP3R channels on the plasma membrane have been shown to be functional as well (13). 
Through proteomic analysis, we identified IP3R as a cellular protein that was enriched in a previously described membrane fraction (18) which, in subsequent membrane floatation analyses, reproducibly cofractionated with Gag and was enriched in the membrane fraction only when Gag was expressed. That IP3R is a major regulator of cytosolic calcium concentration (Ca2+) is well documented (39, 47, 66). An IP3R-mediated rise in cytosolic Ca2+ requires activation of the receptor by a ligand, inositol (1,4,5)-triphosphate (IP3), which is produced when phospholipase C (PLC) hydrolyzes phosphatidylinositol-4,5-bisphosphate [PI(4,5)P2] at the plasma membrane (16, 25, 54). Paradoxically, PI(4,5)P2 binds to the matrix (MA) domain in Gag (8, 55, 59), and the interaction targets Gag to PI(4,5)P2-enriched regions on the plasma membrane; these events are required for virus release (45). We hypothesized that PI(4,5)P2 binding might serve to target Gag to plasma membrane sites of localized Ca2+ elevation resulting from PLC-mediated PI(4,5)P2 hydrolysis and IP3R activation. This idea prompted us to investigate the role of IP3R in Gag function.Here, we show that HIV-1 Gag requires steady-state levels of IP3R for its efficient release. Three isoforms of IP3R, types 1, 2, and 3, are encoded in three independent genes (39, 47). Types 1 and 3 are expressed in a variety of cells and have been studied most extensively (22, 39, 47, 73). Depletion of the major isoforms in HeLa or COS-1 cells by small interfering RNA (siRNA) inhibited viral particle release. Moreover, we show that sequestration of the IP3R activating ligand or blocking ligand formation also inhibited Gag particle release. The above perturbations, as well as interfering with receptor expression or activation, led to reduced Gag accumulation at the cell periphery. The results support the conclusion that IP3R activation is required for efficient HIV-1 viral particle release.  相似文献   

14.
Highly active antiretroviral therapy (HAART) can reduce human immunodeficiency virus type 1 (HIV-1) viremia to clinically undetectable levels. Despite this dramatic reduction, some virus is present in the blood. In addition, a long-lived latent reservoir for HIV-1 exists in resting memory CD4+ T cells. This reservoir is believed to be a source of the residual viremia and is the focus of eradication efforts. Here, we use two measures of population structure—analysis of molecular variance and the Slatkin-Maddison test—to demonstrate that the residual viremia is genetically distinct from proviruses in resting CD4+ T cells but that proviruses in resting and activated CD4+ T cells belong to a single population. Residual viremia is genetically distinct from proviruses in activated CD4+ T cells, monocytes, and unfractionated peripheral blood mononuclear cells. The finding that some of the residual viremia in patients on HAART stems from an unidentified cellular source other than CD4+ T cells has implications for eradication efforts.Successful treatment of human immunodeficiency virus type 1 (HIV-1) infection with highly active antiretroviral therapy (HAART) reduces free virus in the blood to levels undetectable by the most sensitive clinical assays (18, 36). However, HIV-1 persists as a latent provirus in resting, memory CD4+ T lymphocytes (6, 9, 12, 16, 48) and perhaps in other cell types (45, 52). The latent reservoir in resting CD4+ T cells represents a barrier to eradication because of its long half-life (15, 37, 40-42) and because specifically targeting and purging this reservoir is inherently difficult (8, 25, 27).In addition to the latent reservoir in resting CD4+ T cells, patients on HAART also have a low amount of free virus in the plasma, typically at levels below the limit of detection of current clinical assays (13, 19, 35, 37). Because free virus has a short half-life (20, 47), residual viremia is indicative of active virus production. The continued presence of free virus in the plasma of patients on HAART indicates either ongoing replication (10, 13, 17, 19), release of virus after reactivation of latently infected CD4+ T cells (22, 24, 31, 50), release from other cellular reservoirs (7, 45, 52), or some combination of these mechanisms. Finding the cellular source of residual viremia is important because it will identify the cells that are still capable of producing virus in patients on HAART, cells that must be targeted in any eradication effort.Detailed analysis of this residual viremia has been hindered by technical challenges involved in working with very low concentrations of virus (13, 19, 35). Recently, new insights into the nature of residual viremia have been obtained through intensive patient sampling and enhanced ultrasensitive sequencing methods (1). In a subset of patients, most of the residual viremia consisted of a small number of viral clones (1, 46) produced by a cell type severely underrepresented in the peripheral circulation (1). These unique viral clones, termed predominant plasma clones (PPCs), persist unchanged for extended periods of time (1). The persistence of PPCs indicates that in some patients there may be another major cellular source of residual viremia (1). However, PPCs were observed in a small group of patients who started HAART with very low CD4 counts, and it has been unclear whether the PPC phenomenon extends beyond this group of patients. 
More importantly, it has been unclear whether the residual viremia generally consists of distinct virus populations produced by different cell types.Since the HIV-1 infection in most patients is initially established by a single viral clone (23, 51), with subsequent diversification (29), the presence of genetically distinct populations of virus in a single individual can reflect entry of viruses into compartments where replication occurs with limited subsequent intercompartmental mixing (32). Sophisticated genetic tests can detect such population structure in a sample of viral sequences (4, 39, 49). Using two complementary tests of population structure (14, 43), we analyzed viral sequences from multiple sources within individual patients in order to determine whether a source other than circulating resting CD4+ T cells contributes to residual viremia and viral persistence. Our results have important clinical implications for understanding HIV-1 persistence and treatment failure and for improving eradication strategies, which are currently focusing only on the latent CD4+ T-cell reservoir.  相似文献   

15.
An extracellular β-fructofuranosidase from the yeast Xanthophyllomyces dendrorhous was characterized biochemically, molecularly, and phylogenetically. This enzyme is a glycoprotein with an estimated molecular mass of 160 kDa, of which the N-linked carbohydrate accounts for 60% of the total mass. It displays optimum activity at pH 5.0 to 6.5, and its thermophilicity (with maximum activity at 65 to 70°C) and thermostability (with a T50 in the range 66 to 71°C) is higher than that exhibited by most yeast invertases. The enzyme was able to hydrolyze fructosyl-β-(2→1)-linked carbohydrates such as sucrose, 1-kestose, or nystose, although its catalytic efficiency, defined by the kcat/Km ratio, indicates that it hydrolyzes sucrose approximately 4.2 times more efficiently than 1-kestose. Unlike other microbial β-fructofuranosidases, the enzyme from X. dendrorhous produces neokestose as the main transglycosylation product, a potentially novel bifidogenic trisaccharide. Using a 41% (wt/vol) sucrose solution, the maximum fructooligosaccharide concentration reached was 65.9 g liter−1. In addition, we isolated and sequenced the X. dendrorhous β-fructofuranosidase gene (Xd-INV), showing that it encodes a putative mature polypeptide of 595 amino acids and that it shares significant identity with other fungal, yeast, and plant β-fructofuranosidases, all members of family 32 of the glycosyl-hydrolases. We demonstrate that the Xd-INV could functionally complement the suc2 mutation of Saccharomyces cerevisiae and, finally, a structural model of the new enzyme based on the homologous invertase from Arabidopsis thaliana has also been obtained.The basidiomycetous yeast Xanthophyllomyces dendrorhous (formerly Phaffia rhodozyma) produces astaxanthin (3-3′-dihydroxy-β,β-carotene-4,4 dione [17, 25]). Different industries have displayed great interest in this carotenoid pigment due to its attractive red-orange color and antioxidant properties, which has intensified the molecular and genetic study of this yeast. As a result, several genes involved in the astaxanthin biosynthetic pathway have been cloned and/or characterized, as well as some other genes such as those encoding actin (60), glyceraldehyde-3-phosphate dehydrogenase (56), endo-β-1,3-glucanase, and aspartic protease (4). In terms of the use of carbon sources, a β-amylase (9), and an α-glucosidase (33) with glucosyltransferase activity (12), as well as a yeast cell-associated invertase (41), have also been reported.Invertases or β-fructofuranosidases (EC 3.2.1.26) catalyze the release of β-fructose from the nonreducing termini of various β-d-fructofuranoside substrates. Yeast β-fructofuranosidases have been widely studied, including that of Saccharomyces cerevisiae (11, 14, 45, 46), Schizosaccharomyces pombe (36), Pichia anomala (40, 49), Candida utilis (5, 8), or Schwanniomyces occidentalis (2). They generally exhibit strong similarities where sequences are available, and they have been classified within family 32 of the glycosyl-hydrolases (GH) on the basis of their amino acid sequences. The catalytic mechanism proposed for the S. cerevisiae enzyme implies that an aspartate close to the N terminus (Asp-23) acts as a nucleophile, and a glutamate (Glu-204) acts as the acid/base catalyst (46). In addition, the three-dimensional structures of some enzymes in this family have been resolved, such as that of an exoinulinase from Aspergillus niger (var. 
awamori; 37) and the invertase from Arabidopsis thaliana (55).As well as hydrolyzing sucrose, β-fructofuranosidases from microorganisms may also catalyze the synthesis of short-chain fructooligosaccharides (FOS), in which one to three fructosyl moieties are linked to the sucrose skeleton by different glycosidic bonds depending on the source of the enzyme (3, 52). FOS are one of the most promising ingredients for functional foods since they act as prebiotics (44), and they exert a beneficial effect on human health, participating in the prevention of cardiovascular diseases, colon cancer, or osteoporosis (28). Currently, Aspergillus fructosyltransferase is the main industrial producer of FOS (15, 52), producing a mixture of FOS with an inulin-type structure, containing β-(2→1)-linked fructose-oligomers (1F-FOS: 1-kestose, nystose, or 1F-fructofuranosylnystose). However, there is certain interest in the development of novel molecules that may have better prebiotic and physiological properties. In this context, β-(2→6)-linked FOS, where this link exits between two fructose units (6F-FOS: 6-kestose) or between fructose and the glucosyl moiety (6G-FOS: neokestose, neonystose, and neofructofuranosylnystose), may have enhanced prebiotic properties compared to commercial FOS (29, 34, 54). The enzymatic synthesis of 6-kestose and other related β-(2→6)-linked fructosyl oligomers has already been reported in yeasts such as S. cerevisiae (11) or Schwanniomyces occidentalis (2) and in fungi such as Thermoascus aurantiacus (26) or Sporotrichum thermophile (27). However, the production of FOS included in the 6G-FOS series has not been widely reported in microorganisms, probably because they are not generally produced (2, 15) or because they represent only a minor biosynthetic product (e.g., with baker''s yeast invertase) (11). Most research into neo-FOS production has been carried out with Penicillium citrinum cells (19, 31, 32, 39). In this context, neokestose is the main transglycosylation product accumulated by whole X. dendrorhous cells from sucrose (30), although the enzyme responsible for this reaction remained uncharacterized.Here, we describe the molecular, phylogenetic, and biochemical characterization of an extracellular β-fructofuranosidase from X. dendrorhous. Kinetic studies of its hydrolytic activity were performed using different substrates, and we investigated its fructosyltransferase capacity. The functionality of the gene analyzed was verified through its heterologous expression, and a structural model of this enzyme based on the homologous invertase from A. thaliana has also been obtained.  相似文献   
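
The catalytic-efficiency comparison above (kcat/Km for sucrose versus 1-kestose) can be reproduced in outline by fitting Michaelis–Menten curves and dividing the resulting efficiencies. The rate data, enzyme concentration and constants below are invented for illustration only:

```python
# Fit Michaelis-Menten kinetics for two substrates and compare catalytic efficiency
# (kcat/Km), the ratio used above. All rates, concentrations and units are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

def kcat_over_km(substrate_mM, rate, enzyme_uM):
    (vmax, km), _ = curve_fit(michaelis_menten, substrate_mM, rate, p0=[rate.max(), 10.0])
    kcat = vmax / enzyme_uM            # turnover number, in the units implied by the rates
    return kcat / km                   # catalytic efficiency

s = np.array([1, 2, 5, 10, 20, 50, 100, 200], dtype=float)   # substrate, mM
rate_sucrose = michaelis_menten(s, 80.0, 12.0) + np.random.default_rng(2).normal(0, 1, s.size)
rate_kestose = michaelis_menten(s, 35.0, 22.0) + np.random.default_rng(4).normal(0, 1, s.size)

eff_sucrose = kcat_over_km(s, rate_sucrose, enzyme_uM=0.5)
eff_kestose = kcat_over_km(s, rate_kestose, enzyme_uM=0.5)
print(f"relative efficiency (sucrose / 1-kestose): {eff_sucrose / eff_kestose:.1f}")
```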

16.
The immune correlates of human/simian immunodeficiency virus control remain elusive. While CD8+ T lymphocytes likely play a major role in reducing peak viremia and maintaining viral control in the chronic phase, the relative antiviral efficacy of individual virus-specific effector populations is unknown. Conventional assays measure cytokine secretion of virus-specific CD8+ T cells after cognate peptide recognition. Cytokine secretion, however, does not always directly translate into antiviral efficacy. Recently developed suppression assays assess the efficiency of virus-specific CD8+ T cells to control viral replication, but these assays often use cell lines or clones. We therefore designed a novel virus production assay to test the ability of freshly ex vivo-sorted simian immunodeficiency virus (SIV)-specific CD8+ T cells to suppress viral replication from SIVmac239-infected CD4+ T cells. Using this assay, we established an antiviral hierarchy when we compared CD8+ T cells specific for 12 different epitopes. Antiviral efficacy was unrelated to the disease status of each animal, the protein from which the tested epitopes were derived, or the major histocompatibility complex (MHC) class I restriction of the tested epitopes. Additionally, there was no correlation with the ability to suppress viral replication and epitope avidity, epitope affinity, CD8+ T-cell cytokine multifunctionality, the percentage of central and effector memory cell populations, or the expression of PD-1. The ability of virus-specific CD8+ T cells to suppress viral replication therefore cannot be determined using conventional assays. Our results suggest that a single definitive correlate of immune control may not exist; rather, a successful CD8+ T-cell response may be comprised of several factors.CD8+ T cells may play a critical role in blunting peak viremia and controlling human immunodeficiency virus (HIV) and simian immunodeficiency virus (SIV) replication. The transient depletion of CD8+ cells in SIV-infected macaques results in increased viral replication (26, 31, 51, 70). The emergence of virus-specific CD8+ T cells coincides with the reduction of peak viremia (12, 39, 42, 63), and CD8+ T-cell pressure selects for escape mutants (6, 9, 13, 28, 29, 38, 60, 61, 85). Furthermore, particular major histocompatibility complex (MHC) class I alleles are overrepresented in SIV- and HIV-infected elite controllers (15, 29, 33, 34, 46, 56, 88).Because it has been difficult to induce broadly neutralizing antibodies (Abs), the AIDS vaccine field is currently focused on developing a vaccine designed to elicit HIV-specific CD8+ T cells (8, 52, 53, 82). Investigators have tried to define the immune correlates of HIV control. Neither the magnitude nor the breadth of epitopes recognized by virus-specific CD8+ T-cell responses correlates with the control of viral replication (1). The quality of the immune response may, however, contribute to the antiviral efficacy of the effector cells. It has been suggested that the number of cytokines that virus-specific CD8+ T cells secrete may correlate with viral control, since HIV-infected nonprogressors appear to maintain CD8+ T cells that secrete several cytokines, compared to HIV-infected progressors (11, 27). An increased amount of perforin secretion may also be related to the proliferation of HIV-specific CD8+ T cells in HIV-infected nonprogressors (55). 
While those studies offer insight into the different immune systems of progressors and nonprogressors, they did not address the mechanism of viral control. Previously, we found no association between the ability of SIV-specific CD8+ T-cell clones to suppress viral replication in vitro and their ability to secrete gamma interferon (IFN-γ), tumor necrosis factor alpha (TNF-α), or interleukin-2 (IL-2) (18). Evidence suggests that some HIV/SIV proteins may be better vaccine targets than others. CD8+ T cells recognize epitopes derived from Gag as early as 2 h postinfection, whereas CD8+ T cells specific for epitopes in Env recognize infected cells only at 18 h postinfection (68). Additionally, previously reported studies of HIV-infected individuals showed that an increased breadth of Gag-specific responses was associated with lower viral loads (35, 59, 65, 66). CD8+ T-cell responses specific for Env, Rev, Tat, Vif, Vpr, Vpu, and Nef were associated with higher viral loads, with increased breadth of Env in particular being significantly associated with a higher chronic-phase viral set point. None of the many sophisticated methods employed for analyzing the characteristics of HIV- or SIV-specific immune responses clearly demarcates the critical qualities of an effective antiviral response. In an attempt to address these questions, we developed a new assay to measure the antiviral efficacy of individual SIV-specific CD8+ T-cell responses sorted directly from fresh peripheral blood mononuclear cells (PBMC). Using MHC class I tetramers specific for the epitope of interest, we sorted freshly isolated virus-specific CD8+ T cells and determined their ability to suppress virus production from SIV-infected CD4+ T cells. We then looked for a common characteristic of efficacious epitope-specific CD8+ T cells using traditional methods.

17.

Background

Whether to continue oral anticoagulant therapy beyond 6 months after an “unprovoked” venous thromboembolism is controversial. We sought to determine clinical predictors that identify patients at low risk of recurrent venous thromboembolism who could safely discontinue oral anticoagulants.

Methods

In a multicentre prospective cohort study, 646 participants with a first, unprovoked major venous thromboembolism were enrolled over a 4-year period. Of these, 600 participants had completed a mean of 18 months of follow-up by September 2006. We collected data for 69 potential predictors of recurrent venous thromboembolism while patients were taking oral anticoagulation therapy (5–7 months after initiation). During follow-up after discontinuing oral anticoagulation therapy, all episodes of suspected recurrent venous thromboembolism were independently adjudicated. We performed a multivariable analysis of the predictor variables that were associated with recurrence (p < 0.10) and had high interobserver reliability to derive a clinical decision rule.

Results

We identified 91 confirmed episodes of recurrent venous thromboembolism during follow-up after discontinuing oral anticoagulation therapy (annual risk 9.3%, 95% CI 7.7%–11.3%). Men had a 13.7% (95% CI 10.8%–17.0%) annual risk. There was no combination of clinical predictors that satisfied our criteria for identifying a low-risk subgroup of men. Fifty-two percent of women had 0 or 1 of the following characteristics: hyperpigmentation, edema or redness of either leg; D-dimer ≥ 250 μg/L while taking warfarin; body mass index ≥ 30 kg/m2; or age ≥ 65 years. These women had an annual risk of 1.6% (95% CI 0.3%–4.6%). Women who had 2 or more of these findings had an annual risk of 14.1% (95% CI 10.9%–17.3%).

Interpretation

Women with 0 or 1 risk factor may safely discontinue oral anticoagulant therapy after 6 months of treatment for a first unprovoked venous thromboembolism. This criterion does not apply to men. (http://Clinicaltrials.gov trial register number NCT00261014) Venous thromboembolism is a common, potentially fatal, yet treatable, condition. The risk of a recurrent venous thromboembolic event after 3–6 months of oral anticoagulant therapy varies. Some groups of patients (e.g., those who had a venous thromboembolism after surgery) have a very low annual risk of recurrence (< 1%),1 and they can safely discontinue anticoagulant therapy.2 However, among patients with an unprovoked thromboembolism who discontinue anticoagulation therapy after 3–6 months, the risk of a recurrence in the first year is 5%–27%.3–6 In the second year, the risk is estimated to be 5%,3 and it is estimated to be 2%–3.8% for each subsequent year.5,7 The case-fatality rate for recurrent venous thromboembolism is between 5% and 13%.8,9 Oral anticoagulation therapy is very effective for reducing the risk of recurrence during therapy (> 90% relative risk [RR] reduction);3,4,10,11 however, this benefit is lost after therapy is discontinued.3,10,11 The risk of major bleeding with ongoing oral anticoagulation therapy among patients with venous thromboembolism is 0.9–3.0% per year,3,4,6,12 with an estimated case-fatality rate of 13%.13 Given that the long-term risk of fatal hemorrhage appears to balance the risk of fatal recurrent pulmonary embolism among patients with an unprovoked venous thromboembolism, clinicians are unsure whether continuing oral anticoagulation therapy beyond 6 months is necessary.2,14 Identifying subgroups of patients with an annual risk of less than 3% will help clinicians decide which patients can safely discontinue anticoagulant therapy. We sought to determine the clinical predictors or combinations of predictors that identify patients with an annual risk of venous thromboembolism of less than 3% after taking an oral anticoagulant for 5–7 months after a first unprovoked event.
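The four predictors and the 0-or-1 threshold described in the Results above amount to a simple scoring rule. The following Python sketch encodes that rule for illustration only; the class and function names are hypothetical (not from the study), and the risk figures in the comments are those reported in the abstract (annual recurrence risk of about 1.6% with 0–1 predictors versus 14.1% with 2 or more, in women only).

```python
# Minimal sketch of the clinical decision rule described above (illustrative only;
# names are hypothetical, not from the study). The rule applies to women only.
from dataclasses import dataclass


@dataclass
class PostVteAssessment:
    """Findings collected 5-7 months after starting oral anticoagulation."""
    leg_hyperpigmentation_edema_or_redness: bool  # either leg
    d_dimer_ug_per_l: float                       # measured while taking warfarin
    bmi_kg_per_m2: float
    age_years: int


def predictor_count(a: PostVteAssessment) -> int:
    """Count how many of the 4 reported predictors are present."""
    return sum([
        a.leg_hyperpigmentation_edema_or_redness,
        a.d_dimer_ug_per_l >= 250,
        a.bmi_kg_per_m2 >= 30,
        a.age_years >= 65,
    ])


def low_risk_subgroup(a: PostVteAssessment, is_female: bool) -> bool:
    """Women with 0 or 1 predictor had an annual recurrence risk of ~1.6%
    (vs. ~14.1% with 2 or more); no low-risk subgroup was identified for men."""
    return is_female and predictor_count(a) <= 1
```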

18.

Background

Imported malaria is an increasing problem. The arrival of 224 African refugees presented the opportunity to investigate the diagnosis and management of imported malaria within the Quebec health care system.

Methods

The refugees were visited at home 3–4 months after arrival in Quebec. For 221, a questionnaire was completed and permission obtained for access to health records; a blood sample for malaria testing was obtained from 210.

Results

Most of the 221 refugees (161 [73%]) had had at least 1 episode of malaria while in the refugee camps. Since arrival in Canada, 87 (39%) had had symptoms compatible with malaria for which medical care was sought. Complete or partial records were obtained for 66 of these refugees and for 2 asymptomatic adults whose children were found to have malaria: malaria had been appropriately investigated in 55 (81%); no malaria smear was requested for the other 13. Smears were reported as positive for 20 but confirmed for only 15 of the 55; appropriate therapy was verified for 10 of the 15. Of the 5 patients with a false-positive diagnosis of malaria, at least 3 received unnecessary therapy. Polymerase chain reaction testing of the blood sample obtained at the home visit revealed malaria parasites in 48 of the 210 refugees (23%; 95% confidence interval [CI] 17%–29%). The rate of parasite detection was more than twice as high among the 19 refugees whose smears were reported as negative but not sent for confirmation (47%; 95% CI 25%–71%).

Interpretation

This study has demonstrated errors of both omission and commission in the response to refugees presenting with possible malaria. Smears were not consistently requested for patients whose presenting complaints were not “typical” of malaria, and a large proportion of smears read locally as “negative” were not sent for confirmation. Further effort is required to ensure optimal malaria diagnosis and care in such high-risk populations. In many industrialized countries, the incidence of imported malaria is rising because of changing immigration patterns and refugee policies as well as increased travel to malaria-endemic regions.1,2,3,4,5,6,7,8,9,10 Imported malaria is not rare in Canada (300–1000 cases per year),3 the United States2,3,4 or other industrialized countries.5,6,7,8,9,10 Malaria can be a serious challenge in these countries because of its potentially rapid and lethal course.11,12,13,14 The task of front-line health care providers is made particularly difficult by the protean clinical presentations of malaria. Classic periodic fevers (tertian or quartan) are seen infrequently.9,15,16,17,18,19 Atypical and subtle presentations are especially common in individuals who have partial immunity (e.g., immigrants and refugees from disease-endemic areas) or are taking malaria prophylaxis (e.g., travellers).9,16,17 Even when malaria is considered, an accurate diagnosis can remain elusive or can be delayed as a result of inadequate or distant specialized laboratory support.19,20 In Quebec, the McGill University Centre for Tropical Diseases collaborates with the Laboratoire de santé publique du Québec to raise awareness of imported malaria, to offer training and quality-assurance testing, and to provide reference diagnostic services. A preliminary diagnosis is typically made by the local laboratory, and smears (with or without staining) are sent to the McGill centre, where they are reviewed within 2–48 hours, depending on the urgency of the request. Initial medical decisions are usually based on local findings and interpretations. Although malaria is a reportable disease, there is no requirement to use the reference service. On Aug. 9, 2000, 224 refugees from Tanzanian camps landed in Montréal aboard an airplane chartered by Canadian immigration authorities. Over the ensuing 5 weeks, the McGill University Centre for Tropical Diseases noted an increase in demand for malaria reference services and an apparent small “epidemic” of imported malaria. This “epidemic” prompted us to investigate the performance of the health care system in the diagnosis and management of imported malaria.
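The PCR detection rate in the Results above (48 of 210; 23%, 95% CI 17%–29%) can be roughly reproduced with a standard binomial confidence interval. The abstract does not state which interval method the authors used, so the Wilson score interval in this sketch is an assumption made for illustration; it gives approximately the reported bounds.

```python
# Hedged sketch: approximate the 95% CI for the PCR detection rate (48/210)
# using the Wilson score interval. The study's actual CI method is not stated.
from math import sqrt


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half_width = z * sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half_width, centre + half_width


low, high = wilson_ci(48, 210)
print(f"PCR-positive: {48 / 210:.1%} (95% CI {low:.1%}-{high:.1%})")
# Prints roughly 22.9% (95% CI 17.7%-29.0%), close to the reported 23% (17%-29%).
```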

19.
A broad Gag-specific CD8+ T-cell response is associated with effective control of adult human immunodeficiency virus (HIV) infection. The association of certain HLA class I molecules, such as HLA-B*57, -B*5801, and -B*8101, with immune control is linked to mutations within Gag epitopes presented by these alleles that allow HIV to evade the immune response but that also reduce viral replicative capacity. Transmission of such viruses containing mutations within Gag epitopes results in lower viral loads in adult recipients. In this study of pediatric infection, we tested the hypothesis that children may tend to progress relatively slowly if either they themselves possess one of the protective HLA-B alleles or the mother possesses one of these alleles, thereby transmitting a low-fitness virus to the child. We analyzed HLA type, CD8+ T-cell responses, and viral sequence changes for 61 mother-child pairs from Durban, South Africa, who were monitored from birth. Slow progression was significantly associated with the mother or child possessing one of the protective HLA-B alleles, and more significantly so when the protective allele was not shared by mother and child (P = 0.007). Slow progressors tended to make CD8+ T-cell responses to Gag epitopes presented by the protective HLA-B alleles, in contrast to progressors expressing the same alleles (P = 0.07; Fisher's exact test). Mothers expressing the protective alleles were significantly more likely to transmit escape variants within the Gag epitopes presented by those alleles than mothers not expressing those alleles (75% versus 21%; P = 0.001). Reversion of transmitted escape mutations was observed in all slow-progressing children whose mothers possessed protective HLA-B alleles. These data show that HLA class I alleles influence disease progression in pediatric as well as adult infection, both as a result of the CD8+ T-cell responses generated in the child and through the transmission of low-fitness viruses by the mother. Human immunodeficiency virus (HIV)-specific CD8+ T cells play a central role in controlling viral replication (12). It is the specificity of the CD8+ T-cell response, particularly the response to Gag, that is associated with low viral loads in HIV infection (7, 17, 34). Although immune control is undermined by the selection of viral mutations that prevent recognition by the CD8+ T cells, evasion of Gag-specific responses mediated by protective class I HLA-B alleles typically brings a reduction in viral replicative capacity, facilitating subsequent immune control of HIV (2, 20, 21). The same principle has been demonstrated in studies of simian immunodeficiency virus infection (18, 22). Recent studies showed that the class I HLA-B alleles that protect against disease progression present more Gag-specific CD8+ T-cell epitopes and drive the selection of more Gag-specific escape mutations than those alleles that are associated with high viral loads (23). These protective HLA-B alleles not only are beneficial to infected individuals expressing those alleles but also benefit a recipient following transmission, since the transmitted virus carrying multiple Gag escape mutations may have substantially reduced fitness (3, 4, 8).
However, there is no benefit to the recipient if he or she shares the same protective allele as the donor because the transmitted virus carries escape mutations in the Gag epitopes that would otherwise be expected to mediate successful immune control in the recipient (8, 11). The sharing of HLA alleles between donor and recipient occurs frequently in mother-to-child transmission (MTCT). The risk of MTCT is related to viral load in the mother, and a high viral load is associated with nonprotective alleles, such as HLA-B*18 and -B*5802. This may contribute in two distinct ways to the more rapid progression observed in pediatric HIV infection (24, 26, 27). First, because infected children share 50% or more of their HLA alleles with the transmitting mother, they are less likely than adults to carry protective HLA alleles (16). Thus, infected children as a group carry fewer protective HLA alleles and more nonprotective HLA alleles. Second, even when the child has a protective allele, such as HLA-B*27, this allele does not offer protection if the maternally transmitted virus carries escape mutations within the key Gag epitopes that are presented by the protective allele (11, 19). However, it is clear that infected children who possess protective alleles, such as HLA-B*27 or HLA-B*57, can achieve durable immune control of HIV infection if the virus transmitted from the mother is not preadapted to those alleles (6, 10). HIV-specific CD8+ T-cell responses are detectable from birth in infected infants (32). Furthermore, as in adult infection (3, 8), HIV-infected children have the potential to benefit from transmission of low-fitness viruses in the situation where the mother possesses protective HLA alleles and the child does not share those protective alleles. MTCT of low-fitness viruses carrying CD8+ T-cell escape mutations was recently documented (28; J. Prado et al., unpublished data). In this study, undertaken in Durban, South Africa, we set out to test the hypothesis that HIV-infected children are less likely to progress rapidly to disease if either the infected child or the transmitting mother possesses a protective HLA allele that is not shared. The HLA alleles most strongly associated with low viral loads and high CD4 counts in a cohort of >1,200 HIV-infected adults in Durban are HLA-B*57 (-B*5702 and -B*5703), HLA-B*5801, and HLA-B*8101 (16; A. Leslie et al., unpublished data). These four alleles all present Gag-specific CD8+ T-cell epitopes, and in each case the escape mutations selected in these epitopes reduce viral replicative capacity (2–4, 8, 21, 23). Analyzing a previously described cohort of 61 HIV-infected children in Durban, South Africa (24, 26, 32), all of whom were monitored from birth, we first addressed the question of whether possession of any of these four alleles by either mother or child is associated with slower disease progression in the child and then determined whether sharing of protective alleles by mother and child affects the ability of the child to make the Gag-specific CD8+ T-cell responses restricted by the shared allele.

20.