Similar Articles

Found 20 similar articles (search time: 31 ms)
1.

Background

Despite the increase in the number of Aboriginal people with end-stage renal disease around the world, little is known about their health outcomes when undergoing renal replacement therapy. We evaluated differences in survival and rate of renal transplantation among Aboriginal and white patients after initiation of dialysis.

Methods

Adult patients who were Aboriginal or white and who commenced dialysis in Alberta, Saskatchewan or Manitoba between Jan. 1, 1990, and Dec. 31, 2000, were recruited for the study and were followed until death, transplantation, loss to follow-up or the end of the study (Dec. 31, 2001). We used Cox proportional hazards models to examine the effect of race on patient survival and likelihood of transplant, with adjustment for potential confounders.

Results

Of the 4333 adults who commenced dialysis during the study period, 15.8% were Aboriginal and 72.4% were white. Unadjusted rates of death per 1000 patient-years during the study period were 158 (95% confidence interval [CI] 144–176) for Aboriginal patients and 146 (95% CI 139–153) for white patients. When follow-up was censored at the time of transplantation, the age-adjusted risk of death after initiation of dialysis was significantly higher among Aboriginal patients than among white patients (hazard ratio [HR] 1.15, 95% CI 1.02–1.30). The greater risk of death associated with Aboriginal race was no longer observed after adjustment for diabetes mellitus and other comorbid conditions (adjusted HR 0.89, 95% CI 0.77–1.02) and did not appear to be associated with socioeconomic status. During the study period, unadjusted transplantation rates per 1000 patient-years were 62 (95% CI 52–75) for Aboriginal patients and 133 (95% CI 125–142) for white patients. Aboriginal patients were significantly less likely to receive a renal transplant after commencing dialysis, even after adjustment for potential confounders (HR 0.43, 95% CI 0.35–0.53). In an additional analysis that included follow-up after transplantation for those who received renal allografts, the age-adjusted risk of death associated with Aboriginal race (HR 1.36, 95% CI 1.21–1.52) was higher than when follow-up after transplantation was not considered, perhaps because of the lower rate of transplantation among Aboriginals.
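The unadjusted figures above are crude event rates per 1000 patient-years with confidence intervals. A minimal sketch of that arithmetic, using the standard large-sample normal approximation on the log scale; the death and patient-year counts below are hypothetical, chosen only for illustration, since the abstract reports rates rather than raw counts:

```python
import math

def rate_per_1000_py(deaths: int, patient_years: float) -> tuple[float, float, float]:
    """Crude event rate per 1000 patient-years with a 95% CI.

    Uses the usual normal approximation on the log scale:
    CI = rate * exp(+/- 1.96 / sqrt(deaths)).
    """
    rate = deaths / patient_years * 1000
    half_width = 1.96 / math.sqrt(deaths)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)

# Hypothetical counts chosen only to illustrate the calculation;
# they are not the study's raw data (which the abstract does not report).
rate, lo, hi = rate_per_1000_py(deaths=300, patient_years=1900.0)
print(f"{rate:.0f} per 1000 patient-years (95% CI {lo:.0f}-{hi:.0f})")
```

The study's exact interval method may differ; this is only the conventional approximation.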

Interpretation

Survival among dialysis patients was similar for Aboriginal and white patients after adjustment for comorbidity. However, despite universal access to health care, Aboriginal people had a significantly lower rate of renal transplantation, which might have adversely affected their survival when receiving renal replacement therapy.

In North America and the Antipodes, the incidence of diabetes among adolescent and adult Aboriginals has risen dramatically,1,2,3,4 with corresponding increases in the prevalence of diabetic nephropathy.5,6,7 Aboriginal people in Canada have experienced disproportionately high incidence rates of end-stage renal disease (ESRD), with an 8-fold increase in the number of prevalent dialysis patients between 1980 and 2000.8 Although the incidence of ESRD appears to have decreased in recent years, the prevalence of diabetes mellitus and its complications is rising, especially among young people.9,10,11

Most work evaluating health outcomes among Aboriginal people considers either the general population12 or diseases for which interventions are implemented over a short period, such as alcohol abuse,13 injury14 or critical illness.15 Death and markers of poor health are significantly more common among Aboriginal people than among North Americans of European ancestry, perhaps because of the greater prevalence of diabetes mellitus, adverse health effects due to lower socioeconomic status16 and reduced access to primary care.17 Aboriginal patients may also face unique barriers to care, including mistrust of non-Aboriginal providers, institutional discrimination or preference for traditional remedies.18 These factors may be most relevant when contact with physicians is infrequent, which obstructs development of a therapeutic relationship. In contrast, ESRD is a chronic illness that requires ongoing care from a relatively small, stable multidisciplinary team.

Although recent evidence highlights racial inequalities in morbidity and mortality among North Americans with ESRD, most studies have focused on black or Hispanic populations.19 We conducted this study to evaluate rates of death and renal transplantation among Aboriginal people after initiation of dialysis in Alberta, Saskatchewan and Manitoba.

2.

Background

Aboriginal women have been identified as having poorer pregnancy outcomes than other Canadian women, but information on risk factors and outcomes has been acquired mostly from retrospective databases. We compared prenatal risk factors and birth outcomes of First Nations and Métis women with those of other participants in a prospective study.

Methods

During the 12-month period from July 1994 to June 1995, we invited expectant mothers in all obstetric practices affiliated with a single teaching hospital in Edmonton to participate. Women were recruited at their first prenatal visit and followed through delivery. Sociodemographic and clinical data were obtained by means of a patient questionnaire, and microbiological data were collected at 3 points during gestation: in the first and second trimesters and during labour. Our primary outcomes of interest were low birth weight (birth weight less than 2500 g), prematurity (birth at less than 37 weeks' gestation) and macrosomia (birth weight greater than 4000 g).

Results

Of the 2047 women consecutively enrolled, 1811 completed the study through delivery. Aboriginal women accounted for 70 (3.9%) of the subjects who completed the study (45 First Nations women and 25 Métis women). Known risk factors for adverse pregnancy outcome were more common among Aboriginal than among non-Aboriginal women, including previous premature infant (21% v. 11%), smoking during the current pregnancy (41% v. 13%), presence of bacterial vaginosis in midgestation (33% v. 13%) and poor nutrition as measured by meal consumption. Although Aboriginal women were more likely than non-Aboriginal women to have babies of low birth weight (odds ratio [OR] 1.46, 95% confidence interval [CI] 0.52–4.15) or babies born prematurely (OR 1.45, 95% CI 0.57–3.72) and more likely to have babies with macrosomia (OR 2.04, 95% CI 1.03–4.03), these differences were attenuated and statistically nonsignificant after adjustment for smoking, cervicovaginal infection and income (adjusted OR for low birth weight 0.85, 95% CI 0.19–3.78; for prematurity 0.90, 95% CI 0.21–3.89; and for macrosomia 2.12, 95% CI 0.84–5.36).
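The unadjusted odds ratios quoted above come from 2×2 tables of outcome by group. A minimal sketch of the calculation with a Wald-type 95% confidence interval; the cell counts below are hypothetical, since the abstract reports only the ratios, not the underlying tables:

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int) -> tuple[float, float, float]:
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.

    a = exposed cases,   b = exposed non-cases
    c = unexposed cases, d = unexposed non-cases
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (or_,
            math.exp(math.log(or_) - 1.96 * se_log),
            math.exp(math.log(or_) + 1.96 * se_log))

# Hypothetical 2x2 counts for illustration only.
or_, lo, hi = odds_ratio(a=6, b=64, c=75, d=1666)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The adjusted ORs in the abstract additionally control for smoking, infection and income via regression, which this crude calculation does not attempt.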

Interpretation

After adjustment for potential confounding factors, we found no statistically significant relation between Aboriginal status and birth outcome.

It is generally recognized that Aboriginal women experience poorer birth outcomes than other North American women, including higher rates of stillbirth,1 low-birth-weight infants1,2,3 and prematurity.2,3 Although significant efforts have been made to reduce Aboriginal infant mortality rates, these rates remain higher than for other infants in both Canada4 and the United States.5 Little is known about the reasons for differences in birth outcomes, although social, economic, medical and prenatal care factors have been suggested. Recent publications, based on retrospective analyses of large databases, have confirmed disparities in birth outcomes between Aboriginal and all other groups,3,6,7 but there is a paucity of prospective data. In addition, although the term “Aboriginal” refers to a heterogeneous population comprising First Nations people, Métis and Inuit, there are few comparisons between specific Aboriginal groups or of Aboriginal groups with the general population.

We report here the results of a prospective study in a general obstetric population, comparing birth outcomes and known pregnancy risk factors of Aboriginal women with those of non-Aboriginal Canadian women. In addition to well-recognized socioeconomic and reproductive risk factors, we investigated the prevalence of maternal cervicovaginal infections, which have been increasingly linked to prematurity.8,9

3.

Background

Despite the high prevalence of obesity and diabetes in the Canadian Aboriginal population, it is unknown whether the current thresholds for body mass index and waist circumference derived from white populations are appropriate for Aboriginal people. We compared the risk of cardiovascular disease among Canadian Aboriginal and European populations using the current thresholds for body mass index and waist circumference.

Methods

Healthy Aboriginal (n = 195) and European (n = 201) participants (matched for sex and body mass index range) were assessed for demographic characteristics, lifestyle factors, total and central adiposity and risk factors for cardiovascular disease. Among Aboriginal and European participants, we compared the relation between body mass index and each of the following 3 factors: percent body fat, central adiposity and cardiovascular disease risk factors. We also compared the relation between waist circumference and the same 3 factors.

Results

The use of body mass index underestimated percent body fat by 1.3% among Aboriginal participants compared with European participants (p = 0.025). The use of waist circumference overestimated abdominal adipose tissue by 26.7 cm2 among Aboriginal participants compared with European participants (p = 0.007). However, there was no difference in how waist circumference estimated subcutaneous abdominal and visceral adipose tissue between the 2 groups. At the same body mass index and waist circumference, we observed no differences in the majority of cardiovascular disease risk factors between Aboriginal and European participants. The prevalence of dyslipidemia, hypertension, impaired fasting glucose and metabolic syndrome was similar among participants in the 2 groups after adjustment for body mass index, waist circumference, age and sex.

Interpretation

We found no difference in the relation between body mass index and cardiovascular disease risk between men and women of Aboriginal and European descent. We also found no difference in the relation between waist circumference and cardiovascular disease risk among these groups. These data support the use of current anthropometric thresholds in the Canadian Aboriginal population.

The Canadian Aboriginal population has undergone a rapid social and environmental transition over the past several decades, which has led to a marked increase in the prevalence of obesity. In the general Canadian population, the prevalence of obesity (body mass index ≥ 30 kg/m2) is 23%;1 however, the prevalence in the Aboriginal population is double that amount.2–4 The increased prevalence of obesity among Aboriginal people is important because obesity is an independent risk factor for a number of chronic illnesses.5,6 Indeed, many of these illnesses are already more common in the Aboriginal population than in other Canadian populations.3,7

Obesity, which is defined as an excess of body fat, is assessed by use of body mass index and waist circumference as indirect measures of total and central adiposity.8 Current thresholds for body mass index and waist circumference are based on data predominantly from white people of European descent.9,10 However, these thresholds may not be suitable for all populations. Specific thresholds have been suggested for Asian people,11 because those of Asian descent generally have more risk factors and a greater amount of body fat and visceral adipose tissue than Caucasians of the same body mass index and waist circumference.12–17 Specific thresholds may also be required for Canadian Aboriginal people because their ancestors are believed to have come from Asia more than 10 000 years ago.

It is unknown whether the current thresholds for body mass index and waist circumference are relevant for Canadian Aboriginal people with respect to body fat distribution and cardiovascular disease risk factors. Thus, we investigated the relation between body mass index and total and central adiposity among people of Aboriginal and European descent. We also investigated the relation between waist circumference and total and central adiposity in these 2 groups. In addition, we examined the prevalence of risk factors among Aboriginal and European people using the current thresholds for body mass index and waist circumference.
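The anthropometric cut-points discussed here (e.g. body mass index ≥ 30 kg/m2 for obesity) are simple threshold rules. A small sketch applying the conventional WHO-style BMI categories that the abstract's thresholds correspond to; the example weight and height are arbitrary:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_category(b: float) -> str:
    """Conventional BMI categories with cut-points at 18.5, 25 and 30 kg/m^2,
    the white-population-derived thresholds whose suitability the study tests."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    return "obese"

# Arbitrary example values, not study data.
print(bmi_category(bmi(95.0, 1.75)))
```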

4.

Background

Ethnic disparities in access to health care and health outcomes are well documented. It is unclear whether similar differences exist between Aboriginal and non-Aboriginal people with chronic kidney disease in Canada. We determined whether access to care differed between status Aboriginal people (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease.

Methods

We identified 106 511 non-Aboriginal and 1182 Aboriginal patients with chronic kidney disease (estimated glomerular filtration rate less than 60 mL/min/1.73 m2). We compared outcomes, including admissions to hospital that may have been preventable with appropriate outpatient care (ambulatory-care–sensitive conditions), as well as use of specialist services, including visits to nephrologists and general internists.

Results

Aboriginal people were almost twice as likely as non-Aboriginal people to be admitted to hospital for an ambulatory-care–sensitive condition (rate ratio 1.77, 95% confidence interval [CI] 1.46–2.13). Aboriginal people with severe chronic kidney disease (estimated glomerular filtration rate < 30 mL/min/1.73 m2) were 43% less likely than non-Aboriginal people with severe chronic kidney disease to visit a nephrologist (hazard ratio 0.57, 95% CI 0.39–0.83). There was no difference in the likelihood of visiting a general internist (hazard ratio 1.00, 95% CI 0.83–1.21).

Interpretation

Increased rates of hospital admissions for ambulatory-care–sensitive conditions and a reduced likelihood of nephrology visits suggest potential inequities in care among status Aboriginal people with chronic kidney disease. The extent to which this may contribute to the higher rate of kidney failure in this population requires further exploration.

Ethnic disparities in access to health care are well documented;1,2 however, the majority of studies include black and Hispanic populations in the United States. The poorer health status and higher mortality of Aboriginal populations compared with non-Aboriginal populations,3,4 particularly among those with chronic medical conditions,5,6 raise the question as to whether there is differential access to health care and management of chronic medical conditions in this population.

The prevalence of end-stage renal disease, which commonly results from chronic kidney disease, is about twice as high among Aboriginal people as among non-Aboriginal people.7,8 Given that the progression of chronic kidney disease can be delayed by appropriate therapeutic interventions9,10 and that delayed referral to specialist care is associated with increased mortality,11,12 issues such as access to health care may be particularly important in the Aboriginal population. Although previous studies have suggested that there is decreased access to primary and specialist care in the Aboriginal population,13–15 these studies are limited by the inclusion of patients from a single geographically isolated region,13 the use of survey data,14 and the inability to differentiate between different types of specialists and reasons for the visit.15

In addition to physician visits, admission to hospital for ambulatory-care–sensitive conditions (conditions that, if managed effectively in an outpatient setting, do not typically result in admission to hospital) has been used as a measure of access to appropriate outpatient care.16,17 Thus, admission to hospital for an ambulatory-care–sensitive condition reflects a potentially preventable complication resulting from inadequate access to care. Our objective was to determine whether access to health care differs between status Aboriginal people (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease. We assessed differences in care by 2 measures: admission to hospital for an ambulatory-care–sensitive condition related to chronic kidney disease, and receipt of nephrology care for severe chronic kidney disease as recommended by clinical practice guidelines.18

5.

Background

Changes to Canadian Standards Association (CSA) standards for playground equipment prompted the removal of hazardous equipment from 136 elementary schools in Toronto. We conducted a study to determine whether applying these new standards and replacing unsafe playground equipment with safe equipment reduced the number of school playground injuries.

Methods

A total of 86 of the 136 schools with hazardous play equipment had the equipment removed and replaced with safer equipment within the study period (intervention schools). Playground injury rates before and after equipment replacement were compared in intervention schools. A database of incident reports from the Ontario School Board Insurance Exchange was used to identify injury events. There were 225 schools whose equipment did not require replacement (nonintervention schools); these schools served as a natural control group for background injury rates during the study period. Injury rates per 1000 students per month, relative risks (RRs) and 95% confidence intervals (CIs) were calculated, adjusting for clustering within schools.

Results

The rate of injury in intervention schools decreased from 2.61 (95% CI 1.93–3.29) per 1000 students per month before unsafe equipment was removed to 1.68 (95% CI 1.31–2.05) after it was replaced (RR 0.70, 95% CI 0.62–0.78). This translated into 550 injuries avoided in the post-intervention period. In nonintervention schools, the rate of injury increased from 1.44 (95% CI 1.07–1.81) to 1.81 (95% CI 1.07–2.53) during the study period (RR 1.40, 95% CI 1.29–1.52).
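The "injuries avoided" figure is the gap between observed post-intervention injuries and those expected had the pre-removal rate persisted, accumulated over the post-intervention exposure. A rough sketch of both calculations; the event counts and student-month exposure below are hypothetical (the abstract reports rates, not exposure), and the published RR of 0.70 was additionally adjusted for clustering within schools, which this crude ratio ignores:

```python
def crude_rate_ratio(events_after: int, exposure_after: float,
                     events_before: int, exposure_before: float) -> float:
    """Ratio of two crude injury rates (events per unit of exposure,
    here student-months). No adjustment for clustering within schools."""
    return (events_after / exposure_after) / (events_before / exposure_before)

def injuries_avoided(rate_before_per_1000: float, rate_after_per_1000: float,
                     student_months: float) -> float:
    """Expected minus observed injuries over the follow-up exposure,
    i.e. the counterfactual had the pre-removal rate persisted."""
    return (rate_before_per_1000 - rate_after_per_1000) * student_months / 1000

# Hypothetical counts/exposure, chosen to be consistent with the quoted
# rates of 2.61 and 1.68 injuries per 1000 student-months.
print(f"crude RR {crude_rate_ratio(168, 100_000, 261, 100_000):.2f}")
print(round(injuries_avoided(2.61, 1.68, 600_000)))
```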

Interpretation

The CSA standards were an effective tool in identifying hazardous playground equipment. Removing and replacing unsafe equipment is an effective strategy for preventing playground injuries.

Playgrounds provide a recreational refuge for children, away from traffic and other outdoor hazards. In addition, playground activities can enhance children's cognitive, physical and psychosocial skills. Playground safety is of concern to physicians, parents and injury prevention advocates. Of all playground injuries that result in a visit to a hospital emergency department, 27%–40% are fractures and 17% require hospital admission — a greater frequency of admission than that associated with any other cause of pediatric injury except traffic.1,2,3,4 The results of an observational study in Wales showed that 90% of all playground injuries resulting in a visit to an emergency department were related to the playground equipment.1 As might be expected, playgrounds are the location within elementary schools with the highest injury rates and the most severe injuries.5 In a study conducted in Kingston, Ont., children were 12 times more likely to be injured in school playgrounds than in municipal playgrounds.3

Standards for playgrounds have been developed both in Canada6 and internationally.7,8,9,10,11,12 The Canadian Standards Association (CSA) standards for the design, installation and maintenance of playgrounds and equipment were most recently revised in 1998.6 No published data exist on the relation between equipment standards and injury rates. If applying standards can identify unsafe playgrounds and, more importantly, reduce the rate of child injury, such standards would be a useful tool for school and municipal authorities responsible for playgrounds.

We sought to determine the effect of replacing unsafe playground equipment (as determined using the new CSA standards) on injury rates among school children.

6.

Background

In 1997, we found a higher prevalence of HIV among female than among male injection drug users in Vancouver. Factors associated with HIV incidence among women in this setting were unknown. In the present study, we sought to compare HIV incidence rates among male and female injection drug users in Vancouver and to compare factors associated with HIV seroconversion.

Methods

This analysis was based on 939 participants recruited between May 1996 and December 2000 who were seronegative at enrolment with at least one follow-up visit completed, and who were studied prospectively until March 2001. Incidence rates were calculated using the Kaplan–Meier method. The Cox proportional hazards regression model was used to identify independent predictors of time to HIV seroconversion.

Results

As of March 2001, seroconversion had occurred in 110 of 939 participants (64 men, 46 women), yielding a cumulative incidence rate of HIV at 48 months of 13.4% (95% confidence interval [CI] 11.0%–15.8%). Incidence was higher among women than among men (16.6% v. 11.7%, p = 0.074). Multivariate analysis of the female participants'' practices revealed injecting cocaine once or more per day compared with injecting less than once per day (adjusted relative risk [RR] 2.6, 95% CI 1.4–4.8), requiring help injecting compared with not requiring such assistance (adjusted RR 2.1, 95% CI 1.1–3.8), having unsafe sex with a regular partner compared with not having unsafe sex with a regular partner (adjusted RR 2.9, 95% CI 0.9–9.5) and having an HIV-positive sex partner compared with not having an HIV-positive sex partner (adjusted RR 2.7, 95% CI 1.0–7.7) to be independent predictors of time to HIV seroconversion. Among male participants, injecting cocaine once or more per day compared with injecting less than once per day (adjusted RR 3.3, 95% CI 1.9–5.6), self-reporting identification as an Aboriginal compared with not self-reporting identification as an Aboriginal (adjusted RR 2.5, 95% CI 1.4–4.2) and borrowing needles compared with not borrowing needles (adjusted RR 2.0, 95% CI 1.1–3.4) were independent predictors of HIV infection.
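The cumulative incidence at 48 months quoted above is one minus the Kaplan–Meier survival estimate at that horizon. A self-contained sketch of the estimator on toy data (not the VIDUS cohort):

```python
def km_cumulative_incidence(times, events, horizon):
    """Kaplan-Meier estimate of cumulative incidence by `horizon`.

    times  : follow-up time for each subject (e.g. months)
    events : 1 if the subject seroconverted at that time, 0 if censored
    Returns 1 - S(horizon), the KM cumulative incidence.
    """
    survival = 1.0
    pairs = list(zip(times, events))
    # Step through each distinct event time up to the horizon.
    for t in sorted({t for t, e in pairs if e == 1 and t <= horizon}):
        n_risk = sum(1 for ti, _ in pairs if ti >= t)       # still at risk
        d = sum(1 for ti, e in pairs if ti == t and e == 1)  # events at t
        survival *= 1 - d / n_risk
    return 1 - survival

# Toy data for illustration only: times in months, event=1 marks
# seroconversion, event=0 marks censoring.
times  = [6, 12, 12, 18, 24, 30, 36, 48, 48, 48]
events = [1,  1,  0,  1,  0,  1,  0,  0,  1,  0]
print(round(km_cumulative_incidence(times, events, horizon=48), 3))
```

The study used this estimator on 939 subjects; with censoring, the KM curve differs from the naive fraction 110/939, which is why the abstract reports a 48-month cumulative incidence rather than a raw proportion.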

Interpretation

HIV incidence rates among female injection drug users in Vancouver are about 40% higher than those among male injection drug users. The different risk factors for seroconversion among women as opposed to men suggest that sex-specific prevention initiatives are urgently required.

Recent reports in Canada and numerous other countries indicate that HIV is increasingly affecting women.1 Before 1995, adult women in Canada accounted for 9.6% of all positive HIV tests for which the age and sex of the person being tested were known. By 1995, this proportion had increased to 18.5%, and it reached 23.9% in 2000. In addition, 39% of all new HIV infections among women in 2000 were attributed to injection drug use.2 These data are consistent with findings in the United States where, in 1999, women accounted for 23% of all reported AIDS cases in adults, of which 42% were attributed to injection drug use.3

These data clearly indicate that the face of the epidemic is changing. Whereas some factors unique to the transmission of HIV to women are known, basic and behavioural research efforts addressing sex-related and drug-related vulnerabilities among female injection drug users (IDUs) are lacking.4 At a time when women's vulnerability to HIV infection is becoming increasingly apparent worldwide,5,6 a better understanding of the processes and factors that cause drug-related harm among women in industrialized countries is urgently required.

Since the mid-1990s, the Downtown Eastside of Vancouver, British Columbia, has experienced an explosive and ongoing HIV epidemic among IDUs, with annual HIV incidence rates reaching as high as 19% in 1997.7,8 When subjects were enrolled in the Vancouver Injection Drug User Study (VIDUS), it was found that the baseline HIV prevalence was higher among women than among men (35.2% v. 25.8%).7 Follow-up of this cohort now allows an investigation aimed at identifying the predictors of HIV seroconversion among female and male IDUs. Therefore, we sought to compare HIV incidence rates among male and female IDUs in Vancouver and to compare risk factors associated with HIV seroconversion.

7.

Background:

Although Aboriginal adults have a higher risk of end-stage renal disease than non-Aboriginal adults, the incidence and causes of end-stage renal disease among Aboriginal children and young adults are not well described.

Methods:

We calculated age- and sex-specific incidences of end-stage renal disease among Aboriginal people less than 22 years of age using data from a national organ failure registry. Incidence rate ratios were used to compare rates between Aboriginal and white Canadians. To contrast causes of end-stage renal disease by ethnicity and age, we calculated the odds of congenital diseases, glomerulonephritis and diabetes for Aboriginal people and compared them with those for white people in the following age strata: 0 to less than 22 years, 22 to less than 40 years, 40 to less than 60 years, and 60 years and older.

Results:

Incidence rate ratios of end-stage renal disease for Aboriginal children and young adults (age < 22 yr, v. white people) were 1.82 (95% confidence interval [CI] 1.40–2.38) for boys and 3.24 (95% CI 2.60–4.05) for girls. Compared with white people, congenital diseases were less common among Aboriginal people aged less than 22 years (odds ratio [OR] 0.56, 95% CI 0.36–0.86), and glomerulonephritis was more common (OR 2.18, 95% CI 1.55–3.07). An excess of glomerulonephritis, but not diabetes, was seen among Aboriginal people aged 22 to less than 40 years. The converse was true (higher risk of diabetes, lower risk of glomerulonephritis) among Aboriginal people aged 40 years and older.
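The incidence rate ratios above compare event rates between two groups, with the confidence interval built on the log scale. A minimal sketch; the case counts and person-years below are hypothetical, as the abstract does not report the registry denominators:

```python
import math

def incidence_rate_ratio(cases1: int, py1: float,
                         cases0: int, py0: float) -> tuple[float, float, float]:
    """Incidence rate ratio (group 1 vs. group 0) with a 95% CI on the
    log scale: SE(log IRR) = sqrt(1/cases1 + 1/cases0)."""
    irr = (cases1 / py1) / (cases0 / py0)
    se = math.sqrt(1 / cases1 + 1 / cases0)
    return (irr,
            math.exp(math.log(irr) - 1.96 * se),
            math.exp(math.log(irr) + 1.96 * se))

# Hypothetical counts and person-years for illustration only.
irr, lo, hi = incidence_rate_ratio(cases1=90, py1=500_000,
                                   cases0=110, py0=2_000_000)
print(f"IRR {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```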

Interpretation:

The incidence of end-stage renal disease is higher among Aboriginal children and young adults than among white children and young adults. This higher incidence may be driven by an increased risk of glomerulonephritis in this population.

Compared with white Canadians, Aboriginal Canadians have a higher prevalence of end-stage renal disease,1,2 which is generally attributed to their increased risk for diabetes. However, there has been limited investigation of the incidence and causes of end-stage renal disease among Aboriginal children and young adults. Because most incident cases of diabetes are identified in middle-aged adults, an excess risk of end-stage renal disease in young people would not be expected if the high risk of diabetes is responsible for higher overall rates of end-stage renal disease among Aboriginal people. About 12.3% of children with end-stage renal disease in Canada are Aboriginal,3 but only 6.1% of Canadian children (age < 19 yr) are Aboriginal.4,5

A few reports suggest that nondiabetic renal disease is common among Aboriginal populations in North America.2,6–8 Aboriginal adults in Saskatchewan are twice as likely as white adults to have end-stage renal disease caused by glomerulonephritis,7,8 and an increased rate of mesangial proliferative glomerulonephritis has been reported among Aboriginal people in the United States.6,9 These studies suggest that diabetes may be a comorbid condition rather than the sole cause of kidney failure among some Aboriginal people in whom diabetic nephropathy is diagnosed using clinical features alone.

We estimated incidence rates of end-stage renal disease among Aboriginal children and young adults in Canada and compared them with the rates seen among white children and young adults. In addition, we compared relative odds of congenital renal disease, glomerulonephritis and diabetic nephropathy in Aboriginal people with the relative odds of these conditions in white people.

8.
9.
10.

Background

Transient ischemic attacks (TIAs) often herald a stroke, but little is known about the acute natural history of TIAs. Our objective was to quantify the early risk of stroke after a TIA in patients with internal carotid artery disease.

Methods

Using patient data from the medical arm of the North American Symptomatic Carotid Endarterectomy Trial, we calculated the risk of ipsilateral stroke in the territory of the symptomatic internal carotid artery within 2 and 90 days after a first-recorded hemispheric TIA. We also studied similar outcomes among patients in the trial who had a first-recorded completed hemispheric stroke.

Results

For patients with a first-recorded hemispheric TIA (n = 603), the 90-day risk of ipsilateral stroke was 20.1% (95% confidence interval [CI] 17.0%–23.2%), higher than the 2.3% risk (95% CI 1.0%–3.6%) for patients with a hemispheric stroke (n = 526). The 2-day risks were 5.5% and 0.0%, respectively. Patients with more severe stenosis of the internal carotid artery (> 70%) appeared to be at no greater risk of stroke than patients with lesser degrees of stenosis (adjusted hazard ratio 1.1, 95% CI 0.7–1.7). Infarct on brain imaging (adjusted hazard ratio 2.1, 95% CI 1.5–3.0) and the presence of intracranial major-artery disease (adjusted hazard ratio 1.9, 95% CI 1.3–2.7) doubled the early risk of stroke in patients with a hemispheric TIA.

Interpretation

Patients who had a hemispheric TIA related to internal carotid artery disease had a high risk of stroke in the first few days after the TIA. Early risk of stroke was not affected by the degree of internal carotid artery stenosis.

A transient ischemic attack (TIA) is a common neurological condition that is seen by all physician groups, including family and emergency physicians, internists, vascular surgeons, and neurologists. In Canada, half a million adults aged 18 and over have been diagnosed with a TIA.1 Presenting symptoms vary depending on which arterial supply is compromised, but they commonly consist of a brief episode of weakness, numbness, loss of vision or speech difficulty with complete recovery.

Atherosclerotic disease of the carotid arteries outside the cranial cavity has long been recognized as the most common source of emboli that then travel to the brain causing stroke.2,3,4 TIAs are often early warning signs of atherosclerotic disease. About 10% of patients with a TIA presenting to California emergency departments returned to the emergency department with a stroke within 90 days.5 In half of the patients, the stroke occurred within the first 48 hours after the TIA. Similar 90-day results have been observed in earlier community-based studies.6,7,8 However, these studies all included some patients who had emboli from heart lesions or arrhythmias and some patients who had small-vessel disease as a cause of their TIA.

Although several large stroke-prevention trials among patients with TIAs9,10,11,12,13,14,15,16,17,18 have presented some data on the risk of stroke from pre-existing atherosclerotic disease of the carotid arteries, they are limited because enrolment in the trials was delayed by 1 or more months after the TIA occurred. Small case series19,20 have examined the relation between carotid artery disease and TIA, but without assessment of stroke outcome. Thus, the influence of atherosclerotic disease in the carotid artery on early stroke occurrence among people presenting with a TIA has not been assessed in any large study.21

Using patients from the medical arm of the North American Symptomatic Carotid Endarterectomy Trial (NASCET), we describe the early risk of stroke in a large number of patients with a TIA in whom internal carotid artery disease was the only presumed cause.

11.

Background

Survivors of out-of-hospital cardiac arrest are at high risk of recurrent arrests, many of which could be prevented with implantable cardioverter defibrillators (ICDs). We sought to determine the ICD insertion rate among survivors of out-of-hospital cardiac arrest and to determine factors associated with ICD implantation.

Methods

The Ontario Prehospital Advanced Life Support (OPALS) study is a prospective, multiphase, before–after study assessing the effectiveness of prehospital interventions for people experiencing cardiac arrest, trauma or respiratory arrest in 19 Ontario communities. We linked OPALS data describing survivors of cardiac arrest with data from all defibrillator implantation centres in Ontario.

Results

From January 1997 to April 2002, 454 patients in the OPALS study survived to hospital discharge after experiencing an out-of-hospital cardiac arrest. The mean age was 65 (standard deviation 14) years, 122 (26.9%) were women, 398 (87.7%) had a witnessed arrest, 372 (81.9%) had an initial rhythm of ventricular tachycardia or ventricular fibrillation (VT/VF), and 76 (16.7%) had asystole or another arrhythmia. The median cerebral performance category at discharge (range 1–5, 1 = normal) was 1. Only 58 (12.8%) of the 454 patients received an ICD. Patients with an initial rhythm of VT/VF were more likely than those with an initial rhythm of asystole or another rhythm to undergo device insertion (adjusted odds ratio [OR] 9.63, 95% confidence interval [CI] 1.31–71.50). Similarly, patients with a normal cerebral performance score were more likely than those with abnormal scores to undergo ICD insertion (adjusted OR 12.52, 95% CI 1.74–92.12).

Interpretation

A minority of patients who survived cardiac arrest underwent ICD insertion. It is unclear whether this low usage rate reflects referral bias, selection bias by electrophysiologists, supply constraints or patient preference.

People who survive out-of-hospital cardiac arrest have an increased risk of recurrent arrest of 18%–20% in the first year.1,2 Three large randomized studies evaluated the use of implantable cardioverter defibrillators (ICDs) versus antiarrhythmic drugs in survivors of out-of-hospital cardiac arrest.3,4,5 The largest of the 3 studies involved 1016 patients and found a 39% relative risk reduction in mortality in the ICD group.3 The 2 smaller studies both reported nonsignificant reductions in mortality in the ICD group.4,5 Two recent meta-analyses showed that the use of ICDs was associated with significant and important increases in survival among cardiac arrest survivors: all-cause mortality was reduced by 23%–28% with their use for secondary prevention, and the rate of sudden cardiac death was reduced by 50% in both meta-analyses.6,7

Guidelines from several national and international societies recommend insertion of ICDs in all survivors of cardiac arrest without a reversible cause.8,9 Despite advances in ICD insertion and technology, studies to date suggest that the utilization rate is low, at least in some settings.10,11 Several factors, including patient preference, physician referral, availability and cost, may contribute to the underutilization of ICDs.

The Ontario Prehospital Advanced Life Support (OPALS) study12,13 is a multiphase before–after study designed to systematically evaluate the effectiveness of various prehospital interventions for people experiencing cardiac arrest, trauma or respiratory arrest. As an extension of the OPALS study, we sought to determine the rate of ICD insertion among survivors of cardiac arrest, as well as the factors associated with ICD implantation.
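The adjusted odds ratios above come from multivariable models fitted to the linked OPALS data, which are not reproduced here. As a minimal sketch of the underlying quantity, the following computes an unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval from a 2×2 table; the counts below are hypothetical, not the study's.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.

    Table layout (hypothetical counts, not the OPALS data):
        a = exposed, event        b = exposed, no event
        c = unexposed, event      d = unexposed, no event
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical example: 55 of 372 VT/VF patients vs. 3 of 76 others receive an ICD
or_, lo, hi = odds_ratio_ci(55, 372 - 55, 3, 76 - 3)
```

Wide intervals such as the 1.31–71.50 reported above typically reflect small cell counts, as this sketch illustrates.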

12.

Background

The use of elective cholecystectomy has increased dramatically following the widespread adoption of laparoscopic cholecystectomy. We sought to determine whether this increase has resulted in a reduction in the incidence of severe complications of gallstone disease.

Methods

We examined longitudinal trends in the population-based rates of severe gallstone disease from 1988 to 2000, using a quasi-experimental longitudinal design to assess the effects of the large increase in elective cholecystectomy rates after 1991 among people aged 18 years and older residing in Ontario. We also measured the rate of hospital admission because of acute diverticulitis, to control for secular trends in the use of hospital care for acute abdominal diseases.

Results

The adjusted annual rate of elective cholecystectomy per 100 000 population increased from 201.3 (95% confidence interval [CI] 197.0–205.8) in 1988–1990 to 260.8 (95% CI 257.1–264.5) in 1992–2000 (rate ratio [RR] 1.35, 95% CI 1.32–1.38, p < 0.001). An anomalously high number of elective cholecystectomies was performed in 1991. Overall, the annual rate of severe gallstone disease (acute cholecystitis, acute biliary pancreatitis and acute cholangitis) declined by 10% (RR 0.90, 95% CI 0.88–0.91) in 1992–2000 as compared with 1988–1991. This decline was entirely due to an 18% reduction in the rate of acute cholecystitis (RR 0.82, 95% CI 0.80–0.84).

Interpretation

The increase in the rate of elective cholecystectomy that occurred following the introduction of laparoscopic cholecystectomy in 1991 was associated with an overall reduction in the incidence of severe gallstone disease, a reduction entirely attributable to a decline in the incidence of acute cholecystitis.

After the widespread introduction of laparoscopic cholecystectomy in 1991, the rate of cholecystectomy in North America increased by 30% to 60%,1,2,3 primarily because of higher rates of elective operations.1 Although cholecystectomy is not ordinarily indicated in people with asymptomatic gallstones,4,5 the decision to perform the procedure is highly discretionary.6 It is unclear whether the increased rate of elective cholecystectomy is due to overuse of surgery among people with asymptomatic or minimally symptomatic gallstones, or whether more patients with clinically important gallbladder disease are willing to undergo cholecystectomy now that laparoscopic surgery is available.7,8

Most cholecystectomies are done in people with uncomplicated biliary colic, the most common presentation of symptomatic gallstones.8 Severe complications of gallbladder disease, such as acute cholecystitis, acute biliary pancreatitis and acute cholangitis, are potentially life-threatening conditions that require hospital care. Greater use of elective cholecystectomy in people at risk of severe gallstone complications should result in a lower incidence of such complications. We sought to determine whether the increase in the rate of elective cholecystectomy was associated with a reduction in the incidence of severe complications of gallbladder disease.
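The rate ratios above are adjusted, population-based estimates. A minimal sketch of the unadjusted version of the same quantity, a ratio of two Poisson rates with a log-scale 95% confidence interval, is shown below; the event counts and person-years are hypothetical, not the Ontario data.

```python
import math

def rate_ratio_ci(events1, py1, events2, py2, z=1.96):
    """Unadjusted rate ratio (rate2 / rate1) with a log-scale 95% CI.

    events*/py* are event counts and person-years in each period
    (hypothetical values below; not the Ontario data).
    """
    r1, r2 = events1 / py1, events2 / py2
    rr = r2 / r1
    se = math.sqrt(1 / events1 + 1 / events2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical: 2013 vs. 2608 procedures, each over 1 000 000 person-years
rr, lo, hi = rate_ratio_ci(2013, 1_000_000, 2608, 1_000_000)
```

With counts this large the interval is narrow, which is consistent with the tight CIs reported in the abstract.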

13.

Background

Clinical trials and meta-analyses have produced conflicting results regarding the efficacy of unconjugated pneumococcal polysaccharide vaccine in adults. We sought to evaluate the vaccine's efficacy on clinical outcomes as well as the methodologic quality of the trials.

Methods

We searched several databases and all bibliographies of reviews and meta-analyses for clinical trials that compared pneumococcal polysaccharide vaccine with a control. We examined rates of pneumonia and death, taking the methodologic quality of the trials into consideration.

Results

We included 22 trials involving 101 507 participants: 11 trials reported on presumptive pneumococcal pneumonia, 19 on all-cause pneumonia and 12 on all-cause mortality. The current 23-valent vaccine was used in 8 trials. The relative risk (RR) was 0.64 (95% confidence interval [CI] 0.43–0.96) for presumptive pneumococcal pneumonia and 0.73 (95% CI 0.56–0.94) for all-cause pneumonia. There was significant heterogeneity between the trials reporting on presumptive pneumonia (I² = 74%, p < 0.001) and between those reporting on all-cause pneumonia (I² = 90%, p < 0.001). The RR for all-cause mortality was 0.97 (95% CI 0.87–1.09), with moderate heterogeneity between trials (I² = 44%, p = 0.053). Trial quality, especially regarding double blinding, explained a substantial proportion of the heterogeneity in the trials reporting on presumptive pneumonia and all-cause pneumonia. There was little evidence of vaccine protection in trials of higher methodologic quality (RR 1.20, 95% CI 0.75–1.92, for presumptive pneumonia; and 1.19, 95% CI 0.95–1.49, for all-cause pneumonia in double-blind trials; p for heterogeneity > 0.05). The results for all-cause mortality in double-blind trials were similar to those in all trials combined. There was little evidence of vaccine protection among elderly patients or adults with chronic illness in analyses of all trials (RR 1.04, 95% CI 0.78–1.38, for presumptive pneumococcal pneumonia; 0.89, 95% CI 0.69–1.14, for all-cause pneumonia; and 1.00, 95% CI 0.87–1.14, for all-cause mortality).

Interpretation

Pneumococcal vaccination does not appear to be effective in preventing pneumonia, even in populations for whom the vaccine is currently recommended.

The burden of disease due to Streptococcus pneumoniae falls mainly on children, elderly people and people with underlying conditions such as HIV infection.1 Pneumococcal polysaccharide vaccines were developed more than 50 years ago and have progressed from 2-valent vaccines to the current 23-valent vaccine, which has been available since the early 1980s. The 23-valent vaccine includes serotypes accounting for 72%2 to 95%3 of invasive pneumococcal disease, depending on the geographic area. In many industrialized countries, pneumococcal vaccination is currently recommended for people aged 65 years and older and for individuals aged 2–64 who are at increased risk of pneumococcal disease.4–6

Meta-analyses of controlled clinical trials have produced conflicting results regarding the efficacy of unconjugated pneumococcal polysaccharide vaccine.7–22 The lack of consistency between results reported from observational studies and controlled trials is another reason why the efficacy of the vaccine remains controversial. Empirical studies have shown that inadequate quality of clinical trials can lead to biases that distort their results.23 For example, inadequate allocation concealment or failure to blind patients, caregivers or those assessing outcomes may exaggerate treatment effects.23 Despite this, none of the previous reviews formally compared effect sizes in trials of high methodologic quality with effect sizes in trials of lower quality. We conducted a systematic review and meta-analysis of clinical trials examining the efficacy of pneumococcal polysaccharide vaccination on clinical outcomes, taking the quality of trials into account.
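The I² values quoted in the Results summarize between-trial heterogeneity and can be derived from Cochran's Q statistic. A sketch using made-up inputs (the study-level log relative risks and variances below are illustrative, not the trial data):

```python
import math

def i_squared(log_rrs, variances):
    """Cochran's Q and the I^2 heterogeneity statistic for a set of
    study-level log relative risks with known variances (fixed-effect weights)."""
    w = [1 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Made-up log(RR)s and variances for 4 hypothetical trials
q, i2 = i_squared(
    [math.log(0.5), math.log(0.8), math.log(1.2), math.log(1.1)],
    [0.01, 0.02, 0.02, 0.03],
)
```

With effects this divergent, I² lands near 90%, the same order of heterogeneity the abstract reports for all-cause pneumonia.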

14.

Background

Whether to continue oral anticoagulant therapy beyond 6 months after an “unprovoked” venous thromboembolism is controversial. We sought to determine clinical predictors to identify patients who are at low risk of recurrent venous thromboembolism who could safely discontinue oral anticoagulants.

Methods

In a multicentre prospective cohort study, 646 participants with a first, unprovoked major venous thromboembolism were enrolled over a 4-year period. Of these, 600 participants completed a mean 18-month follow-up in September 2006. We collected data for 69 potential predictors of recurrent venous thromboembolism while patients were taking oral anticoagulation therapy (5–7 months after initiation). During follow-up after discontinuing oral anticoagulation therapy, all episodes of suspected recurrent venous thromboembolism were independently adjudicated. We performed a multivariable analysis of predictor variables (p < 0.10) with high interobserver reliability to derive a clinical decision rule.

Results

We identified 91 confirmed episodes of recurrent venous thromboembolism during follow-up after discontinuing oral anticoagulation therapy (annual risk 9.3%, 95% CI 7.7%–11.3%). Men had a 13.7% (95% CI 10.8%–17.0%) annual risk. There was no combination of clinical predictors that satisfied our criteria for identifying a low-risk subgroup of men. Fifty-two percent of women had 0 or 1 of the following characteristics: hyperpigmentation, edema or redness of either leg; D-dimer ≥ 250 μg/L while taking warfarin; body mass index ≥ 30 kg/m2; or age ≥ 65 years. These women had an annual risk of 1.6% (95% CI 0.3%–4.6%). Women who had 2 or more of these findings had an annual risk of 14.1% (95% CI 10.9%–17.3%).

Interpretation

Women with 0 or 1 risk factor may safely discontinue oral anticoagulant therapy after 6 months of therapy following a first unprovoked venous thromboembolism. This criterion does not apply to men. (http://Clinicaltrials.gov trial register number NCT00261014)

Venous thromboembolism is a common, potentially fatal, yet treatable, condition. The risk of a recurrent venous thromboembolic event after 3–6 months of oral anticoagulant therapy varies. Some groups of patients (e.g., those who had a venous thromboembolism after surgery) have a very low annual risk of recurrence (< 1%),1 and they can safely discontinue anticoagulant therapy.2 However, among patients with an unprovoked thromboembolism who discontinue anticoagulation therapy after 3–6 months, the risk of a recurrence in the first year is 5%–27%.3–6 In the second year, the risk is estimated to be 5%,3 and it is estimated to be 2%–3.8% for each subsequent year.5,7 The case-fatality rate for recurrent venous thromboembolism is between 5% and 13%.8,9 Oral anticoagulation therapy is very effective for reducing the risk of recurrence during therapy (> 90% relative risk [RR] reduction);3,4,10,11 however, this benefit is lost after therapy is discontinued.3,10,11 The risk of major bleeding with ongoing oral anticoagulation therapy among venous thromboembolism patients is 0.9%–3.0% per year,3,4,6,12 with an estimated case-fatality rate of 13%.13

Given that the long-term risk of fatal hemorrhage appears to balance the risk of fatal recurrent pulmonary embolism among patients with an unprovoked venous thromboembolism, clinicians are unsure whether continuing oral anticoagulation therapy beyond 6 months is necessary.2,14 Identifying subgroups of patients with an annual risk of less than 3% will help clinicians decide which patients can safely discontinue anticoagulant therapy. We sought to determine the clinical predictors, or combinations of predictors, that identify patients with an annual risk of venous thromboembolism of less than 3% after taking an oral anticoagulant for 5–7 months after a first unprovoked event.
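The annual risks above are recurrence rates per person-year of follow-up. The sketch below computes such a rate with an approximate log-scale 95% confidence interval, treating the event count as Poisson; the 91 events are from the abstract, but the person-year denominator is an assumption chosen to reproduce the 9.3% figure, and the published interval may have been computed by a different method.

```python
import math

def annual_risk_ci(events, person_years, z=1.96):
    """Event rate per person-year with an approximate (log-scale) 95% CI,
    treating the event count as Poisson."""
    rate = events / person_years
    se = 1 / math.sqrt(events)  # SE of log(rate)
    lo = rate * math.exp(-z * se)
    hi = rate * math.exp(z * se)
    return rate, lo, hi

# 91 recurrences; ~978 person-years is an assumed denominator (not reported above)
rate, lo, hi = annual_risk_ci(91, 978)
```

This reproduces an interval close to the published 7.7%–11.3%, illustrating how the CI width is driven almost entirely by the event count.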

15.

Background

Up to 50% of adverse events that occur in hospitals are preventable. Language barriers and disabilities that affect communication have been shown to decrease quality of care. We sought to assess whether communication problems are associated with an increased risk of preventable adverse events.

Methods

We randomly selected 20 general hospitals in the province of Quebec with at least 1500 annual admissions. Of the 145 672 admissions to the selected hospitals in 2000/01, we randomly selected and reviewed 2355 charts of patients aged 18 years or older. Reviewers abstracted patient characteristics, including communication problems, and details of hospital admission, and assessed the cause and preventability of identified adverse events. The primary outcome was adverse events.

Results

Of 217 adverse events, 63 (29%) were judged to be preventable, for an overall population rate of 2.7% (95% confidence interval [CI] 2.1%–3.4%). We found that patients with preventable adverse events were significantly more likely than those without such events to have a communication problem (odds ratio [OR] 3.00; 95% CI 1.43–6.27) or a psychiatric disorder (OR 2.35; 95% CI 1.09–5.05). Patients who were admitted urgently were significantly more likely than patients whose admissions were elective to experience an event (OR 1.64, 95% CI 1.07–2.52). Preventable adverse events were mainly due to drug errors (40%) or poor clinical management (32%). We found that patients with communication problems were more likely than patients without these problems to experience multiple preventable adverse events (46% v. 20%; p = 0.05).

Interpretation

Patients with communication problems appeared to be at highest risk for preventable adverse events. Interventions to reduce the risk for these patients need to be developed and evaluated.

Patient safety is a priority in modern health care systems. From 3% to 17% of hospital admissions result in an adverse event,1–8 and almost 50% of these events are considered to be preventable.3,9–12 An adverse event is an unintended injury or complication caused by delivery of clinical care rather than by the patient's condition. The occurrence of adverse events has been well documented; however, identifying modifiable risk factors that contribute to the occurrence of preventable adverse events is critical. Studies of preventable adverse events have focused on many factors, but researchers have only recently begun to evaluate the role of patient characteristics.2,9,12,13 Older patients and those with a greater number of health problems have been shown to be at increased risk for preventable adverse events.10,11 However, previous studies have repeatedly suggested the need to investigate more diverse, modifiable risk factors.3,6,7,10,11,14–16

Language barriers and disabilities that affect communication have been shown to decrease quality of care;16–20 however, their impact on preventable adverse events needs to be investigated. Patients with physical and sensory disabilities, such as deafness and blindness, have been shown to face considerable barriers when communicating with health care professionals.20–24 Communication disorders are estimated to affect 5%–10% of the general population,25 and in one study more than 15% of admissions to university hospitals involved patients with 1 or more disabilities severe enough to prevent almost any form of communication.26 In addition, patients with communication disabilities are already at increased risk for depression and other comorbidities.27–29 Determining whether they are at increased risk for preventable adverse events would permit risk stratification at the time of admission and targeted preventive strategies.

We sought to estimate the extent to which preventable adverse events that occurred in hospital could be predicted by conditions that affect a patient's ability to communicate.

16.

Background

A recent Cochrane meta-analysis did not confirm the benefits of fish and fish oil in the secondary prevention of cardiac death and myocardial infarction. We performed a meta-analysis of randomized controlled trials that examined the effect of fish-oil supplementation on ventricular fibrillation and ventricular tachycardia to determine the overall effect and to assess whether heterogeneity exists between trials.

Methods

We searched electronic databases (MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials and CINAHL) from inception to May 2007. We included randomized controlled trials of the effect of fish-oil supplementation on ventricular fibrillation or ventricular tachycardia in patients with implantable cardioverter defibrillators. The primary outcome was implantable cardioverter defibrillator discharge. We calculated the relative risk (RR) for outcomes at 1-year follow-up for each study. We used the DerSimonian and Laird random-effects method when there was significant heterogeneity between trials and the Mantel–Haenszel fixed-effects method when heterogeneity was negligible.

Results

We identified 3 trials of 1–2 years' duration. These trials included a total of 573 patients who received fish oil and 575 patients who received a control. Meta-analysis of data collected at 1 year showed no overall effect of fish oil on the relative risk of implantable cardioverter defibrillator discharge. There was significant heterogeneity between trials. The second-largest study showed a significant benefit of fish oil (relative risk [RR] 0.74, 95% confidence interval [CI] 0.56–0.98). The smallest study showed a trend toward harm at 1 year (RR 1.23, 95% CI 0.92–1.65) and significantly worse outcomes at 2 years among patients with ventricular tachycardia at study entry (log-rank p = 0.007).

Conclusion

These data indicate that there is heterogeneity in the response of patients to fish-oil supplementation. Caution should be used when prescribing fish-oil supplementation for patients with ventricular tachycardia.

There is a public perception that fish and fish oil can be recommended uniformly for the prevention of coronary artery disease.1–3 However, the scientific evidence is divided,4,5 and official agencies have called for more research.6

It is estimated that 0.5% of patients with coronary heart disease, 1% of patients with diabetes or hypertension and 2% of the general population at low risk of coronary heart disease take fish-oil supplements.7 In 2004, the price of fish oils overtook that of vegetable oils, and in 2006 the price rose to US$750 per ton.8 The value of fish oil as a nutraceutical in the European market was US$194 million in 2004, and it is anticipated that the price will continue to rise as availability declines.8 Canada is both a consumer and an exporter of fish oil, and it exported 15 000 tons in 2006.9

The scientific debate over the clinical value of fish oil is highlighted by a recent Cochrane review, which concluded that long-chain omega-3 fatty acids (eicosapentaenoic acid and docosahexaenoic acid) had no clear effect on total mortality, combined cardiovascular events or cancer.4 Furthermore, another recent meta-analysis10 showed a significant positive association between fish-oil consumption and prevention of restenosis after coronary angioplasty only in a select subgroup, after excluding key negative papers.11 Finally, the antiarrhythmic effect, which is proposed to be the principal mechanism of the benefit of fish oils in cardiovascular disease, has not been demonstrated clearly in clinical trials.12–14

We therefore performed a meta-analysis of randomized controlled trials that examined the effect of fish-oil supplementation in patients with implantable cardioverter defibrillators, who are at risk of ventricular arrhythmia, to determine the overall effect of fish oils. We also sought to investigate whether there was significant heterogeneity between trials.
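The Methods above name the DerSimonian and Laird random-effects approach for pooling heterogeneous trials. A compact sketch of that estimator, applied to three hypothetical trials (the point estimates echo RRs quoted above, but the variances are assumptions, so the output is illustrative only):

```python
import math

def dersimonian_laird(log_rrs, variances):
    """DerSimonian-Laird random-effects pooled relative risk.

    Estimates between-trial variance tau^2 from Cochran's Q, then re-weights
    each trial by 1/(v_i + tau^2). Inputs below are illustrative, not trial data.
    """
    w = [1 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # method-of-moments between-trial variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    se = 1 / math.sqrt(sum(w_star))
    return math.exp(pooled), tau2, se

# Three hypothetical trials: RRs 0.74, 1.23, 1.05 with assumed variances
rr, tau2, se = dersimonian_laird(
    [math.log(0.74), math.log(1.23), math.log(1.05)],
    [0.02, 0.02, 0.05],
)
```

With conflicting trial results, tau² comes out well above zero and the pooled RR sits near 1, mirroring the "no overall effect with significant heterogeneity" conclusion above.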

17.

Background

Although the Canadian health care system was designed to ensure equal access, inequities persist. It is not known if inequities exist for receipt of investigations used to screen for colorectal cancer (CRC). We examined the association between socioeconomic status and receipt of colorectal investigation in Ontario.

Methods

People aged 50 to 70 years living in Ontario on Jan. 1, 1997, who did not have a history of CRC, inflammatory bowel disease or colorectal investigation within the previous 5 years were followed until death or Dec. 31, 2001. Receipt of any colorectal investigation between 1997 and 2001 inclusive was determined by means of linked administrative databases. Income was imputed as the mean household income of the person's census enumeration area. Multivariate analysis was performed to evaluate the relationship between receipt of any colorectal investigation and income.

Results

Of the study cohort of 1 664 188 people, 21.2% received a colorectal investigation in 1997–2001. Multivariate analysis demonstrated a significant association between receipt of any colorectal investigation and income (p < 0.001); people in the highest-income quintile had higher odds of receiving any colorectal investigation (adjusted odds ratio [OR] 1.38; 95% confidence interval [CI] 1.36–1.40) and of receiving colonoscopy (adjusted OR 1.50; 95% CI 1.48–1.53).

Interpretation

Socioeconomic status is associated with receipt of colorectal investigations in Ontario. Only one-fifth of people in the screening-eligible age group received any colorectal investigation. Further work is needed to determine the reason for this low rate and to explore whether it affects CRC mortality.

Colorectal cancer (CRC) is the most common cause of cancer-related death among nonsmokers in North America. In 2004 an estimated 19 200 Canadians will receive a diagnosis of CRC and 8400 will die from the disease.1 Although the age-standardized incidence and mortality of CRC have been decreasing, the number of new cases is increasing because of the growing size of the elderly population.

CRC screening reduces the incidence and disease-specific mortality,2,3,4,5,6 is cost-effective7,8 and is endorsed by many professional societies.9,10,11,12,13,14,15 In 1994 the Canadian Task Force on the Periodic Health Examination (now the Canadian Task Force on Preventive Health Care) concluded that there was insufficient evidence to support CRC screening in asymptomatic people over the age of 40 years.16 In the 2001 update of these guidelines,9 fecal occult blood testing (FOBT) every 1 or 2 years or flexible sigmoidoscopy every 5 years was recommended for screening average-risk people 50 years of age or older; there was judged to be insufficient evidence to support colonoscopy as the initial screening test. Despite these endorsements, the use of CRC screening remains suboptimal.17,18,19

The Canadian health care system covers all medically necessary services without user fees. Although equity has been achieved in certain areas,20,21 low socioeconomic status (SES) is associated with lower rates of use of cardiovascular procedures22,23 and screening tests for breast and cervical cancer.24,25,26 It is unknown whether SES affects the receipt of CRC screening investigations. This study assessed the association of neighbourhood income (a marker of SES) with the receipt of colorectal investigations in people eligible for screening who lived in Ontario.

18.

Background

People aged 65 years or more represent a growing group of emergency department users. We investigated whether characteristics of primary care (accessibility and continuity) are associated with emergency department use by elderly people in both urban and rural areas.

Methods

We conducted a cross-sectional study using information for a random sample of 95 173 people aged 65 years or more drawn from provincial administrative databases in Quebec for 2000 and 2001. We obtained data on the patients' age, sex, comorbidity, rate of emergency department use (number of days on which a visit was made to an emergency department per 1000 days at risk [i.e., alive and not in hospital] during the 2-year study period), use of hospital and ambulatory physician services, residence (urban v. rural), socioeconomic status, access to primary care (physician:population ratio, presence of a primary physician) and continuity of primary care.

Results

After adjusting for age, sex and comorbidity, we found that an increased rate of emergency department use was associated with lack of a primary physician (adjusted rate ratio [RR] 1.45, 95% confidence interval [CI] 1.41–1.49) and low or medium (v. high) levels of continuity of care with a primary physician (adjusted RR 1.46, 95% CI 1.44–1.48, and 1.27, 95% CI 1.25–1.29, respectively). Other significant predictors of increased use of emergency department services were residence in a rural area, low socioeconomic status and residence in a region with a higher physician:population ratio. Among the patients who had a primary physician, continuity of care had a stronger protective effect in urban than in rural areas.

Interpretation

Having a primary physician and greater continuity of care with this physician are associated with decreased emergency department use by elderly people, particularly those living in urban areas.

Canada is reforming its health care system, with primary care as a major focus.1 The population of Canadians aged 65 years or older is expected to double by 2026,2 and this group already accounts for the largest share of total health care expenditures.3 Thus, it is important to evaluate primary care services in this population. Because the emergency department often acts as a safety net for patients receiving inadequate primary care,4 emergency department use may be an important indicator of the adequacy of primary care services.

The main determinants of emergency department use by elderly people are the severity and the nature of the medical needs of the patient (overall and specific comorbidities).5 After adjustment for need, increased access to and continuity of primary care may also be associated with lower emergency department use.5 However, most studies that investigated the impact of access and continuity of primary care were carried out in the United States, where the health care system is fundamentally different from Canada's.5–8 Furthermore, most of these studies used self-reported measures of access and continuity of primary care.5,7,9

We sought to identify determinants of emergency department use in a population-based sample of elderly people in Quebec, with particular focus on measures of access to and continuity of primary care. Access was defined by 2 measures: (a) presence of a primary physician and (b) physician:population ratio. Relational continuity was defined as the proportion of primary care visits made to the primary physician.10,11 Finally, because primary care services in Quebec are organized differently in urban and rural areas,12 we also compared the association between emergency department use and continuity of care in urban and rural areas.

19.

Background

Tools for early identification of workers with back pain who are at high risk of adverse occupational outcome would help concentrate clinical attention on the patients who need it most, while helping reduce unnecessary interventions (and costs) among the others. This study was conducted to develop and validate clinical rules to predict the 2-year work disability status of people consulting for nonspecific back pain in primary care settings.

Methods

This was a 2-year prospective cohort study conducted in 7 primary care settings in the Quebec City area. The study enrolled 1007 workers (68.4% of potentially eligible participants) aged 18–64 years who consulted for nonspecific back pain associated with at least 1 day's absence from work. The majority (86%) completed 5 telephone interviews documenting a large array of variables. Clinical information was abstracted from the medical files. The outcome measure was “return to work in good health” at 2 years, a variable that combined patients' occupational status, functional limitations and recurrences of work absence. Predictive models of the 2-year outcome were developed with a recursive partitioning approach on a 40% random sample of the study subjects, then validated on the rest.

Results

The best predictive model included 7 baseline variables (the patient's recovery expectations, radiating pain, previous back surgery, pain intensity, frequent change of position because of back pain, irritability and bad temper, and difficulty sleeping) and was particularly efficient at identifying patients with no adverse occupational outcome (negative predictive value 78%–94%).

Interpretation

A clinical prediction rule accurately identified a large proportion of workers with back pain consulting in a primary care setting who were at low risk of an adverse occupational outcome.

Since the 1950s, back pain has taken on the proportions of a veritable epidemic, now counting among the 5 most frequent reasons for visits to physicians' offices in North America1,2,3 and ranking sixth among health problems generating the highest direct medical costs.4 Because of its high incidence and associated expense, effective intervention for back pain has great potential for improving population health and for freeing up extensive societal resources.

So-called red flags to identify pain that is specific (i.e., pain in the back originating from tumours, fractures, infections, cauda equina syndrome, visceral pain and systemic disease)5 account for about 3% of all cases of back pain.6 The overwhelming majority of back-pain problems are thus nonspecific. One important feature of nonspecific back pain among workers is that a small proportion of cases (< 10%) accounts for most of the costs (> 70%).7,8,9,10,11,12,13,14 This fact has led investigators to focus on the early identification of patients who are at higher risk of disability, so that specialized interventions can be provided earlier, whereas other patients can be expected to recover with conservative care.9,15,16,17,18,19,20,21,22,23,24,25 Although this goal has become much sought after in back-pain research, most available studies in this area have 3 methodological problems:
  • Potential predictors are often limited to administrative or clinical data, whereas it is clear that back pain is a multidimensional health problem.
  • The outcome variable is most often a dichotomous measure of return to work taken at a single time point, or a measure of time off work or duration of compensation, although some authors have warned against the use of first return to work as a measure of recovery. Baldwin and colleagues,26 for instance, point out that first return to work is frequently followed by recurrences of work absence.
  • Most published prediction rules developed for back pain have not been successfully validated on any additional samples of patients.
Our study aimed to build a simple predictive tool that primary care physicians could use to identify workers with nonspecific back pain who are at higher risk of long-term adverse occupational outcomes, and then to validate this tool on an independent sample of subjects.
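The negative predictive value reported above (78%–94%) is the proportion of patients the rule classifies as low risk who truly have no adverse outcome. A minimal sketch of that computation, using hypothetical confusion-matrix counts rather than the study's data:

```python
def negative_predictive_value(tn, fn):
    """NPV = TN / (TN + FN): the proportion of patients predicted to have
    no adverse outcome who indeed had none."""
    return tn / (tn + fn)

# Hypothetical counts for illustration only (not study data): among 100
# workers the rule flagged as low risk, 85 truly recovered (true negatives)
# and 15 did not (false negatives).
npv = negative_predictive_value(tn=85, fn=15)
print(f"NPV = {npv:.0%}")  # → NPV = 85%
```

A high NPV is the useful property here: it lets a physician reassure the low-risk group with conservative care while reserving specialized intervention for the remainder.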

20.

Background

Vitamin D is required for normal bone growth and mineralization. We sought to determine whether vitamin D deficiency at birth is associated with bone mineral content (BMC) of Canadian infants.

Methods

We measured plasma 25-hydroxyvitamin D [25(OH)D] as an indicator of vitamin D status in 50 healthy mothers and their newborn term infants. In the infants, anthropometry and lumbar, femur and whole-body BMC were measured within 15 days of delivery. Mothers completed a 24-hour recall and 3-day food and supplement record. We categorized the vitamin D status of mothers and infants as deficient or adequate and then compared infant bone mass in these groups using nonpaired t tests. Maternal and infant variables known to be related to bone mass were tested for their relation to BMC using backward stepwise regression analysis.

Results

Twenty-three (46%) of the mothers and 18 (36%) of the infants had a plasma 25(OH)D concentration consistent with deficiency. Infants who were vitamin D deficient were larger at birth and follow-up. Absolute lumbar spine, femur and whole-body BMC were not different between infants with adequate vitamin D and those who were deficient, despite larger body size in the latter group. In the regression analysis, higher whole-body BMC was associated with greater gestational age and weight at birth as well as higher infant plasma 25(OH)D.

Conclusion

A high rate of vitamin D deficiency was observed among women and their newborn infants. Among infants, vitamin D deficiency was associated with greater weight and length but lower bone mass relative to body weight. Whether a return to normal vitamin D status, achieved through supplements or fortified infant formula, can reset the trajectory for acquisition of BMC requires investigation.

In northern countries, endogenous synthesis of vitamin D is thought to be limited to the months of April through September.1 During the winter months, dietary or supplemental vitamin D intake at values similar to the recommended intake of 200 IU/day (5 μg/day) is not enough to prevent vitamin D deficiency in young women.2 Vitamin D deficiency is well documented among Canadian women3,4,5,6,7 and young children4,8,9,10,11 and has been reported at levels as high as 76% of women and 43% of children (3–24 months) in northern Manitoba4 and 48.4%–88.6% of Aboriginal women and 15.1%–63.5% of non-Aboriginal women in the Inuvik zone of the former Northwest Territories.3 Vitamin D-dependent rickets in children and osteomalacia in adults are the most commonly reported features of deficiency.12 We sought to determine whether maternal or infant vitamin D deficiency at birth is associated with the BMC of Canadian infants.
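The Methods above describe categorizing mothers and infants as deficient or adequate before comparing infant BMC between groups with nonpaired t tests. A minimal sketch of that grouping step, using a hypothetical 25(OH)D cutoff and made-up values (the abstract states neither the exact threshold nor individual measurements):

```python
# Hypothetical deficiency cutoff for illustration; the abstract does not
# state the exact plasma 25(OH)D threshold used in the study.
DEFICIENCY_CUTOFF_NMOL_L = 27.5

def vitamin_d_status(plasma_25ohd_nmol_l):
    """Classify a plasma 25(OH)D concentration as 'deficient' or 'adequate'."""
    if plasma_25ohd_nmol_l < DEFICIENCY_CUTOFF_NMOL_L:
        return "deficient"
    return "adequate"

# Made-up measurements (nmol/L) standing in for the newborn cohort:
samples = {"infant01": 22.0, "infant02": 48.5, "infant03": 25.3}
groups = {"deficient": [], "adequate": []}
for infant_id, level in samples.items():
    groups[vitamin_d_status(level)].append(infant_id)
print(groups)  # → {'deficient': ['infant01', 'infant03'], 'adequate': ['infant02']}
```

Once the two groups are formed, each BMC measure (lumbar spine, femur, whole body) can be compared between them with an unpaired t test, as the study reports.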
