Similar articles
20 similar articles found
1.

Background

Tools for early identification of workers with back pain who are at high risk of adverse occupational outcome would help concentrate clinical attention on the patients who need it most, while helping reduce unnecessary interventions (and costs) among the others. This study was conducted to develop and validate clinical rules to predict the 2-year work disability status of people consulting for nonspecific back pain in primary care settings.

Methods

This was a 2-year prospective cohort study conducted in 7 primary care settings in the Quebec City area. The study enrolled 1007 workers (68.4% of potentially eligible participants) aged 18–64 years who consulted for nonspecific back pain associated with at least 1 day's absence from work. The majority (86%) completed 5 telephone interviews documenting a large array of variables. Clinical information was abstracted from the medical files. The outcome measure was “return to work in good health” at 2 years, a variable that combined patients' occupational status, functional limitations and recurrences of work absence. Predictive models of the 2-year outcome were developed with a recursive partitioning approach on a 40% random sample of the study subjects, then validated on the rest.

Results

The best predictive model included 7 baseline variables (patient's recovery expectations, radiating pain, previous back surgery, pain intensity, frequent change of position because of back pain, irritability and bad temper, and difficulty sleeping) and was particularly efficient at identifying patients with no adverse occupational outcome (negative predictive value 78%–94%).

Interpretation

A clinical prediction rule accurately identified a large proportion of workers with back pain consulting in a primary care setting who were at low risk of an adverse occupational outcome.

Since the 1950s, back pain has taken on the proportions of a veritable epidemic: it now counts among the 5 most frequent reasons for visits to physicians' offices in North America1,2,3 and ranks sixth among the health problems generating the highest direct medical costs.4 Because of its high incidence and associated expense, effective intervention for back pain has great potential for improving population health and for freeing up extensive societal resources.

So-called red flags to identify pain that is specific (i.e., pain in the back originating from tumours, fractures, infections, cauda equina syndrome, visceral pain and systemic disease)5 account for about 3% of all cases of back pain.6 The overwhelming majority of back-pain problems are thus nonspecific. One important feature of nonspecific back pain among workers is that a small proportion of cases (< 10%) accounts for most of the costs (> 70%).7,8,9,10,11,12,13,14 This fact has led investigators to focus on the early identification of patients who are at higher risk of disability, so that specialized interventions can be provided earlier, whereas other patients can be expected to recover with conservative care.9,15,16,17,18,19,20,21,22,23,24,25 Although this goal has become much sought after in back-pain research, most available studies in this area share 3 methodological problems:
  • Potential predictors are often limited to administrative or clinical data, whereas it is clear that back pain is a multidimensional health problem.
  • The outcome variable is most often a dichotomous measure of return to work, time off work or duration of compensation taken at a single point in time, although some authors have warned against the use of first return to work as a measure of recovery. Baldwin and colleagues,26 for instance, point out that first return to work is frequently followed by recurrences of work absence.
  • Most published prediction rules developed for back pain have not been successfully validated on any additional samples of patients.
Our study aimed to build a simple predictive tool that could be used by primary care physicians to identify workers with nonspecific back pain who are at higher risk of long-term adverse occupational outcomes, and then to validate this tool on a fresh sample of subjects.
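The rule's clinical value rests on its negative predictive value, which is simple arithmetic on a confusion matrix. A minimal Python sketch; the counts below are invented for illustration and are not taken from the study:

```python
def negative_predictive_value(tn: int, fn: int) -> float:
    """NPV = TN / (TN + FN): among workers the rule classifies as low risk,
    the fraction who truly have no adverse occupational outcome."""
    return tn / (tn + fn)

# Invented counts for illustration only (not from the study):
# of 105 workers classified as low risk, 90 truly returned to work in good health.
npv = negative_predictive_value(tn=90, fn=15)
print(round(npv, 3))  # 0.857
```

With these invented counts the NPV is about 86%, which happens to fall inside the 78%–94% range the authors report.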

2.

Background

The number of births attended by individual family physicians who practice intrapartum care varies. We wanted to determine if the practice–volume relations that have been shown in other fields of medical practice also exist in maternity care practice by family doctors.

Methods

For the period April 1997 to August 1998, we analyzed all singleton births at a major maternity teaching hospital for which the family physician was the responsible physician. Physicians were grouped into 3 categories on the basis of the number of births they attended each year: fewer than 12, 12 to 24, and 25 or more. Physicians with a low volume of deliveries (72 physicians, 549 births), those with a medium volume of deliveries (34 physicians, 871 births) and those with a high volume of deliveries (46 physicians, 3024 births) were compared in terms of maternal and newborn outcomes. The main outcome measures were maternal morbidity, 5-minute Apgar score and admission of the baby to the neonatal intensive care unit or special care unit. Secondary outcomes were obstetric procedures and consultation patterns.

Results

There was no difference among the 3 volume cohorts in terms of rates of maternal complications of delivery, 5-minute Apgar scores of less than 7 or admissions to the neonatal intensive care unit or the special care unit, either before or after adjustment for parity, pregnancy-induced hypertension, diabetes, ethnicity, lone parent status, maternal age, gestational age, newborn birth weight and newborn head circumference at birth. High- and medium-volume family physicians consulted with obstetricians less often than low-volume family physicians (adjusted odds ratio [OR] 0.586 [95% confidence interval, CI, 0.479–0.718] and 0.739 [95% CI 0.583–0.935] respectively). High- and medium-volume family physicians transferred the delivery to an obstetrician less often than low-volume family physicians (adjusted OR 0.668 [95% CI 0.542–0.823] and 0.776 [95% CI 0.607–0.992] respectively). Inductions were performed by medium-volume family physicians more often than by low-volume family physicians (adjusted OR 1.437 [95% CI 1.036–1.992]).

Interpretation

Family physicians' delivery volumes were not associated with adverse outcomes for mothers or newborns. Low-volume family physicians referred patients and transferred deliveries to obstetricians more frequently than high- or medium-volume family physicians. Further research is needed to validate these findings in smaller facilities, both urban and rural.

More than 20 years ago, Luft and associates1 conducted one of the earliest volume–outcome studies. Since then, many studies addressing the relation between volume of procedures and patient outcomes have been published.2,3 In some of these studies, either hospital size or physician procedural volume was used as a surrogate for physician expertise. Among studies analyzing hospital volumes and outcomes, better outcomes have been associated with higher patient volumes in some instances4,5,6,7 but not in others.3,8,9 Some studies of individual provider volume have shown a positive relation between volume and outcomes,10,11 whereas others have shown no relation or inconsistent results.3,12 Finally, a few studies analyzing both hospital volume and provider volume have reported a positive volume–outcome relation.13,14

Criticisms levelled at the methods used in volume–outcome studies have addressed the lack of adjustment for case mix, different cutoff points for volume categories and retrospective design.3 Other factors that affect patient outcomes but that have not been included in previous volume analyses include health maintenance organization status, physician certification and years since graduation, and patient socioeconomic status, age and ethnicity. Furthermore, most of the studies on volume have covered surgical or oncology specialties.

The few studies that have been done on volume and outcome in maternity care have shown variable effects. Rural health care is often associated with lower volumes of obstetric procedures.
However, no differences in maternal or newborn outcomes have been shown in some comparisons of births in urban and rural locations.15,16,17,18 Other studies have shown poorer maternal and newborn outcomes in low-volume hospitals, neonatal intensive care units (NICUs) and rural locations.19,20,21,22 Conversely, higher volume (hospitals with more than 1000 deliveries per year) has been associated with more maternal lacerations or complications.23

When the health care provider has been the unit of analysis, a relation between volume and maternal or newborn outcome has been demonstrated in at least one study24 but not in others.25,26 Low volume has been defined as 20 to 24 deliveries per year.24,26 Hass and colleagues24 reported an adjusted odds ratio (OR) of 1.4 for low birth weight for infants delivered by low-volume non-board-certified physicians relative to high-volume non-board-certified physicians; the adjusted OR was 1.56 for low-volume board-certified physicians relative to high-volume board-certified physicians (98.7% of whom were obstetricians).

Possible explanations for the differences among studies include differences in health care delivery systems, insurance coverage, experience and training of providers, maternal risk factors, triage or transfer of high-risk cases, choice of outcome measures, and changes over time in access to care, quality assurance and standard of living.

Relations have been reported between maternal or newborn outcomes and smoking, maternal history of low birth weight (for previous pregnancies), pregnancy-induced hypertension, diabetes, prepregnancy weight, gestational weight gain, maternal height and age, multiple gestation, previous vaginal birth after cesarean section, history of previous delivery problems, parity, large-for-date fetus, ethnicity and fetal sex.25,27,28,29 Few studies of the relation between volume of births and obstetric outcome have been able to control for these potentially confounding variables and adjust for maternal risk factors.

Our database of detailed accounts of births in one hospital setting allowed us to examine this issue more rigorously. We posed 2 research questions: Is there a relation between the volume of deliveries attended by individual family physicians and maternal and newborn outcomes? If there are differences in outcomes, are they related to different physician practice styles and consultation patterns?
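The adjusted odds ratios above come from multivariable models, but the crude calculation behind an odds ratio and its confidence interval fits in a few lines. A sketch using the Woolf (log-normal) interval; the 2×2 counts below are hypothetical and are not the study's data:

```python
import math


def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Unadjusted odds ratio with a Woolf 95% CI from a 2x2 table:
        a = exposed, outcome present     b = exposed, outcome absent
        c = unexposed, outcome present   d = unexposed, outcome absent
    The CI is computed on the log scale, then exponentiated."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, or_ * math.exp(-z * se_log), or_ * math.exp(z * se_log)


# Hypothetical counts: obstetric consultations among deliveries by
# high-volume vs. low-volume physicians (illustration only).
or_, lo, hi = odds_ratio_ci(a=120, b=480, c=150, d=300)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An interval that excludes 1 (as here) would indicate a statistically significant association at the 5% level; the study's reported ORs were additionally adjusted for covariates.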

3.

Background

Although repeat induced abortion is common, data concerning characteristics of women undergoing this procedure are lacking. We conducted this study to identify the characteristics, including history of physical abuse by a male partner and history of sexual abuse, of women who present for repeat induced abortion.

Methods

We surveyed a consecutive series of women presenting for initial or repeat pregnancy termination to a regional provider of abortion services for a wide geographic area in southwestern Ontario between August 1998 and May 1999. Self-reported demographic characteristics, attitudes and practices regarding contraception, history of relationship violence, history of sexual abuse or coercion, and related variables were assessed as potential correlates of repeat induced abortion. We used χ2 tests for linear trend to examine characteristics of women undergoing a first, second, or third or subsequent abortion. We analyzed significant correlates of repeat abortion using stepwise multivariate multinomial logistic regression to identify factors uniquely associated with repeat abortion.

Results

Of the 1221 women approached, 1145 (93.8%) consented to participate. Data regarding first versus repeat abortion were available for 1127 women. A total of 68.2%, 23.1% and 8.7% of the women were seeking a first, second, or third or subsequent abortion respectively. Adjusted odds ratios for undergoing repeat versus a first abortion increased significantly with increased age (second abortion: 1.08, 95% confidence interval [CI] 1.04–1.09; third or subsequent abortion: 1.11, 95% CI 1.07–1.15), oral contraceptive use at the time of conception (second abortion: 2.17, 95% CI 1.52–3.09; third or subsequent abortion: 2.60, 95% CI 1.51–4.46), history of physical abuse by a male partner (second abortion: 2.04, 95% CI 1.39–3.01; third or subsequent abortion: 2.78, 95% CI 1.62–4.79), history of sexual abuse or violence (second abortion: 1.58, 95% CI 1.11–2.25; third or subsequent abortion: 2.53, 95% CI 1.50–4.28), history of sexually transmitted disease (second abortion: 1.50, 95% CI 0.98–2.29; third or subsequent abortion: 2.26, 95% CI 1.28–4.02) and being born outside Canada (second abortion: 1.83, 95% CI 1.19–2.79; third or subsequent abortion: 1.75, 95% CI 0.90–3.41).

Interpretation

Among other factors, a history of physical or sexual abuse was associated with repeat induced abortion. Presentation for repeat abortion may be an important indication to screen for a current or past history of relationship violence and sexual abuse.

Repeat pregnancy termination procedures are common in Canada (where 35.5% of all induced abortions are repeat procedures)1,2 and the United States (where 48% of induced abortions are repeat procedures).3,4,5,6,7 Rates of repeat induced abortion increased in both countries for an initial period after abortion was legalized, as a result of an increase in the number of women who had access to a first, and consequently to repeat, legal induced abortion.1,6,8,9 At present, rates of initial and repeat abortion in Canada and the United States appear to be stabilizing.2,7

Research concerning characteristics of women who undergo repeat induced abortions has been limited in scope. In a literature search we identified fewer than 20 studies in this area published over the past 3 decades. However, available research has shown several consistent findings. Women undergoing repeat abortions are more likely than those undergoing a first abortion to report using a method of contraception at the time of conception.7,8,10,11 In addition, women seeking repeat abortions report more challenging family situations than women seeking initial abortions: they are more likely to be separated, divorced, widowed or living in a common-law marriage, and to report difficulties with their male partner.1,5,8,11,12 They also are older,7,13 have more children1,5,13 and are more often non-white7,11,13 than women seeking initial abortions.

There is little evidence to suggest that women seeking repeat abortion are using pregnancy termination as a method of birth control.1,5,6,8,11 Evidence also does not indicate that women seeking repeat abortion are psychologically maladjusted.8,13

Our literature review showed that many studies of repeat abortion are 20 to 30 years old and are based on data collected when abortion was a newly legalized procedure.5,11 Furthermore, in studies of correlates of repeat abortion the investigators did not examine a range of personality characteristics that are known to influence women's reproductive health outcomes,14,15 including attitudes about sexuality,14 health locus of control,16,17 degree of social integration,16 attitudes about contraception18,19 and history of sexual or physical abuse.20,21,22 The objective of the current study was to identify characteristics of women who undergo repeat induced abortion.
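The "χ2 tests for linear trend" named in the Methods are the Cochran–Armitage test, which reduces to a short closed-form computation over ordered groups. A sketch under the standard formulation; the counts below are invented and are not the study's data:

```python
import math


def cochran_armitage(cases, totals, scores=None):
    """Chi-square statistic (1 df) for linear trend in proportions across
    ordered groups (Cochran-Armitage). cases[i]/totals[i] is the proportion
    in group i; scores default to 0, 1, 2, ... for the ordered groups."""
    if scores is None:
        scores = list(range(len(cases)))
    n = sum(totals)
    p_bar = sum(cases) / n                      # overall proportion
    # Trend statistic: score-weighted deviations from expected counts.
    t = sum(s * (c - nt * p_bar) for s, c, nt in zip(scores, cases, totals))
    s1 = sum(s * nt for s, nt in zip(scores, totals))
    s2 = sum(s * s * nt for s, nt in zip(scores, totals))
    var = p_bar * (1 - p_bar) * (s2 - s1 * s1 / n)
    return t * t / var  # compare with the chi-square(1 df) critical value


# Invented data: proportion reporting abuse rising across the first,
# second, and third-or-subsequent abortion groups.
chi2 = cochran_armitage(cases=[80, 45, 25], totals=[700, 250, 90])
```

A statistic above 3.84 rejects the null hypothesis of no linear trend at the 5% level.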

4.

Background

Despite the increase in the number of Aboriginal people with end-stage renal disease around the world, little is known about their health outcomes when undergoing renal replacement therapy. We evaluated differences in survival and rate of renal transplantation among Aboriginal and white patients after initiation of dialysis.

Methods

Adult patients who were Aboriginal or white and who commenced dialysis in Alberta, Saskatchewan or Manitoba between Jan. 1, 1990, and Dec. 31, 2000, were recruited for the study and were followed until death, transplantation, loss to follow-up or the end of the study (Dec. 31, 2001). We used Cox proportional hazards models to examine the effect of race on patient survival and likelihood of transplant, with adjustment for potential confounders.

Results

Of the 4333 adults who commenced dialysis during the study period, 15.8% were Aboriginal and 72.4% were white. Unadjusted rates of death per 1000 patient-years during the study period were 158 (95% confidence interval [CI] 144–176) for Aboriginal patients and 146 (95% CI 139–153) for white patients. When follow-up was censored at the time of transplantation, the age-adjusted risk of death after initiation of dialysis was significantly higher among Aboriginal patients than among white patients (hazard ratio [HR] 1.15, 95% CI 1.02–1.30). The greater risk of death associated with Aboriginal race was no longer observed after adjustment for diabetes mellitus and other comorbid conditions (adjusted HR 0.89, 95% CI 0.77–1.02) and did not appear to be associated with socioeconomic status. During the study period, unadjusted transplantation rates per 1000 patient-years were 62 (95% CI 52–75) for Aboriginal patients and 133 (95% CI 125–142) for white patients. Aboriginal patients were significantly less likely to receive a renal transplant after commencing dialysis, even after adjustment for potential confounders (HR 0.43, 95% CI 0.35–0.53). In an additional analysis that included follow-up after transplantation for those who received renal allografts, the age-adjusted risk of death associated with Aboriginal race (HR 1.36, 95% CI 1.21–1.52) was higher than when follow-up after transplantation was not considered, perhaps because of the lower rate of transplantation among Aboriginals.

Interpretation

Survival among dialysis patients was similar for Aboriginal and white patients after adjustment for comorbidity. However, despite universal access to health care, Aboriginal people had a significantly lower rate of renal transplantation, which might have adversely affected their survival when receiving renal replacement therapy.

In North America and the Antipodes, the incidence of diabetes among adolescent and adult Aboriginals has risen dramatically,1,2,3,4 with corresponding increases in the prevalence of diabetic nephropathy.5,6,7 Aboriginal people in Canada have experienced disproportionately high incidence rates of end-stage renal disease (ESRD), with an 8-fold increase in the number of prevalent dialysis patients between 1980 and 2000.8 Although the incidence of ESRD appears to have decreased in recent years, the prevalence of diabetes mellitus and its complications is rising, especially among young people.9,10,11

Most work evaluating health outcomes among Aboriginal people considers either the general population12 or diseases for which interventions are implemented over a short period, such as alcohol abuse,13 injury14 or critical illness.15 Death and markers of poor health are significantly more common among Aboriginal people than among North Americans of European ancestry, perhaps because of the greater prevalence of diabetes mellitus, adverse health effects due to lower socioeconomic status16 and reduced access to primary care.17 Aboriginal patients may also face unique barriers to care, including mistrust of non-Aboriginal providers, institutional discrimination or preference for traditional remedies.18 These factors may be most relevant when contact with physicians is infrequent, which obstructs development of a therapeutic relationship. In contrast, ESRD is a chronic illness that requires ongoing care from a relatively small, stable multidisciplinary team.

Although recent evidence highlights racial inequalities in morbidity and mortality among North Americans with ESRD, most studies have focused on black or Hispanic populations.19 We conducted this study to evaluate rates of death and renal transplantation among Aboriginal people after initiation of dialysis in Alberta, Saskatchewan and Manitoba.
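Crude death rates per 1000 patient-years, like those reported above, compare via a rate ratio whose confidence interval is computed on the log scale. A minimal sketch; the helper is a generic textbook formula, and the person-time figures are illustrative assumptions chosen only so the two crude rates echo the reported 158 and 146 per 1000 patient-years (the study's actual event counts and CIs differ):

```python
import math


def rate_ratio_ci(events1, ptime1, events0, ptime0, z=1.96):
    """Crude rate ratio with an approximate 95% CI.
    events = event count, ptime = person-time at risk;
    SE of log(rate ratio) ~ sqrt(1/events1 + 1/events0)."""
    rr = (events1 / ptime1) / (events0 / ptime0)
    se = math.sqrt(1 / events1 + 1 / events0)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)


# Illustrative figures: 158 vs. 146 deaths per 1000 patient-years.
rr, lo, hi = rate_ratio_ci(events1=158, ptime1=1000, events0=146, ptime0=1000)
```

With these illustrative counts the interval contains 1, consistent with the abstract's finding that the crude rates did not differ convincingly before adjustment.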

5.
6.
7.

Background

Leg ulcers usually occur in older patients, a growing population for which increasing health care resources are required. Treatment is mainly provided in patients' homes; however, patients often receive poorly integrated services in multiple settings. We report the results of a prospective study of a community-based care strategy for leg ulcers.

Methods

International practice recommendations and guidelines were adapted to make a new clinical protocol. The new model, for a dedicated service staffed by specially trained registered nurses, established initial and ongoing assessment time frames and provided enhanced linkages to medical specialists. Data were collected for 1 year before and after implementation; outcome measures included 3-month healing rates, quality of life and resource usage.

Results

Three-month healing rates more than doubled between the year before implementation (23% [18/78]) and the year afterward (56% [100/180]). The number of nursing visits per case declined, from a median of 37 to 25 (p = 0.041); the median supply cost per case was reduced from $1923 to $406 (p = 0.005).

Interpretation

Reorganization of care for people with leg ulcers was associated with improved healing and a more efficient use of nursing visits.

Although not always recognized as a pressing health care problem, leg ulcers are a common, complex and costly condition. International studies have shown that their occurrence increases with age.1,2,3,4,5 Chronic ulcers are an ongoing burden to patients and to the health care system.6,7,8,9,10 Patients tend to receive poorly integrated services in multiple settings.

The Ottawa Community Care Access Centre, an eastern Ontario home-care authority, observed a pattern of yearly increases in the resources required to care for people with chronic wounds. Researchers from Queen's University, Ottawa University and 3 nursing agencies undertook a comprehensive regional needs assessment to understand the population and the care environment. They documented an estimated prevalence of 1.8 cases of leg ulcer per 1000 population, comparable to rates reported from other countries.11

Profile information revealed a population complex in terms of health problems and care challenges. Most patients were over 65 years of age; nearly three-quarters had 3 or more other conditions. Over two-thirds had experienced leg ulcers for many months. Half of the affected population had a leg-ulcer history spanning 5–10 years; a third, exceeding 10 years. Our 4-week costing study12 estimated that 192 people receiving care would annually consume $1 million in nursing care services and $260 000 in wound-care supplies.

Home care nurses and family physicians had varying levels of confidence in managing patients with leg ulcers.13,14 In general, they were unaware of the relative effectiveness of compression therapy for venous leg ulcers, i.e., that 1 of 6 patients so treated would heal (95% confidence interval [CI] 4–18).15,16 Practice audits indicated that assessments were not standardized: ultrasound readings of the ankle brachial pressure index (measured with a hand-held Doppler) to rule out arterial disease were not routine; serial measurements of ulcers were carried out inconsistently; and compression bandaging, the standard of care for venous ulcers, was underutilized and yet also occasionally applied inappropriately to ulcers with arterial involvement.17

The researchers and organizations involved collaborated and fed information back and forth, generating regional data which, when combined with available external evidence, provided information appropriate to the local community. Regional decision-makers agreed to a redesign of the delivery of care to these patients, based on that information. This involved changing to a nurse-led service providing clinical care in accordance with a set of evidence-based guidelines.18,19,20 The objective of our study was to determine and compare the health outcomes and efficiencies of the former and new services.
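The headline comparison of 3-month healing rates (18/78 before vs. 100/180 after reorganization) can be checked with a standard two-proportion z-test. A minimal sketch using the pooled-standard-error form; this is an illustration of the arithmetic, not the analysis the authors performed:

```python
import math


def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error.
    x = successes (healed ulcers), n = group size."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se


# Healing at 3 months: 18/78 (23%) before vs. 100/180 (56%) after.
z = two_proportion_z(18, 78, 100, 180)
```

Any z well above 1.96 indicates a difference unlikely to arise by chance at the 5% level, consistent with the more-than-doubled healing rate reported.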

8.

Background

Ethnic disparities in access to health care and health outcomes are well documented. It is unclear whether similar differences exist between Aboriginal and non-Aboriginal people with chronic kidney disease in Canada. We determined whether access to care differed between status Aboriginal people (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease.

Methods

We identified 106 511 non-Aboriginal and 1182 Aboriginal patients with chronic kidney disease (estimated glomerular filtration rate less than 60 mL/min/1.73 m2). We compared outcomes, including hospital admissions, that may have been preventable with appropriate outpatient care (ambulatory-care–sensitive conditions) as well as use of specialist services, including visits to nephrologists and general internists.

Results

Aboriginal people were almost twice as likely as non-Aboriginal people to be admitted to hospital for an ambulatory-care–sensitive condition (rate ratio 1.77, 95% confidence interval [CI] 1.46–2.13). Aboriginal people with severe chronic kidney disease (estimated glomerular filtration rate < 30 mL/min/1.73 m2) were 43% less likely than non-Aboriginal people with severe chronic kidney disease to visit a nephrologist (hazard ratio 0.57, 95% CI 0.39–0.83). There was no difference in the likelihood of visiting a general internist (hazard ratio 1.00, 95% CI 0.83–1.21).

Interpretation

Increased rates of hospital admissions for ambulatory-care–sensitive conditions and a reduced likelihood of nephrology visits suggest potential inequities in care among status Aboriginal people with chronic kidney disease. The extent to which this may contribute to the higher rate of kidney failure in this population requires further exploration.

Ethnic disparities in access to health care are well documented;1,2 however, the majority of studies include black and Hispanic populations in the United States. The poorer health status and increased mortality among Aboriginal populations compared with non-Aboriginal populations,3,4 particularly among those with chronic medical conditions,5,6 raise the question of whether there is differential access to health care and management of chronic medical conditions in this population.

The prevalence of end-stage renal disease, which commonly results from chronic kidney disease, is about twice as high among Aboriginal people as among non-Aboriginal people.7,8 Given that the progression of chronic kidney disease can be delayed by appropriate therapeutic interventions9,10 and that delayed referral to specialist care is associated with increased mortality,11,12 issues such as access to health care may be particularly important in the Aboriginal population. Although previous studies have suggested that there is decreased access to primary and specialist care in the Aboriginal population,13–15 these studies are limited by the inclusion of patients from a single geographically isolated region,13 the use of survey data,14 and the inability to differentiate between different types of specialists and reasons for the visit.15

In addition to physician visits, admission to hospital for ambulatory-care–sensitive conditions (conditions that, if managed effectively in an outpatient setting, do not typically result in admission to hospital) has been used as a measure of access to appropriate outpatient care.16,17 Thus, admission to hospital for an ambulatory-care–sensitive condition reflects a potentially preventable complication resulting from inadequate access to care. Our objective was to determine whether access to health care differs between status Aboriginal people (Aboriginal people registered under the federal Indian Act) and non-Aboriginal people with chronic kidney disease. We assessed differences in care by 2 measures: admission to hospital for an ambulatory-care–sensitive condition related to chronic kidney disease, and receipt of nephrology care for severe chronic kidney disease as recommended by clinical practice guidelines.18

9.
10.
Clostridium difficile is a major cause of antibiotic-associated diarrheal disease in many parts of the world. In recent years, distinct genetic variants of C. difficile that cause severe disease and persist within health care settings have emerged. Highly resistant and infectious C. difficile spores are proposed to be the main vectors of environmental persistence and host transmission, so methods to accurately monitor spores and their inactivation are urgently needed. Here we describe simple quantitative methods, based on purified C. difficile spores and a murine transmission model, for evaluating health care disinfection regimens. We demonstrate that disinfectants that contain strong oxidizing active ingredients, such as hydrogen peroxide, are very effective in inactivating pure spores and blocking spore-mediated transmission. Complete inactivation of 10^6 pure C. difficile spores on indicator strips (a six-log reduction, a standard measure of stringent disinfection regimens) requires at least 5 min of exposure to hydrogen peroxide vapor (HPV; 400 ppm). In contrast, a 1-min treatment with HPV was sufficient to disinfect an environment that was heavily contaminated with C. difficile spores (17 to 29 spores/cm^2) and to block host transmission. Thus, pure C. difficile spores facilitate practical methods for evaluating the efficacy of C. difficile spore disinfection regimens and bringing scientific acumen to C. difficile infection control.

Clostridium difficile is a Gram-positive, spore-forming, anaerobic bacterium that is a major cause of health care-acquired infections and antibiotic-associated diarrhea (2). In recent years, several genetic variants of C. difficile have emerged as important health care pathogens (6). Perhaps most notable is the “hypervirulent” variant, commonly referred to as PCR ribotype 027/restriction endonuclease analysis (REA) group BI, that produces elevated levels of toxins TcdA and TcdB (17, 19). Other virulent ribotypes that display extensive heterogeneity among their toxin protein sequences (26) and gene activities (8) have emerged. Using whole-genome sequencing, we demonstrated that there are broad genetic differences between the entire genomes of several common variants, including the ribotype/REA group variants 012/R, 017/CF and 027/BI used in this study (12, 27, 31). In contrast, phylogeographic analysis of 027/BI isolates from Europe and the United States demonstrates that this clade is extremely clonal and implies recent transcontinental spread of hypervirulent C. difficile (12).

C. difficile is distinct from many other health care pathogens because it produces highly infectious spores that are shed into the environment (25, 28). C. difficile spores can resist disinfection regimens that normally inactivate other health care pathogens, such as methicillin-resistant Staphylococcus aureus and vancomycin-resistant enterococci, therefore challenging current infection control measures (2). A multifaceted approach is normally used to control C. difficile in health care facilities (32). Interventions include antimicrobial stewardship, increased clinical awareness, patient isolation (11), and enhanced environmental disinfection regimens based on hydrogen peroxide (H2O2) vapor (HPV) (4). While attempts to break the spore-mediated infection cycle and interrupt these efficient routes of transmission are important for infection control measures, there is little quantitative evidence indicating which interventions are most effective (7). Here we describe the exploitation of pure C. difficile spores (16) and a murine transmission model (15) in simple, practical methods to quantitatively monitor the impact of health care disinfection regimens on C. difficile viability. These methods can be used to optimize disinfection regimens targeted at C. difficile.
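The "six-log reduction" benchmark used above is simply the base-10 logarithm of the ratio of viable counts before and after treatment. A minimal sketch of the arithmetic:

```python
import math


def log_reduction(count_before: float, count_after: float) -> float:
    """Log10 reduction in viable organisms: killing 10**6 spores down to a
    single survivor is a 'six-log' reduction."""
    return math.log10(count_before / count_after)


# 10**6 spores on an indicator strip reduced to 1 survivor:
print(log_reduction(10**6, 1))
```

Each additional log of reduction corresponds to killing 90% of the organisms that survived the previous log.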

13.
Soil substrate membrane systems allow for microcultivation of fastidious soil bacteria as mixed microbial communities. We isolated established microcolonies from these membranes by using fluorescence viability staining and micromanipulation. This approach facilitated the recovery of diverse, novel isolates, including the recalcitrant bacterium Leifsonia xyli, a plant pathogen that has never been isolated outside the host.

The majority of bacterial species have never been recovered in the laboratory (1, 14, 19, 24). In the last decade, novel cultivation approaches have successfully been used to recover “unculturables” from a diverse range of divisions (23, 25, 29). Most strategies have targeted marine environments (4, 23, 25, 32), but soil offers the potential for the investigation of vast numbers of undescribed species (20, 29). Rapid advances have been made toward culturing soil bacteria by reformulating and diluting traditional media, extending incubation times, and using alternative gelling agents (8, 21, 29).

The soil substrate membrane system (SSMS) is a diffusion chamber approach that uses extracts from the soil of interest as the growth substrate, thereby mimicking the environment under investigation (12). The SSMS enriches for slow-growing oligophiles, a proportion of which are subsequently capable of growing on complex media (23, 25, 27, 30, 32). However, the SSMS yields mixed microbial communities, making it difficult to isolate individual microcolonies for further characterization (10).

Micromanipulation has been widely used for the isolation of specific cell morphotypes for downstream applications in molecular diagnostics or proteomics (5, 15). This simple technology offers the opportunity to select established microcolonies of a specific morphotype from the SSMS when combined with fluorescence visualization (3, 11).
Here, we have combined the SSMS, fluorescence viability staining, and advanced micromanipulation for targeted isolation of viable, microcolony-forming soil bacteria.

15.
Highly active antiretroviral therapy (HAART) can reduce human immunodeficiency virus type 1 (HIV-1) viremia to clinically undetectable levels. Despite this dramatic reduction, some virus is present in the blood. In addition, a long-lived latent reservoir for HIV-1 exists in resting memory CD4+ T cells. This reservoir is believed to be a source of the residual viremia and is the focus of eradication efforts. Here, we use two measures of population structure—analysis of molecular variance and the Slatkin-Maddison test—to demonstrate that the residual viremia is genetically distinct from proviruses in resting CD4+ T cells but that proviruses in resting and activated CD4+ T cells belong to a single population. Residual viremia is genetically distinct from proviruses in activated CD4+ T cells, monocytes, and unfractionated peripheral blood mononuclear cells. The finding that some of the residual viremia in patients on HAART stems from an unidentified cellular source other than CD4+ T cells has implications for eradication efforts.

Successful treatment of human immunodeficiency virus type 1 (HIV-1) infection with highly active antiretroviral therapy (HAART) reduces free virus in the blood to levels undetectable by the most sensitive clinical assays (18, 36). However, HIV-1 persists as a latent provirus in resting, memory CD4+ T lymphocytes (6, 9, 12, 16, 48) and perhaps in other cell types (45, 52). The latent reservoir in resting CD4+ T cells represents a barrier to eradication because of its long half-life (15, 37, 40-42) and because specifically targeting and purging this reservoir is inherently difficult (8, 25, 27).

In addition to the latent reservoir in resting CD4+ T cells, patients on HAART also have a low amount of free virus in the plasma, typically at levels below the limit of detection of current clinical assays (13, 19, 35, 37). Because free virus has a short half-life (20, 47), residual viremia is indicative of active virus production.
The continued presence of free virus in the plasma of patients on HAART indicates ongoing replication (10, 13, 17, 19), release of virus after reactivation of latently infected CD4+ T cells (22, 24, 31, 50), release from other cellular reservoirs (7, 45, 52), or some combination of these mechanisms. Finding the cellular source of residual viremia is important because it will identify the cells that are still capable of producing virus in patients on HAART, cells that must be targeted in any eradication effort.

Detailed analysis of this residual viremia has been hindered by the technical challenges of working with very low concentrations of virus (13, 19, 35). Recently, new insights into the nature of residual viremia have been obtained through intensive patient sampling and enhanced ultrasensitive sequencing methods (1). In a subset of patients, most of the residual viremia consisted of a small number of viral clones (1, 46) produced by a cell type severely underrepresented in the peripheral circulation (1). These unique viral clones, termed predominant plasma clones (PPCs), persist unchanged for extended periods of time (1). The persistence of PPCs indicates that in some patients there may be another major cellular source of residual viremia (1). However, PPCs were observed in a small group of patients who started HAART with very low CD4 counts, and it has been unclear whether the PPC phenomenon extends beyond this group of patients. More importantly, it has been unclear whether the residual viremia generally consists of distinct virus populations produced by different cell types.

Since HIV-1 infection in most patients is initially established by a single viral clone (23, 51), with subsequent diversification (29), the presence of genetically distinct populations of virus in a single individual can reflect entry of viruses into compartments where replication occurs with limited subsequent intercompartmental mixing (32).
Sophisticated genetic tests can detect such population structure in a sample of viral sequences (4, 39, 49). Using two complementary tests of population structure (14, 43), we analyzed viral sequences from multiple sources within individual patients to determine whether a source other than circulating resting CD4+ T cells contributes to residual viremia and viral persistence. Our results have important clinical implications for understanding HIV-1 persistence and treatment failure and for improving eradication strategies, which currently focus only on the latent CD4+ T-cell reservoir.
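The population-structure question above (do plasma virus and cellular provirus form one population or two?) can be illustrated with a toy permutation test. This is not the analysis of molecular variance or the Slatkin-Maddison test used in the study, merely a distance-based sketch of the same idea on hypothetical sequences: if between-compartment genetic distances exceed within-compartment distances more often than expected under random relabeling, the compartments are genetically distinct.

```python
import itertools
import random

def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

def structure_stat(seqs, labels):
    """Mean between-compartment distance minus mean within-compartment distance."""
    between, within = [], []
    for i, j in itertools.combinations(range(len(seqs)), 2):
        d = hamming(seqs[i], seqs[j])
        (between if labels[i] != labels[j] else within).append(d)
    return sum(between) / len(between) - sum(within) / len(within)

def permutation_p(seqs, labels, n_perm=2000, seed=1):
    """One-sided p-value: how often does a random relabeling separate the
    compartments at least as well as the observed labeling?"""
    rng = random.Random(seed)
    observed = structure_stat(seqs, labels)
    hits = sum(
        structure_stat(seqs, rng.sample(labels, len(labels))) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

# Hypothetical aligned sequences: plasma virus vs. resting-cell provirus.
plasma = ["AAAAAAAA", "AAAAAAAT", "AAAAAATA", "AAAAATAA", "AAAATAAA"]
provirus = ["GGGGGGGG", "GGGGGGGA", "GGGGGGAG", "GGGGGAGG", "GGGGAGGG"]
seqs = plasma + provirus
labels = ["plasma"] * 5 + ["provirus"] * 5
print(permutation_p(seqs, labels))  # small p => two distinct populations
```

A real analysis would work on aligned env or pol sequences and account for phylogeny, as the cited tests do; the permutation logic, however, is the common core.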

16.
Nitrate-reducing enrichments, amended with n-hexadecane, were established with petroleum-contaminated sediment from Onondaga Lake. Cultures were serially diluted to yield a sediment-free consortium. Clone libraries and denaturing gradient gel electrophoresis analysis of 16S rRNA gene community PCR products indicated the presence of uncultured alpha- and betaproteobacteria similar to those detected in contaminated, denitrifying environments. Cultures were incubated with H₃₄-hexadecane, fully deuterated hexadecane (d₃₄-hexadecane), or H₃₄-hexadecane and NaH¹³CO₃. Gas chromatography-mass spectrometry analysis of silylated metabolites resulted in the identification of [H₂₉]pentadecanoic acid, [H₂₅]tridecanoic acid, [1-¹³C]pentadecanoic acid, [3-¹³C]heptadecanoic acid, [3-¹³C]10-methylheptadecanoic acid, and d₂₇-pentadecanoic, d₂₅-, and d₂₄-tridecanoic acids. The identification of these metabolites suggests a carbon addition at the C-3 position of hexadecane, with subsequent β-oxidation and transformation reactions (chain elongation and C-10 methylation) that predominantly produce fatty acids with odd numbers of carbons. Mineralization of [1-¹⁴C]hexadecane was demonstrated based on the recovery of ¹⁴CO₂ in active cultures.

Linear alkanes account for a large component of crude and refined petroleum products and are therefore of environmental significance with respect to their fate and transport (38). The aerobic activation of alkanes is well documented and involves monooxygenase and dioxygenase enzymes, in which oxygen is not only required as an electron acceptor but also serves as a reactant in hydroxylation (2, 16, 17, 32, 34). Alkanes are also degraded under anoxic conditions via novel degradation strategies (34). To date, there are two known pathways of anaerobic n-alkane degradation: (i) alkane addition to fumarate, commonly referred to as fumarate addition, and (ii) a putative pathway, proposed by So et al. (25), involving carboxylation of the alkane.
Fumarate addition proceeds via terminal or subterminal addition (at the C-2 position) of the alkane to the double bond of fumarate, resulting in the formation of an alkylsuccinate. The alkylsuccinate is further degraded via carbon skeleton rearrangement and β-oxidation (4, 6, 8, 12, 13, 21, 37). Alkane addition to fumarate has been documented for a denitrifying isolate (21, 37), sulfate-reducing consortia (4, 8, 12, 13), and five sulfate-reducing isolates (4, 6-8, 12), and has also been demonstrated in a sulfate-reducing enrichment growing on the alicyclic alkane 2-ethylcyclopentane (23). In contrast to fumarate addition, which has been shown for both sulfate reducers and denitrifiers, the putative carboxylation of n-alkanes has been proposed only for the sulfate-reducing isolate strain Hxd3 (25) and for a sulfate-reducing consortium (4). Experiments using NaH¹³CO₃ demonstrated that bicarbonate serves as the source of inorganic carbon for the putative carboxylation reaction (25). Subterminal carboxylation of the alkane at the C-3 position is followed by elimination of the two terminal carbons, yielding a fatty acid that is one carbon shorter than the parent alkane (4, 25). The fatty acids are subject to β-oxidation, chain elongation, and/or C-10 methylation (25).

In this study, we characterized an alkane-degrading, nitrate-reducing consortium and surveyed the metabolites of the consortium incubated with either unlabeled or labeled hexadecane in order to elucidate the pathway of n-alkane degradation. We present evidence of a pathway analogous to the proposed carboxylation pathway under nitrate-reducing conditions.
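The chain-length bookkeeping implied by the proposed pathway can be checked directly: carboxylation of hexadecane (C16) at C-3 plus loss of the two terminal carbons yields a C15 acid, and β-oxidation or chain elongation then shifts chain length by two carbons at a time, which is why the observed fatty acids (C13, C15, C17) all have odd carbon numbers. A sketch of that arithmetic (the step names follow the text; none of this is code from the study):

```python
# Chain-length bookkeeping for the putative carboxylation pathway.
ALKANE = 16                # hexadecane, C16

carboxylated = ALKANE + 1  # bicarbonate-derived carbon added at C-3 -> C17
acid = carboxylated - 2    # loss of the two terminal carbons -> C15
one_beta_ox = acid - 2     # one round of beta-oxidation -> C13
elongated = acid + 2       # chain elongation by one C2 unit -> C17

# Matches the metabolites identified by GC-MS:
print(acid)         # 15 (pentadecanoic acid)
print(one_beta_ox)  # 13 (tridecanoic acid)
print(elongated)    # 17 (heptadecanoic / 10-methylheptadecanoic acids)
```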

17.

Background

Although the Canadian health care system was designed to ensure equal access, inequities persist. It is not known if inequities exist for receipt of investigations used to screen for colorectal cancer (CRC). We examined the association between socioeconomic status and receipt of colorectal investigation in Ontario.

Methods

People aged 50 to 70 years living in Ontario on Jan. 1, 1997, who did not have a history of CRC, inflammatory bowel disease or colorectal investigation within the previous 5 years were followed until death or Dec. 31, 2001. Receipt of any colorectal investigation between 1997 and 2001 inclusive was determined by means of linked administrative databases. Income was imputed as the mean household income of the person's census enumeration area. Multivariate analysis was performed to evaluate the relationship between the receipt of any colorectal investigation and income.

Results

Of the study cohort of 1 664 188 people, 21.2% received a colorectal investigation in 1997–2001. Multivariate analysis demonstrated a significant association between receipt of any colorectal investigation and income (p < 0.001); people in the highest-income quintile had higher odds of receiving any colorectal investigation (adjusted odds ratio [OR] 1.38; 95% confidence interval [CI] 1.36–1.40) and of receiving colonoscopy (adjusted OR 1.50; 95% CI 1.48–1.53).

Interpretation

Socioeconomic status is associated with receipt of colorectal investigations in Ontario. Only one-fifth of people in the screening-eligible age group received any colorectal investigation. Further work is needed to determine the reason for this low rate and to explore whether it affects CRC mortality.

Colorectal cancer (CRC) is the most common cause of cancer-related death among nonsmokers in North America. In 2004 an estimated 19 200 Canadians will receive a diagnosis of CRC and 8400 will die from the disease.1 Although the age-standardized incidence and mortality of CRC have been decreasing, the number of new cases is increasing because of the growing size of the elderly population.

CRC screening reduces the incidence and disease-specific mortality,2,3,4,5,6 is cost-effective7,8 and is endorsed by many professional societies.9,10,11,12,13,14,15 In 1994 the Canadian Task Force on the Periodic Health Examination (now the Canadian Task Force on Preventive Health Care) concluded that there was insufficient evidence to support CRC screening in asymptomatic people over the age of 40 years.16 In the 2001 update of these guidelines,9 fecal occult blood testing (FOBT) every 1 or 2 years or flexible sigmoidoscopy every 5 years was recommended for screening average-risk people 50 years of age or older; there was judged to be insufficient evidence to support colonoscopy as the initial screening test. Despite these endorsements, the use of CRC screening remains suboptimal.17,18,19

The Canadian health care system covers all medically necessary services without user fees. Although equity has been achieved in certain areas,20,21 low socioeconomic status (SES) is associated with a lower rate of use of cardiovascular procedures22,23 and screening tests for breast and cervical cancer.24,25,26 It is unknown whether SES affects the receipt of CRC screening investigations.
This study assessed the association of neighbourhood income (a marker of SES) with the receipt of colorectal investigations in people eligible for screening who lived in Ontario.
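The adjusted odds ratios above come from a multivariate model fitted to the linked administrative data, which cannot be reproduced from the abstract. The mechanics of an odds ratio and its Wald confidence interval can still be sketched on a hypothetical 2×2 table (the counts below are invented for illustration, and `odds_ratio_ci` is not a function from the study):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the summed reciprocal counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Invented counts: investigation received vs. not, highest- vs. lowest-income group.
or_, lo, hi = odds_ratio_ci(300, 700, 220, 780)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An adjusted OR such as the study's 1.38 (95% CI 1.36-1.40) is the same quantity estimated from a regression model that controls for covariates, rather than from a single crude table.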

18.
Like all viruses, herpesviruses extensively interact with the host cytoskeleton during entry. While microtubules and microfilaments appear to facilitate viral capsid transport toward the nucleus, evidence for a role of intermediate filaments in herpesvirus entry is lacking. Here, we examined the function of vimentin intermediate filaments in fibroblasts during the initial phase of infection of two genotypically distinct strains of human cytomegalovirus (CMV), one with narrow (AD169) and one with broad (TB40/E) cell tropism. Chemical disruption of the vimentin network with acrylamide, intermediate filament bundling in cells from a patient with giant axonal neuropathy, and absence of vimentin in fibroblasts from vimentin−/− mice severely reduced entry of either strain. In vimentin-null cells, viral particles remained in the cytoplasm longer than in vimentin+/+ cells. TB40/E infection was consistently slower than that of AD169 and was more negatively affected by the disruption or absence of vimentin. These findings demonstrate that an intact vimentin network is required for CMV infection onset, that intermediate filaments may function during viral entry to facilitate capsid trafficking and/or docking to the nuclear envelope, and that maintenance of a broader cell tropism is associated with a higher degree of dependence on the vimentin cytoskeleton.

Human cytomegalovirus (CMV) is a ubiquitous herpesvirus that can cause serious disease in immunocompromised individuals (8, 58). Virtually all cell types, with the exception of lymphocytes and polymorphonuclear leukocytes, can support CMV replication in vivo (80), and this remarkably broad tropism is at the basis of the numerous clinical manifestations of CMV infection (8, 58). The range of permissive cells in vitro is more limited, with human fibroblasts (HF) and endothelial cells being the most widely used for propagation of clinical isolates.
Two extensively studied strains, AD169 and Towne, were generated by serial passage of tissue isolates in HF for the purpose of vaccine development (22, 68). During this process, both strains accumulated numerous genomic changes (11) and lost the ability to grow in cell types other than HF. By contrast, propagation in endothelial cells produced strains with more intact genomes and tropism, such as TB40/E, VR1814, TR, and PH (59, 80).

The viral determinants of endothelial and epithelial cell tropism have recently been mapped to the UL128-UL131A (UL128-131A) genomic locus (32, 92, 93). Each of the products of the UL128, UL130, and UL131A genes is independently required for tropism and participates in the formation of a complex at the surface of the virion with the viral glycoproteins gH and gL (74, 93), which can also independently associate with gO (45). The gH/gL/UL128-131A complex appears to be required for entry into endothelial cells by endocytosis, followed by low-pH-dependent fusion of the virus envelope with endosomal membranes (73, 74), although some virus strains expressing the UL128-UL131A genes do not require endosome acidification for capsid release (66, 79).

HF-adapted strains consistently contain mutations in the UL128-131A genes (32). Loss of endothelial cell tropism in AD169 has been associated with a frameshift mutation in the UL131A gene, leading to the production of a truncated protein and to the loss of the gH/gL/UL128-131A complex, but not the gH/gL/gO complex, from the surface of AD169 virions (1, 3, 92). Reestablishment of wild-type UL131A expression in AD169 by repair of the UL131A gene mutation or by cis-complementation yielded viruses with restored tropism for endothelial cells but with reduced replication capacities in HF (1, 92).
Interestingly, the efficiencies of entry of wild-type and repaired or complemented AD169 viruses were comparable, suggesting that the presence of UL131A did not interfere with the initial steps of infection in HF but negatively affected virion release (1, 92).

The cellular determinants of CMV tropism are numerous and have not been fully identified. Virus entry begins with virion attachment to the ubiquitously expressed heparan sulfate proteoglycans at the cell surface (17), followed by engagement of one or more receptors, including the integrin heterodimers α2β1, α6β1, and αVβ3 (23, 39, 94); the platelet-derived growth factor-α receptor (84); and the epidermal growth factor receptor, whose role in CMV entry is still debated (38, 95).

Subsequent delivery of capsids into the cytoplasm requires fusion of the virus envelope with cellular membranes. Release of AD169 capsids in HF occurs mainly by fusion at the plasma membrane at neutral pH, although incoming virions have also been found within phagolysosome-like vacuoles (16, 83). Fusion with the plasmalemma appears to be mediated by the gH/gL/gO complex, as AD169 virions do not contain the gH/gL/UL128-131A complex and infectivity of a gO mutant was severely reduced (37). The mechanism used by strain TB40/E to penetrate HF has not been described but was assumed to be similar to that of AD169 (80), even though TB40/E virions contain both gH/gL/gO and gH/gL/UL128-131A complexes.

Transport of released, de-enveloped capsids toward the nucleus is mediated by cellular microtubules, and treatment of Towne-infected HF with microtubule-depolymerizing agents substantially reduced expression levels of the viral nuclear immediate-early protein 1 (IE1) (64).
Depolymerization of actin microfilaments was also observed in HF as early as 10 to 20 min postinfection with the Towne strain, while stress fiber disappearance was evident at 3 to 5 h postinfection (hpi) with AD169 (4, 42, 54), suggesting that microfilament rearrangement may be required to facilitate capsid transit through the actin-rich cell cortex.

The role of intermediate filaments (IF) in CMV infection has not been studied. In vivo, expression of the IF protein vimentin is specific to cells of mesenchymal origin, such as HF and endothelial cells (12). Although the phenotype of vimentin−/− (vim−) mice appears to be mild (15), vimentin-null cells display numerous defects, including fragmentation of the Golgi apparatus (26), development of nuclear invaginations in some instances (76), and reduced formation of lipid droplets, glycolipids, and autophagosomes (29, 52, 87). Vimentin IF interact with integrins α2β1, α6β4, and αVβ3 at the cell surface and participate in recycling of integrin-containing endocytic vesicles (40, 41). They also accompany endocytic vesicles during their perinuclear accumulation (34), regulate endosome acidification by binding to the adaptor complex AP-3 (86), control lysosome distribution in the cytoplasm (87), and promote directional mobility of cellular vesicles (69). The vimentin cytoskeleton is tightly associated with the nuclear lamina (10) and was shown to anchor the nucleus within the cell, to mediate force transfer from the cell periphery to the nucleus, and to bind to repetitive DNA sequences as well as to supercoiled DNA and histones in the nuclear matrix (56, 89, 90). Microtubules and vimentin IF form close connections in HF (30). Drug-induced disassembly of the microtubule network alters IF synthesis and organization, leading to the collapse of vimentin IF into perinuclear aggregates (2, 25, 30, 70).
By contrast, coiling of IF after injection of antivimentin antibodies has no effect on the structure of microtubules (28, 46, 53), indicating that the interaction between vimentin IF and microtubules is functionally unidirectional.

In this work, we sought to assess the role of the vimentin cytoskeleton in CMV entry. We hypothesized that vimentin association with integrins at the cell surface, with endosomes and microtubules in the cytoplasm, and with the lamina and matrix in the nucleus might facilitate viral binding and penetration, capsid transport toward the nucleus, and nuclear deposition of the viral genome.

We found that, akin to microtubules, vimentin IF do not depolymerize during entry of either AD169 or TB40/E. In comparison to AD169, onset of TB40/E infection in HF was delayed, and the proportion of infected cells was reduced. Virus entry was negatively affected by the disruption of vimentin networks after exposure to acrylamide (ACR), by IF bundling in cells from patients with giant axonal neuropathy (GAN), and by the absence of vimentin IF in vim− mouse embryo fibroblasts (MEF). In vim− cells, the efficiency of particle trafficking toward the nucleus appeared significantly lower than in vimentin+/+ (vim+) cells, and in each instance the negative effects were more pronounced in TB40/E-infected cells than in AD169-infected cells. These data show that vimentin is required for efficient entry of CMV into HF and that the endotheliotropic strain TB40/E is more reliant on the presence and integrity of vimentin IF than the HF-adapted strain AD169.

20.

Background

Imported malaria is an increasing problem. The arrival of 224 African refugees presented the opportunity to investigate the diagnosis and management of imported malaria within the Quebec health care system.

Methods

The refugees were visited at home 3–4 months after arrival in Quebec. For 221, a questionnaire was completed and permission obtained for access to health records; a blood sample for malaria testing was obtained from 210.

Results

Most of the 221 refugees (161 [73%]) had had at least 1 episode of malaria while in the refugee camps. Since arrival in Canada, 87 (39%) had had symptoms compatible with malaria for which medical care was sought. Complete or partial records were obtained for 66 of these refugees and for 2 asymptomatic adults whose children were found to have malaria: malaria had been appropriately investigated in 55 (81%); no malaria smear was requested for the other 13. Smears were reported as positive for 20 but confirmed for only 15 of the 55; appropriate therapy was verified for 10 of the 15. Of the 5 patients with a false-positive diagnosis of malaria, at least 3 received unnecessary therapy. Polymerase chain reaction testing of the blood sample obtained at the home visit revealed malaria parasites in 48 of the 210 refugees (23%; 95% confidence interval [CI] 17%–29%). The rate of parasite detection was more than twice as high among the 19 refugees whose smears were reported as negative but not sent for confirmation (47%; 95% CI 25%–71%).

Interpretation

This study has demonstrated errors of both omission and commission in the response to refugees presenting with possible malaria. Smears were not consistently requested for patients whose presenting complaints were not “typical” of malaria, and a large proportion of smears read locally as “negative” were not sent for confirmation. Further effort is required to ensure optimal malaria diagnosis and care in such high-risk populations.

In many industrialized countries, the incidence of imported malaria is rising because of changing immigration patterns and refugee policies as well as increased travel to malaria-endemic regions.1,2,3,4,5,6,7,8,9,10 Imported malaria is not rare in Canada (300–1000 cases per year),3 the United States2,3,4 or other industrialized countries.5,6,7,8,9,10 Malaria can be a serious challenge in these countries because of its potentially rapid and lethal course.11,12,13,14 The task of front-line health care providers is made particularly difficult by the protean clinical presentations of malaria. Classic periodic fevers (tertian or quartan) are seen infrequently.9,15,16,17,18,19 Atypical and subtle presentations are especially common in individuals who have partial immunity (e.g., immigrants and refugees from disease-endemic areas) or are taking malaria prophylaxis (e.g., travellers).9,16,17 Even when malaria is considered, an accurate diagnosis can remain elusive or can be delayed as a result of inadequate or distant specialized laboratory support.19,20

In Quebec, the McGill University Centre for Tropical Diseases collaborates with the Laboratoire de santé publique du Québec to raise awareness of imported malaria, to offer training and quality-assurance testing, and to provide reference diagnostic services. A preliminary diagnosis is typically made by the local laboratory, and smears (with or without staining) are sent to the McGill centre, where they are reviewed within 2–48 hours, depending on the urgency of the request.
Initial medical decisions are usually based on local findings and interpretations. Although malaria is a reportable disease, there is no requirement to use the reference service.

On Aug. 9, 2000, 224 refugees from Tanzanian camps landed in Montréal aboard an airplane chartered by Canadian immigration authorities. Over the ensuing 5 weeks, the McGill University Centre for Tropical Diseases noted an increase in demand for malaria reference services and an apparent small “epidemic” of imported malaria. This “epidemic” prompted us to investigate the performance of the health care system in the diagnosis and management of imported malaria.
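The reported parasite-detection rate of 23% (95% CI 17%–29%) for 48 of 210 refugees can be reproduced with a normal-approximation (Wald) interval for a binomial proportion; the paper does not state which interval method it used, but the approximation happens to match the published figures. A sketch:

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Proportion with a Wald (normal-approximation) confidence interval,
    clipped to the [0, 1] range."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# 48 of 210 refugees had malaria parasites detected by PCR at the home visit.
p, lo, hi = wald_ci(48, 210)
print(f"{p:.0%} (95% CI {lo:.0%}-{hi:.0%})")  # 23% (95% CI 17%-29%)
```

For the small subgroup of 19 refugees (47%; 95% CI 25%–71%), a small-sample method such as the exact Clopper-Pearson interval would be more appropriate than the Wald approximation.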
