Similar Articles
20 similar articles were retrieved (search time: 281 ms).
1.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.

Results

An increase in the interquartile range of the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).
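The odds ratios above are reported per interquartile-range (IQR) increase in pollutant concentration. As a rough illustration of how such an estimate is obtained, the sketch below converts a regression coefficient expressed per unit of concentration into an OR per IQR increase with its 95% confidence interval; the coefficient, standard error and IQR are hypothetical values chosen only to reproduce an OR of about 1.14 (95% CI 1.03–1.25), not numbers from the study.

    import math

    def or_per_iqr(beta: float, se: float, iqr: float) -> tuple:
        """Convert a log-odds coefficient (per unit concentration) into an
        odds ratio per IQR increase, with an approximate 95% confidence interval."""
        point = math.exp(beta * iqr)
        lower = math.exp((beta - 1.96 * se) * iqr)
        upper = math.exp((beta + 1.96 * se) * iqr)
        return point, lower, upper

    # Hypothetical inputs: beta and SE per 1 ppb of ozone, IQR of 16 ppb
    print(or_per_iqr(beta=0.008, se=0.003, iqr=16.0))  # roughly (1.14, 1.03, 1.25)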

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis.Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis implicated an obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and in the early part of the 20th century.1 Without explanation, it decreased in the middle and latter part of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 199410 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution. Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.1113 Inflammation induced by exposure to air pollution contributes to some adverse health effects.1417 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.1820We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.  相似文献   

2.

Background:

Acute kidney injury is a serious complication of elective major surgery. Acute dialysis is used to support life in the most severe cases. We examined whether rates and outcomes of acute dialysis after elective major surgery have changed over time.

Methods:

We used data from Ontario’s universal health care databases to study all consecutive patients who had elective major surgery at 118 hospitals between 1995 and 2009. Our primary outcomes were acute dialysis within 14 days of surgery, death within 90 days of surgery and chronic dialysis for patients who did not recover kidney function.

Results:

A total of 552 672 patients underwent elective major surgery during the study period, 2231 of whom received acute dialysis. The incidence of acute dialysis increased steadily from 0.2% in 1995 (95% confidence interval [CI] 0.15–0.2) to 0.6% in 2009 (95% CI 0.6–0.7). This increase was primarily in cardiac and vascular surgeries. Among patients who received acute dialysis, 937 died within 90 days of surgery (42.0%, 95% CI 40.0–44.1), with no change in 90-day survival over time. Among the 1294 patients who received acute dialysis and survived beyond 90 days, 352 required chronic dialysis (27.2%, 95% CI 24.8–29.7), with no change over time.
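The yearly incidence figures above are proportions with 95% confidence intervals. The sketch below shows one common way such an interval is computed, using a normal approximation; the overall counts (2231 events among 552 672 patients) are taken from the abstract, but the method is illustrative and may differ from the exact procedure the authors used.

    import math

    def proportion_with_ci(events: int, n: int, z: float = 1.96) -> tuple:
        """Incidence proportion and approximate 95% confidence interval."""
        p = events / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return p, max(p - half_width, 0.0), p + half_width

    # Overall incidence of acute dialysis across the study period
    print(proportion_with_ci(2231, 552_672))  # about 0.40%, with a narrow CI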

Interpretation:

The use of acute dialysis after cardiac and vascular surgery has increased substantially since 1995. Studies focusing on interventions to better prevent and treat perioperative acute kidney injury are needed.More than 230 million elective major surgeries are done annually worldwide.1 Acute kidney injury is a serious complication of major surgery. It represents a sudden loss of kidney function that affects morbidity, mortality and health care costs.2 Dialysis is used for the most severe forms of acute kidney injury. In the nonsurgical setting, the incidence of acute dialysis has steadily increased over the last 15 years, and patients are now more likely to survive to discharge from hospital.35 Similarly, in the surgical setting, the incidence of acute dialysis appears to be increasing over time,610 with declining inhospital mortality.8,10,11Although previous studies have improved our understanding of the epidemiology of acute dialysis in the surgical setting, several questions remain. Many previous studies were conducted at a single centre, thereby limiting their generalizability.6,1214 Most multicentre studies were conducted in the nonsurgical setting and used diagnostic codes for acute kidney injury not requiring dialysis; however, these codes can be inaccurate.15,16 In contrast, a procedure such as dialysis is easily determined. The incidence of acute dialysis after elective surgery is of particular interest given the need for surgical consent, the severe nature of the event and the potential for mitigation. The need for chronic dialysis among patients who do not recover renal function after surgery has been poorly studied, yet this condition has a major affect on patient survival and quality of life.17 For these reasons, we studied secular trends in acute dialysis after elective major surgery, focusing on incidence, 90-day mortality and need for chronic dialysis.  相似文献   

3.
Robin Skinner, Steven McFaull. CMAJ. 2012;184(9):1029–1034

Background:

Suicide is the second leading cause of death for young Canadians (10–19 years of age) — a disturbing trend that has shown little improvement in recent years. Our objective was to examine suicide trends among Canadian children and adolescents.

Methods:

We conducted a retrospective analysis of standardized suicide rates using Statistics Canada mortality data for the period spanning from 1980 to 2008. We analyzed the data by sex and by suicide method over time for two age groups: 10–14 year olds (children) and 15–19 year olds (adolescents). We quantified annual trends by calculating the average annual percent change (AAPC).
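The average annual percent change (AAPC) is conventionally obtained from a log-linear model of rates on calendar year, with the slope converted to a percent change as (e^slope − 1) × 100. The sketch below illustrates that conversion on a made-up rate series falling from 6.2 to 5.2 per 100 000 over 1980–2008 (the endpoint rates quoted later in this entry); it is not the authors' code, and the exact software they used is not stated in the abstract.

    import math

    def average_annual_percent_change(years, rates):
        """Fit log(rate) = a + b*year by least squares and return
        the AAPC, (exp(b) - 1) * 100, in percent per year."""
        n = len(years)
        log_rates = [math.log(r) for r in rates]
        mean_x = sum(years) / n
        mean_y = sum(log_rates) / n
        sxx = sum((x - mean_x) ** 2 for x in years)
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, log_rates))
        slope = sxy / sxx
        return (math.exp(slope) - 1) * 100

    # Illustrative series: a smooth decline from 6.2 to 5.2 per 100 000 over 1980-2008
    years = list(range(1980, 2009))
    rates = [6.2 * (5.2 / 6.2) ** ((y - 1980) / 28) for y in years]
    print(round(average_annual_percent_change(years, rates), 2))  # about -0.63% per year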

Results:

We found an average annual decrease of 1.0% (95% confidence interval [CI] −1.5 to −0.4) in the suicide rate for children and adolescents, but stratification by age and sex showed significant variation. We saw an increase in suicide by suffocation among female children (AAPC = 8.1%, 95% CI 6.0 to 10.4) and adolescents (AAPC = 8.0%, 95% CI 6.2 to 9.8). In addition, we noted a decrease in suicides involving poisoning and firearms during the study period.

Interpretation:

Our results show that suicide rates in Canada are increasing among female children and adolescents and decreasing among male children and adolescents. Limiting access to lethal means has some potential to mitigate risk. However, suffocation, which has become the predominant method for committing suicide for these age groups, is not amenable to this type of primary prevention.Suicide was ranked as the second leading cause of death among Canadians aged 10–34 years in 2008.1 It is recognized that suicidal behaviour and ideation is an important public health issue among children and adolescents; disturbingly, suicide is a leading cause of Canadian childhood mortality (i.e., among youths aged 10–19 years).2,3Between 1980 and 2008, there were substantial improvements in mortality attributable to unintentional injury among 10–19 year olds, with rates decreasing from 37.7 per 100 000 to 10.7 per 100 000; suicide rates, however, showed less improvement, with only a small reduction during the same period (from 6.2 per 100 000 in 1980 to 5.2 per 100 000 in 2008).1Previous studies that looked at suicides among Canadian adolescents and young adults (i.e., people aged 15–25 years) have reported rates as being generally stable over time, but with a marked increase in suicides by suffocation and a decrease in those involving firearms.2 There is limited literature on self-inflicted injuries among children 10–14 years of age in Canada and the United States, but there appears to be a trend toward younger children starting to self-harm.3,4 Furthermore, the trend of suicide by suffocation moving to younger ages may be partly due to cases of the “choking game” (self-strangulation without intent to cause permanent harm) that have been misclassified as suicides.57Risk factors for suicidal behaviour and ideation in young people include a psychiatric diagnosis (e.g., depression), substance abuse, past suicidal behaviour, family factors and other life stressors (e.g., relationships, bullying) that have complex interactions.8 A suicide attempt involves specific intent, plans and availability of lethal means, such as firearms,9 elevated structures10 or substances.11 The existence of “pro-suicide” sites on the Internet and in social media12 may further increase risk by providing details of various ways to commit suicide, as well as evaluations ranking these methods by effectiveness, amount of pain involved and length of time to produce death.1315Our primary objective was to present the patterns of suicide among children and adolescents (aged 10–19 years) in Canada.  相似文献   

4.

Background:

Falls cause more than 60% of head injuries in older adults. Lack of objective evidence on the circumstances of these events is a barrier to prevention. We analyzed video footage to determine the frequency of and risk factors for head impact during falls in older adults in 2 long-term care facilities.

Methods:

Over 39 months, we captured on video 227 falls involving 133 residents. We used a validated questionnaire to analyze the mechanisms of each fall. We then examined whether the probability of head impact was associated with upper-limb protective responses (hand impact) and fall direction.

Results:

Head impact occurred in 37% of falls, usually onto a vinyl or linoleum floor. Hand impact occurred in 74% of falls but had no significant effect on the probability of head impact (p = 0.3). An increased probability of head impact was associated with a forward initial fall direction, compared with backward falls (odds ratio [OR] 2.7, 95% confidence interval [CI] 1.3–5.9) or sideways falls (OR 2.8, 95% CI 1.2–6.3). In 36% of sideways falls, residents rotated to land backwards, which reduced the probability of head impact (OR 0.2, 95% CI 0.04–0.8).

Interpretation:

Head impact was common in observed falls in older adults living in long-term care facilities, particularly in forward falls. Backward rotation during descent appeared to be protective, but hand impact was not. Attention to upper-limb strength and teaching rotational falling techniques (as in martial arts training) may reduce fall-related head injuries in older adults.Falls from standing height or lower are the cause of more than 60% of hospital admissions for traumatic brain injury in adults older than 65 years.15 Traumatic brain injury accounts for 32% of hospital admissions and more than 50% of deaths from falls in older adults.1,68 Furthermore, the incidence and age-adjusted rate of fall-related traumatic brain injury is increasing,1,9 especially among people older than 80 years, among whom rates have increased threefold over the past 30 years.10 One-quarter of fall-related traumatic brain injuries in older adults occur in long-term care facilities.1The development of improved strategies to prevent fall-related traumatic brain injuries is an important but challenging task. About 60% of residents in long-term care facilities fall at least once per year,11 and falls result from complex interactions of physiologic, environmental and situational factors.1216 Any fall from standing height has sufficient energy to cause brain injury if direct impact occurs between the head and a rigid floor surface.1719 Improved understanding is needed of the factors that separate falls that result in head impact and injury from those that do not.1,10 Falls in young adults rarely result in head impact, owing to protective responses such as use of the upper limbs to stop the fall, trunk flexion and rotation during descent.2023 We have limited evidence of the efficacy of protective responses to falls among older adults.In the current study, we analyzed video footage of real-life falls among older adults to estimate the prevalence of head impact from falls, and to examine the association between head impact, and biomechanical and situational factors.  相似文献   

5.

Background:

Brief interventions delivered by family physicians to address excessive alcohol use among adult patients are effective. We conducted a study to determine whether such an intervention would be similarly effective in reducing binge drinking and excessive cannabis use among young people.

Methods:

We conducted a cluster randomized controlled trial involving 33 family physicians in Switzerland. Physicians in the intervention group received training in delivering a brief intervention to young people during the consultation in addition to usual care. Physicians in the control group delivered usual care only. Consecutive patients aged 15–24 years were recruited from each practice and, before the consultation, completed a confidential questionnaire about their general health and substance use. Patients were followed up at 3, 6 and 12 months after the consultation. The primary outcome measure was self-reported excessive substance use (≥ 1 episode of binge drinking, or ≥ 1 joint of cannabis per week, or both) in the past 30 days.

Results:

Of the 33 participating physicians, 17 were randomly allocated to the intervention group and 16 to the control group. Of the 594 participating patients, 279 (47.0%) identified themselves as binge drinkers or excessive cannabis users, or both, at baseline. Excessive substance use did not differ significantly between patients whose physicians were in the intervention group and those whose physicians were in the control group at any of the follow-up points (odds ratio [OR] and 95% confidence interval [CI] at 3 months: 0.9 [0.6–1.4]; at 6 mo: 1.0 [0.6–1.6]; and at 12 mo: 1.1 [0.7–1.8]). The differences between groups were also nonsignificant after we restricted the analysis to patients who reported excessive substance use at baseline (OR 1.6, 95% CI 0.9–2.8, at 3 mo; OR 1.7, 95% CI 0.9–3.2, at 6 mo; and OR 1.9, 95% CI 0.9–4.0, at 12 mo).

Interpretation:

Training family physicians to use a brief intervention to address excessive substance use among young people was not effective in reducing binge drinking and excessive cannabis use in this patient population. Trial registration: Australian New Zealand Clinical Trials Registry, no. ACTRN12608000432314.Most health-compromising behaviours begin in adolescence.1 Interventions to address these behaviours early are likely to bring long-lasting benefits.2 Harmful use of alcohol is a leading factor associated with premature death and disability worldwide, with a disproportionally high impact on young people (aged 10–24 yr).3,4 Similarly, early cannabis use can have adverse consequences that extend into adulthood.58In adolescence and early adulthood, binge drinking on at least a monthly basis is associated with an increased risk of adverse outcomes later in life.912 Although any cannabis use is potentially harmful, weekly use represents a threshold in adolescence related to an increased risk of cannabis (and tobacco) dependence in adulthood.13 Binge drinking affects 30%–50% and excessive cannabis use about 10% of the adolescent and young adult population in Europe and the United States.10,14,15Reducing substance-related harm involves multisectoral approaches, including promotion of healthy child and adolescent development, regulatory policies and early treatment interventions.16 Family physicians can add to the public health messages by personalizing their content within brief interventions.17,18 There is evidence that brief interventions can encourage young people to reduce substance use, yet most studies have been conducted in community settings (mainly educational), emergency services or specialized addiction clinics.1,16 Studies aimed at adult populations have shown favourable effects of brief alcohol interventions, and to some extent brief cannabis interventions, in primary care.1922 These interventions have been recommended for adolescent populations.4,5,16 Yet young people have different modes of substance use and communication styles that may limit the extent to which evidence from adult studies can apply to them.Recently, a systematic review of brief interventions to reduce alcohol use in adolescents identified only 1 randomized controlled trial in primary care.23 The tested intervention, not provided by family physicians but involving audio self-assessment, was ineffective in reducing alcohol use in exposed adolescents.24 Sanci and colleagues showed that training family physicians to address health-risk behaviours among adolescents was effective in improving provider performance, but the extent to which this translates into improved outcomes remains unknown.25,26 Two nonrandomized studies suggested screening for substance use and brief advice by family physicians could favour reduced alcohol and cannabis use among adolescents,27,28 but evidence from randomized trials is lacking.29We conducted the PRISM-Ado (Primary care Intervention Addressing Substance Misuse in Adolescents) trial, a cluster randomized controlled trial of the effectiveness of training family physicians to deliver a brief intervention to address binge drinking and excessive cannabis use among young people.  相似文献   

6.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ. 2011;183(18):E1334–E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.25 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to second-hand smoke.2,4Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.611 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.611 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targetting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.1416 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal, 17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.  相似文献   

7.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies. 
Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.  相似文献   

8.

Background:

Uncircumcised boys are at higher risk for urinary tract infections than circumcised boys. Whether this risk varies with the visibility of the urethral meatus is not known. Our aim was to determine whether there is a hierarchy of risk among uncircumcised boys whose urethral meatuses are visible to differing degrees.

Methods:

We conducted a prospective cross-sectional study in one pediatric emergency department. We screened 440 circumcised and uncircumcised boys. Of these, 393 boys who were not toilet trained and for whom the treating physician had requested a catheter urine culture were included in our analysis. At the time of catheter insertion, a nurse characterized the visibility of the urethral meatus (phimosis) using a 3-point scale (completely visible, partially visible or nonvisible). Our primary outcome was urinary tract infection, and our primary exposure variable was the degree of phimosis: completely visible versus partially or nonvisible urethral meatus.

Results:

Cultures grew from urine samples from 30.0% of uncircumcised boys with a completely visible meatus, and from 23.8% of those with a partially or nonvisible meatus (p = 0.4). The unadjusted odds ratio (OR) for culture growth was 0.73 (95% confidence interval [CI] 0.35–1.52), and the adjusted OR was 0.41 (95% CI 0.17–0.95). Of the boys who were circumcised, 4.8% had urinary tract infections, which was significantly lower than the rate among uncircumcised boys with a completely visible urethral meatus (unadjusted OR 0.12 [95% CI 0.04–0.39], adjusted OR 0.07 [95% CI 0.02–0.26]).
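The unadjusted odds ratio above follows directly from the two reported infection proportions, since OR = [p1/(1−p1)] / [p2/(1−p2)]; the adjusted estimate additionally controls for covariates and cannot be recovered this way. A minimal sketch of the arithmetic:

    def odds(p: float) -> float:
        return p / (1 - p)

    def unadjusted_odds_ratio(p_group: float, p_reference: float) -> float:
        """Odds ratio comparing one group's event proportion with a reference group's."""
        return odds(p_group) / odds(p_reference)

    # Partially or nonvisible meatus (23.8% positive cultures) vs. completely visible (30.0%)
    print(round(unadjusted_odds_ratio(0.238, 0.300), 2))  # 0.73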

Interpretation:

We did not see variation in the risk of urinary tract infection with the visibility of the urethral meatus among uncircumcised boys. Compared with circumcised boys, we saw a higher risk of urinary tract infection in uncircumcised boys, irrespective of urethral visibility.Urinary tract infections are one of the most common serious bacterial infections in young children.16 Prompt diagnosis is important, because children with urinary tract infection are at risk for bacteremia6 and renal scarring.1,7 Uncircumcised boys have a much higher risk of urinary tract infection than circumcised boys,1,3,4,6,812 likely as a result of heavier colonization under the foreskin with pathogenic bacteria, which leads to ascending infections.13,14 The American Academy of Pediatrics recently suggested that circumcision status be used to select which boys should be evaluated for urinary tract infection.1 However, whether all uncircumcised boys are at equal risk for infection, or whether the risk varies with the visibility of the urethral opening, is not known. It has been suggested that a subset of uncircumcised boys with a poorly visible urethral opening are at increased risk of urinary tract infection,1517 leading some experts to consider giving children with tight foreskins topical cortisone or circumcision to prevent urinary tract infections.13,1821We designed a study to challenge the opinion that all uncircumcised boys are at increased risk for urinary tract infections. We hypothesized a hierarchy of risk among uncircumcised boys depending on the visibility of the urethral meatus, with those with a partially or nonvisible meatus at highest risk, and those with a completely visible meatus having a level of risk similar to that of boys who have been circumcised. Our primary aim was to compare the proportions of urinary tract infections among uncircumcised boys with a completely visible meatus with those with a partially or nonvisible meatus.  相似文献   

9.

Background:

Previous studies have suggested that the immunochemical fecal occult blood test has superior specificity for detecting bleeding in the lower gastrointestinal tract even if bleeding occurs in the upper tract. We conducted a large population-based study involving asymptomatic adults in Taiwan, a population with prevalent upper gastrointestinal lesions, to confirm this claim.

Methods:

We conducted a prospective cohort study involving asymptomatic people aged 18 years or more in Taiwan recruited to undergo an immunochemical fecal occult blood test, colonoscopy and esophagogastroduodenoscopy between August 2007 and July 2009. We compared the prevalence of lesions in the lower and upper gastrointestinal tracts between patients with positive and negative fecal test results. We also identified risk factors associated with a false-positive fecal test result.

Results:

Of the 2796 participants, 397 (14.2%) had a positive fecal test result. The sensitivity of the test for predicting lesions in the lower gastrointestinal tract was 24.3%, the specificity 89.0%, the positive predictive value 41.3%, the negative predictive value 78.7%, the positive likelihood ratio 2.22, the negative likelihood ratio 0.85 and the accuracy 73.4%. The prevalence of lesions in the lower gastrointestinal tract was higher among those with a positive fecal test result than among those with a negative result (41.3% v. 21.3%, p < 0.001). The prevalence of lesions in the upper gastrointestinal tract did not differ significantly between the two groups (20.7% v. 17.5%, p = 0.12). Almost all of the participants found to have colon cancer (27/28, 96.4%) had a positive fecal test result; in contrast, none of the three found to have esophageal or gastric cancer had a positive fecal test result (p < 0.001). Among those with a negative finding on colonoscopy, the risk factors associated with a false-positive fecal test result were use of antiplatelet drugs (adjusted odds ratio [OR] 2.46, 95% confidence interval [CI] 1.21–4.98) and a low hemoglobin concentration (adjusted OR 2.65, 95% CI 1.62–4.33).
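All of the indices in this paragraph derive from a single 2 × 2 table of fecal test result against colonoscopy finding. The sketch below restates those standard definitions; the cell counts are reconstructed approximately from the reported percentages (they are not given explicitly in the abstract) and are included only to show that the definitions reproduce the quoted figures.

    def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
        """Standard test-performance measures from a 2x2 table."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return {
            "sensitivity": sensitivity,
            "specificity": specificity,
            "positive predictive value": tp / (tp + fp),
            "negative predictive value": tn / (tn + fn),
            "positive likelihood ratio": sensitivity / (1 - specificity),
            "negative likelihood ratio": (1 - sensitivity) / specificity,
            "accuracy": (tp + tn) / (tp + fp + fn + tn),
        }

    # Approximate cells: 164 true positives, 233 false positives,
    # 511 false negatives, 1888 true negatives (2796 participants in total)
    for name, value in diagnostic_indices(164, 233, 511, 1888).items():
        print(f"{name}: {value:.3f}")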

Interpretation:

The immunochemical fecal occult blood test was specific for predicting lesions in the lower gastrointestinal tract. However, the test did not adequately predict lesions in the upper gastrointestinal tract.The fecal occult blood test is a convenient tool to screen for asymptomatic gastrointestinal bleeding.1 When the test result is positive, colonoscopy is the strategy of choice to investigate the source of bleeding.2,3 However, 13%–42% of patients can have a positive test result but a negative colonoscopy,4 and it has not yet been determined whether asymptomatic patients should then undergo evaluation of the upper gastrointestinal tract.Previous studies showed that the frequency of lesions in the upper gastrointestinal tract was comparable or even higher than that of colonic lesions59 and that the use of esophagogastroduodenoscopy may change clinical management.10,11 Some studies showed that evaluation of the upper gastrointestinal tract helped to identify important lesions in symptomatic patients and those with iron deficiency anemia;12,13 however, others concluded that esophagogastroduodenoscopy was unjustified because important findings in the upper gastrointestinal tract were rare1417 and sometimes irrelevant to the results of fecal occult blood testing.1821 This controversy is related to the heterogeneity of study populations and to the limitations of the formerly used guaiac-based fecal occult blood test,520 which was not able to distinguish bleeding in the lower gastrointestinal tract from that originating in the upper tract.The guaiac-based fecal occult blood test is increasingly being replaced by the immunochemical-based test. The latter is recommended for detecting bleeding in the lower gastrointestinal tract because it reacts with human globin, a protein that is digested by enzymes in the upper gastrointestinal tract.22 With this advantage, the occurrence of a positive fecal test result and a negative finding on colonoscopy is expected to decrease.We conducted a population-based study in Taiwan to verify the performance of the immunochemical fecal occult blood test in predicting lesions in the lower gastrointestinal tract and to confirm that results are not confounded by the presence of lesions in the upper tract. In Taiwan, the incidence of colorectal cancer is rapidly increasing, and Helicobacter pylori-related lesions in the upper gastrointestinal tract remain highly prevalent.23 Same-day bidirectional endoscopies are therefore commonly used for cancer screening.24 This screening strategy provides an opportunity to evaluate the performance of the immunochemical fecal occult blood test.  相似文献   

10.
11.

Background:

Many people with depression experience repeated episodes. Previous research into the predictors of chronic depression has focused primarily on the clinical features of the disease; however, little is known about the broader spectrum of sociodemographic and health factors inherent in its development. Our aim was to identify factors associated with a long-term negative prognosis of depression.

Methods:

We included 585 people aged 16 years and older who participated in the 2000/01 cycle of the National Population Health Survey and who reported experiencing a major depressive episode in 2000/01. The primary outcome was the course of depression until 2006/07. We grouped individuals into trajectories of depression using growth trajectory models. We included demographic, mental and physical health factors as predictors in the multivariable regression model to compare people with different trajectories.

Results:

Participants fell into two main depression trajectories: those whose depression resolved and did not recur (44.7%) and those who experienced repeated episodes (55.3%). In the multivariable model, daily smoking (OR 2.68, 95% CI 1.54–4.67), low mastery (i.e., feeling that life circumstances are beyond one’s control) (OR 1.10, 95% CI 1.03–1.18) and history of depression (OR 3.5, 95% CI 1.95–6.27) were significant predictors (p < 0.05) of repeated episodes of depression.

Interpretation:

People with major depression who were current smokers or had low levels of mastery were at an increased risk of repeated episodes of depression. Future studies are needed to confirm the predictive value of these variables and to evaluate their accuracy for diagnosis and as a guide to treatment.Depression is a common and often recurrent disorder that compromises daily functioning and is associated with a decrease in quality of life.13 Guidelines for the treatment of depression, such as those published by the Canadian Network for Mood and Anxiety Treatments (CANMAT)5 and the National Institute for Health and Clinical Excellence (NICE) in the United Kingdom,4 often recommend antidepressant treatment in patients with severe symptoms and outline specific risk factors supporting long-term treatment maintenance.4,5 However, for patients who do not meet the criteria for treatment of depression, the damaging sequelae of depression are frequently compounded without treatment.5 In such cases, early treatment for depression may result in an improved long-term prognosis.68A small but growing number of studies have begun to characterize the long-term course of depression in terms of severity,9 life-time prevalence10 and patterns of recurrence.11 However, a recent systematic review of the risk factors of chronic depression highlighted a need for longitudinal studies to better identify prognostic factors.12 The capacity to distinguish long-term patterns of recurrence of depression in relation to the wide range of established clinical and nonclinical factors for depression could be highly beneficial. Our objective was to use a population-based cohort to identify and understand the baseline factors associated with a long-term negative prognosis of depression.  相似文献   

12.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk.Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.26 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.79Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms between polymyalgia rheumatica and vascular disease. These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.2326 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.  相似文献   

13.
Background:Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.Methods:We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.Results:A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies resulted in lack of statistical significance despite potentially clinically important effects.Interpretation:Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective interventions in reducing the use of imaging for low-back pain. These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.13 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.912 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, with an estimated 65% increase in spine-related expenditures between 1997 and 2005.14Various interventions have been tried for reducing imaging rates among people with low-back pain. These include strategies targeted at the practitioner such as guideline dissemination,1517 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.2224 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.  相似文献   

14.

Background:

The ABCD2 score (Age, Blood pressure, Clinical features, Duration of symptoms and Diabetes) is used to identify patients having a transient ischemic attack who are at high risk for imminent stroke. However, despite its widespread implementation, the ABCD2 score has not yet been prospectively validated. We assessed the accuracy of the ABCD2 score for predicting stroke at 7 (primary outcome) and 90 days.

Methods:

This prospective cohort study enrolled adults from eight Canadian emergency departments who had received a diagnosis of transient ischemic attack. Physicians completed data forms with the ABCD2 score before disposition. The outcome criterion, stroke, was established by a treating neurologist or by an Adjudication Committee. We calculated the sensitivity and specificity for predicting stroke 7 and 90 days after visiting the emergency department using the original “high-risk” cutpoint of an ABCD2 score of more than 5, and the American Heart Association recommendation of a score of more than 2.

Results:

We enrolled 2056 patients (mean age 68.0 yr, 1046 (50.9%) women) who had a rate of stroke of 1.8% at 7 days and 3.2% at 90 days. An ABCD2 score of more than 5 had a sensitivity of 31.6% (95% confidence interval [CI] 19.1–47.5) for stroke at 7 days and 29.2% (95% CI 19.6–41.2) for stroke at 90 days. An ABCD2 score of more than 2 resulted in sensitivity of 94.7% (95% CI 82.7–98.5) for stroke at 7 days with a specificity of 12.5% (95% CI 11.2–14.1). The accuracy of the ABCD2 score as calculated by either the enrolling physician (area under the curve 0.56; 95% CI 0.47–0.65) or the coordinating centre (area under the curve 0.65; 95% CI 0.57–0.73) was poor.

Interpretation:

This multicentre prospective study involving patients in emergency departments with transient ischemic attack found the ABCD2 score to be inaccurate, at any cut-point, as a predictor of imminent stroke. Furthermore, the ABCD2 score of more than 2 that is recommended by the American Heart Association is nonspecific.There are approximately 100 visits to the emergency department per 100 000 population for transient ischemic attack each year.1 Although often considered benign, transient ischemic attack carries a risk of imminent stroke. Studies have shown that the risk of stroke is 0.2%–10% within 7 days of the first transient ischemic attack, and this risk increases to 1.2%–12% at 90 days.29 Stroke continues to be the leading cause of disability among adults and the third-leading cause of death in North America.10,11 Identifying people with transient ischemic attack who are at high risk of stroke is an opportunity to prevent stroke.3,4 However, urgent investigation of all transient ischemic attacks would require substantial resources. Three studies have attempted to develop clinical decision rules (i.e., scores) for assessing whether a patient with transient ischemic attack is at high risk of stroke.9,12,13 Combined, these studies led to the development of the ABCD2 (Age, Blood pressure, Clinical features, Duration of symptoms and Diabetes) score. However, despite its widespread implementation, the ABCD2 score has not yet been prospectively validated.12,1418 This essential step in the development of rules for making clinical predictions has recently been requested.14,1921The objective of this study was to externally validate the ABCD2 score as a tool for identifying patients seen in the emergency department with transient ischemic attack who are at high risk of stroke within 7 (primary outcome) and 90 days (one of the secondary outcomes).  相似文献   
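For readers unfamiliar with the rule, the ABCD2 score is usually tabulated as: age ≥ 60 years (1 point), blood pressure ≥ 140/90 mm Hg (1), clinical features (unilateral weakness 2, speech disturbance without weakness 1), duration of symptoms (≥ 60 min 2, 10–59 min 1) and diabetes (1), for a total of 0 to 7. The sketch below encodes that commonly published point scheme, which is summarized here from the wider literature rather than restated in the abstract above.

    def abcd2_score(age: int, sbp: int, dbp: int, unilateral_weakness: bool,
                    speech_disturbance: bool, duration_min: int, diabetes: bool) -> int:
        """ABCD2 score (0-7) using the commonly published point assignments."""
        score = 0
        if age >= 60:
            score += 1
        if sbp >= 140 or dbp >= 90:
            score += 1
        if unilateral_weakness:
            score += 2
        elif speech_disturbance:
            score += 1
        if duration_min >= 60:
            score += 2
        elif duration_min >= 10:
            score += 1
        if diabetes:
            score += 1
        return score

    # Example: 72-year-old, BP 150/85, isolated speech disturbance, 30-minute episode, no diabetes
    print(abcd2_score(72, 150, 85, False, True, 30, False))  # 4 -> above the AHA cut-point of > 2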

15.

Background:

Although warfarin has been extensively studied in clinical trials, little is known about rates of hemorrhage attributable to its use in routine clinical practice. Our objective was to examine incident hemorrhagic events in a large population-based cohort of patients with atrial fibrillation who were starting treatment with warfarin.

Methods:

We conducted a population-based cohort study involving residents of Ontario (age ≥ 66 yr) with atrial fibrillation who started taking warfarin between Apr. 1, 1997, and Mar. 31, 2008. We defined a major hemorrhage as any visit to hospital for hemorrhage. We determined crude rates of hemorrhage during warfarin treatment, overall and stratified by CHADS2 score (congestive heart failure, hypertension, age ≥ 75 yr, diabetes mellitus and prior stroke, transient ischemic attack or thromboembolism).
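The CHADS2 score used for stratification assigns, in its widely published form, 1 point each for congestive heart failure, hypertension, age ≥ 75 years and diabetes mellitus, and 2 points for prior stroke, transient ischemic attack or thromboembolism. The sketch below encodes that scheme along with the crude-rate calculation; the point values are taken from the general CHADS2 literature, and the rate inputs are illustrative rather than study data.

    def chads2_score(chf: bool, hypertension: bool, age: int, diabetes: bool,
                     prior_stroke_tia_or_te: bool) -> int:
        """CHADS2 score (0-6) using the widely published point values."""
        score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
        score += 2 * int(prior_stroke_tia_or_te)
        return score

    def rate_per_person_year(events: int, person_years: float) -> float:
        """Crude event rate, e.g. hemorrhages per person-year of warfarin exposure."""
        return events / person_years

    # Example: 78-year-old with hypertension and a prior TIA -> CHADS2 score of 4
    print(chads2_score(False, True, 78, False, True))
    # Illustrative crude rate: 38 events over 1000 person-years -> 0.038 (3.8%) per person-year
    print(rate_per_person_year(38, 1000))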

Results:

We included 125 195 patients with atrial fibrillation who started treatment with warfarin during the study period. Overall, the rate of hemorrhage was 3.8% (95% confidence interval [CI] 3.8%–3.9%) per person-year. The risk of major hemorrhage was highest during the first 30 days of treatment. During this period, rates of hemorrhage were 11.8% (95% CI 11.1%–12.5%) per person-year in all patients and 16.7% (95% CI 14.3%–19.4%) per person-year among patients with a CHADS2 score of 4 or greater. Over the 5-year follow-up, 10 840 patients (8.7%) visited the hospital for hemorrhage; of these patients, 1963 (18.1%) died in hospital or within 7 days of being discharged.

Interpretation:

In this large cohort of older patients with atrial fibrillation, we found that rates of hemorrhage are highest within the first 30 days of warfarin therapy. These rates are considerably higher than the rates of 1%–3% reported in randomized controlled trials of warfarin therapy. Our study provides timely estimates of warfarin-related adverse events that may be useful to clinicians, patients and policy-makers as new options for treatment become available.Atrial fibrillation is a major risk factor for stroke and systemic embolism, and strong evidence supports the use of the anticoagulant warfarin to reduce this risk.13 However, warfarin has a narrow therapeutic range and requires regular monitoring of the international normalized ratio to optimize its effectiveness and minimize the risk of hemorrhage.4,5 Although rates of major hemorrhage reported in trials of warfarin therapy typically range between 1% and 3% per person-year,611 observational studies suggest that rates may be considerably higher when warfarin is prescribed outside of a clinical trial setting,1215 approaching 7% per person-year in some studies.1315 The different safety profiles derived from clinical trials and observational data may reflect the careful selection of patients, precise definitions of bleeding and close monitoring in the trial setting. Furthermore, although a few observational studies suggest that hemorrhage rates are higher than generally appreciated, these studies involve small numbers of patients who received care in specialized settings.1416 Consequently, the generalizability of their results to general practice may be limited.More information regarding hemorrhage rates during warfarin therapy is particularly important in light of the recent introduction of new oral anticoagulant agents such as dabigatran, rivaroxaban and apixaban, which may be associated with different outcome profiles.1719 There are currently no large studies offering real-world, population-based estimates of hemorrhage rates among patients taking warfarin, which are needed for future comparisons with new anticoagulant agents once they are widely used in routine clinical practice.20We sought to describe the risk of incident hemorrhage in a large population-based cohort of patients with atrial fibrillation who had recently started warfarin therapy.  相似文献   

16.

Background

Fractures have largely been assessed by their impact on quality of life or health care costs. We conducted this study to evaluate the relation between fractures and mortality.

Methods

A total of 7753 randomly selected people (2187 men and 5566 women) aged 50 years and older from across Canada participated in a 5-year observational cohort study. Incident fractures were identified on the basis of validated self-report and were classified by type (vertebral, pelvic, forearm or wrist, rib, hip and “other”). We subdivided fracture groups by the year in which the fracture occurred during follow-up; those occurring in the fourth and fifth years were grouped together. We examined the relation between the time of the incident fracture and death.

Results

Compared with participants who had no fracture during follow-up, those who had a vertebral fracture in the second year were at increased risk of death (adjusted hazard ratio [HR] 2.7, 95% confidence interval [CI] 1.1–6.6); also at risk were those who had a hip fracture during the first year (adjusted HR 3.2, 95% CI 1.4–7.4). Among women, the risk of death was increased for those with a vertebral fracture during the first year (adjusted HR 3.7, 95% CI 1.1–12.8) or the second year of follow-up (adjusted HR 3.2, 95% CI 1.2–8.1). The risk of death was also increased among women with hip fracture during the first year of follow-up (adjusted HR 3.0, 95% CI 1.0–8.7).

Interpretation

Vertebral and hip fractures are associated with an increased risk of death. Interventions that reduce the incidence of these fractures need to be implemented to improve survival.Osteoporosis-related fractures are a major health concern, affecting a growing number of individuals worldwide. The burden of fracture has largely been assessed by the impact on health-related quality of life and health care costs.1,2 Fractures can also be associated with death. However, studies that have examined the relation between fractures and mortality have had limitations that may influence their results and generalizability, including small samples,3,4 the examination of only 1 type of fracture,410 the inclusion of only women,8,11 the enrolment of participants from specific areas (i.e., hospitals or certain geographic regions),3,4,7,8,10,12 the nonrandom selection of participants311 and the lack of statistical adjustment for confounding factors that may influence mortality.3,57,12We evaluated the relation between incident fractures and mortality over a 5-year period in a cohort of men and women 50 years of age and older. In addition, we examined whether other characteristics of participants were risk factors for death.  相似文献   

17.

Background:

The San Francisco Syncope Rule has been proposed as a clinical decision rule for risk stratification of patients presenting to the emergency department with syncope. It has been validated across various populations and settings. We undertook a systematic review of its accuracy in predicting short-term serious outcomes.

Methods:

We identified studies by means of systematic searches in seven electronic databases from inception to January 2011. We extracted study data in duplicate and used a bivariate random-effects model to assess the predictive accuracy and test characteristics.
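The bivariate random-effects model used here jointly pools sensitivity and specificity while allowing for their correlation, and is usually fitted with generalized linear mixed-model routines. As a rough illustration of random-effects pooling only, and not the bivariate model the review used, the sketch below applies a univariate DerSimonian–Laird estimator to logit-transformed sensitivities; the study counts are hypothetical.

import numpy as np

def pooled_sensitivity_dl(tp, fn):
    """Univariate DerSimonian-Laird pooling of logit-transformed sensitivities."""
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    sens = (tp + 0.5) / (tp + fn + 1.0)            # continuity-corrected sensitivity
    y = np.log(sens / (1 - sens))                  # logit scale
    v = 1 / (tp + 0.5) + 1 / (fn + 0.5)            # approximate variance of the logit
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_re = 1 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    expit = lambda x: 1 / (1 + np.exp(-x))
    return expit(pooled), (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))

# Hypothetical true-positive and false-negative counts from three studies.
print(pooled_sensitivity_dl([40, 22, 95], [4, 6, 12]))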

Results:

We included 12 studies with a total of 5316 patients, of whom 596 (11%) experienced a serious outcome. The prevalence of serious outcomes across the studies varied between 5% and 26%. The pooled estimate of sensitivity of the San Francisco Syncope Rule was 0.87 (95% confidence interval [CI] 0.79–0.93), and the pooled estimate of specificity was 0.52 (95% CI 0.43–0.62). There was substantial between-study heterogeneity (resulting in a 95% prediction interval for sensitivity of 0.55–0.98). The probability of a serious outcome given a negative score with the San Francisco Syncope Rule was 5% or lower, and the probability was 2% or lower when the rule was applied only to patients for whom no cause of syncope was identified after initial evaluation in the emergency department. The most common cause of false-negative classification for a serious outcome was cardiac arrhythmia.
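The probability of a serious outcome after a negative score is the complement of the negative predictive value and can be checked from the pooled sensitivity, specificity and prevalence reported above; the back-of-the-envelope calculation below is only an illustration, not the review's own analysis.

# Post-test probability of a serious outcome given a negative rule result.
sens, spec, prev = 0.87, 0.52, 0.11         # pooled estimates and overall prevalence
p_negative = (1 - sens) * prev + spec * (1 - prev)
post_test = (1 - sens) * prev / p_negative
print(round(post_test, 3))                  # ~0.03, consistent with "5% or lower"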

Interpretation:

The San Francisco Syncope Rule should be applied only for patients in whom no cause of syncope is evident after initial evaluation in the emergency department. Consideration of all available electrocardiograms, as well as arrhythmia monitoring, should be included in application of the San Francisco Syncope Rule. Between-study heterogeneity was likely due to inconsistent classification of arrhythmia.Syncope is defined as sudden, transient loss of consciousness with the inability to maintain postural tone, followed by spontaneous recovery and return to pre-existing neurologic function.15 It represents a common clinical problem, accounting for 1%–3% of visits to the emergency department and up to 6% of admissions to acute care hospitals.6,7Assessment of syncope in patients presenting to the emergency department is challenging because of the heterogeneity of underlying pathophysiologic processes and diseases. Although many underlying causes of syncope are benign, others are associated with substantial morbidity or mortality, including cardiac arrhythmia, myocardial infarction, pulmonary embolism and occult hemorrhage.4,810 Consequently, a considerable proportion of patients with benign causes of syncope are admitted for inpatient evaluation.11,12 Therefore, risk stratification that allows for the safe discharge of patients at low risk of a serious outcome is important for efficient management of patients in emergency departments and for reduction of costs associated with unnecessary diagnostic workup.12,13In recent years, various prediction rules based on the probability of an adverse outcome after an episode of syncope have been proposed.3,1416 However, the San Francisco Syncope Rule, derived by Quinn and colleagues in 2004,3 is the only prediction rule for serious outcomes that has been validated in a variety of populations and settings. This simple, five-step clinical decision rule is intended to identify patients at low risk of short-term serious outcomes3,17 (Box 1).

Box 1:

San Francisco Syncope Rule3

Aim: Prediction of short-term (within 30 days) serious outcomes in patients presenting to the emergency department with syncope.

Definitions:

Syncope: Transient loss of consciousness with return to baseline neurologic function. Trauma-associated and alcohol- or drug-related loss of consciousness are excluded, as are definite seizure and altered mental status.

Serious outcome: Death, myocardial infarction, arrhythmia, pulmonary embolism, stroke, subarachnoid hemorrhage, significant hemorrhage or any condition causing or likely to cause a return visit to the emergency department and admission to hospital for a related event.

Selection of predictors in multivariable analysis: Fifty predictor variables were evaluated for significant associations with a serious outcome and combined to create a minimal set of predictors that are highly sensitive and specific for prediction of a serious outcome.

Clinical decision rule: Five risk factors, indicated by the mnemonic “CHESS,” were identified to predict patients at high risk of a serious outcome:
  • C – History of congestive heart failure
  • H – Hematocrit < 30%
  • E – Abnormal findings on 12-lead ECG or cardiac monitoring17 (new changes or nonsinus rhythm)
  • S – History of shortness of breath
  • S – Systolic blood pressure < 90 mm Hg at triage
Note: ECG = electrocardiogram.
The aim of this study was to conduct a systematic review and meta-analysis of the accuracy of the San Francisco Syncope Rule in predicting short-term serious outcomes for patients presenting to the emergency department with syncope.  相似文献   
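Because Box 1 is a five-item checklist, the rule maps directly onto a simple boolean function: a patient is classified as high risk if any CHESS criterion is present. The sketch below is an illustration only (argument names are invented), not a validated clinical tool.

def chess_high_risk(history_of_chf: bool,
                    hematocrit_percent: float,
                    abnormal_ecg: bool,
                    shortness_of_breath: bool,
                    systolic_bp_mmhg: float) -> bool:
    """Return True if any San Francisco Syncope Rule (CHESS) criterion is met."""
    return (history_of_chf
            or hematocrit_percent < 30
            or abnormal_ecg
            or shortness_of_breath
            or systolic_bp_mmhg < 90)

# Example: a patient whose only abnormality is a hematocrit of 28% is not low risk.
print(chess_high_risk(False, 28.0, False, False, 118.0))  # True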

18.
Gronich N  Lavi I  Rennert G 《CMAJ》2011,183(18):E1319-E1325

Background:

Combined oral contraceptives are a common method of contraception, but they carry a risk of venous and arterial thrombosis. We assessed whether use of drospirenone was associated with an increase in thrombotic risk relative to third-generation combined oral contraceptives.

Methods:

Using computerized records of the largest health care provider in Israel, we identified all women aged 12 to 50 years for whom combined oral contraceptives had been dispensed between Jan. 1, 2002, and Dec. 31, 2008. We followed the cohort until 2009. We used Poisson regression models to estimate the crude and adjusted rate ratios for risk factors for venous thrombotic events (specifically deep vein thrombosis and pulmonary embolism) and arterial thrombotic events (specifically transient ischemic attack and cerebrovascular accident). We performed multivariable analyses to compare types of contraceptives, with adjustment for the various risk factors.
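Crude and adjusted rate ratios of this kind come from a Poisson regression of event counts with person-time as the exposure (its logarithm entering as an offset). A minimal sketch with statsmodels is shown below; the file and column names are hypothetical, and the covariate set in the actual study was broader.

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("use_episodes.csv")  # hypothetical: one row per use episode

# Dummy-code contraceptive type (second generation as reference) and risk factors.
X = pd.get_dummies(df[["contraceptive_type", "age_group", "smoking"]], drop_first=True)
X = sm.add_constant(X.astype(float))

model = sm.GLM(df["venous_event"], X,
               family=sm.families.Poisson(),
               exposure=df["woman_years"])  # log(woman-years) enters as the offset
result = model.fit()
print(np.exp(result.params))               # exponentiated coefficients = rate ratios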

Results:

We identified a total of 1017 (0.24%) venous and arterial thrombotic events among 431 223 use episodes during 819 749 woman-years of follow-up (6.33 venous events and 6.10 arterial events per 10 000 woman-years). In a multivariable model, use of drospirenone carried an increased risk of venous thrombotic events, relative to both third-generation combined oral contraceptives (rate ratio [RR] 1.43, 95% confidence interval [CI] 1.15–1.78) and second-generation combined oral contraceptives (RR 1.65, 95% CI 1.02–2.65). There was no increase in the risk of arterial thrombosis with drospirenone.

Interpretation:

Use of drospirenone-containing oral contraceptives was associated with an increased risk of deep vein thrombosis and pulmonary embolism, but not transient ischemic attack or cerebrovascular accident, relative to second- and third-generation combined oral contraceptives.Oral hormonal therapy is the preferred method of contraception, especially among young women. In the United States in 2002, 12 million women were using “the pill.”1 In a survey of households in Great Britain conducted in 2005 and 2006, one-quarter of women aged 16 to 49 years were using this form of contraception.2 A large variety of combined oral contraceptive preparations are available, differing in terms of estrogen dose and in terms of the dose and type of the progestin component. Among preparations currently in use, the estrogen dose ranges from 15 to 35 μg, and the progestins are second-generation, third-generation or newer. The second-generation progestins (levonorgestrel and norgestrel), which are derivatives of testosterone, have differing degrees of androgenic and estrogenic activities. The structure of these agents was modified to reduce the androgenic activity, thus producing the third-generation progestins (desogestrel, gestodene and norgestimate). Newer progestins are chlormadinone acetate, a derivative of progesterone, and drospirenone, an analogue of the aldosterone antagonist spironolactone with antimineralocorticoid and antiandrogenic activities. Drospirenone is promoted as causing less weight gain and edema than other forms of oral contraceptives, but few well-designed studies have compared the minor adverse effects of these drugs.3The use of oral contraceptives has been reported to confer an increased risk of venous and arterial thrombotic events,47 specifically an absolute risk of venous thrombosis of 6.29 per 10 000 woman-years, compared with 3.01 per 10 000 woman-years among nonusers.8 It has long been accepted that there is a dose–response relationship between estrogen and the risk of venous thrombotic events. Reducing the estrogen dose from 50 μg to 20–30 μg has reduced the risk.9 Studies published since the mid-1990s have suggested a greater risk of venous thrombotic events with third-generation oral contraceptives than with second-generation formulations,1013 indicating that the risk is also progestin-dependent. The pathophysiological mechanism of the risk with different progestins is unknown. A twofold increase in the risk of arterial events (specifically ischemic stroke6,14 and myocardial infarction7) has been observed in case–control studies for users of second-generation pills and possibly also third-generation preparations.7,14Conflicting information is available regarding the risk of venous and arterial thrombotic events associated with drospirenone. An increased risk of venous thromboembolism, relative to second-generation pills, has been reported recently,8,15,16 whereas two manufacturer-sponsored studies claimed no increase in risk.17,18 In the study reported here, we investigated the risk of venous and arterial thrombotic events among users of various oral contraceptives in a large population-based cohort.  相似文献   

19.

Background:

Persistent postoperative pain continues to be an underrecognized complication. We examined the prevalence of and risk factors for this type of pain after cardiac surgery.

Methods:

We enrolled patients scheduled for coronary artery bypass grafting or valve replacement, or both, from Feb. 8, 2005, to Sept. 1, 2009. Validated measures were used to assess (a) preoperative anxiety and depression, tendency to catastrophize in the face of pain, health-related quality of life and presence of persistent pain; (b) pain intensity and interference in the first postoperative week; and (c) presence and intensity of persistent postoperative pain at 3, 6, 12 and 24 months after surgery. The primary outcome was the presence of persistent postoperative pain during 24 months of follow-up.

Results:

A total of 1247 patients completed the preoperative assessment. Follow-up retention rates at 3 and 24 months were 84% and 78%, respectively. The prevalence of persistent postoperative pain decreased significantly over time, from 40.1% at 3 months to 22.1% at 6 months, 16.5% at 12 months and 9.5% at 24 months; the pain was rated as moderate to severe in 3.6% at 24 months. Acute postoperative pain predicted both the presence and severity of persistent postoperative pain. The more intense the pain during the first week after surgery and the more it interfered with functioning, the more likely the patients were to report persistent postoperative pain. Pre-existing persistent pain and increased preoperative anxiety also predicted the presence of persistent postoperative pain.

Interpretation:

Persistent postoperative pain of nonanginal origin after cardiac surgery affected a substantial proportion of the study population. Future research is needed to determine whether interventions to modify certain risk factors, such as preoperative anxiety and the severity of pain before and immediately after surgery, may help to minimize or prevent persistent postoperative pain.Postoperative pain that persists beyond the normal time for tissue healing (> 3 mo) is increasingly recognized as an important complication after various types of surgery and can have serious consequences on patients’ daily living.13 Cardiac surgeries, such as coronary artery bypass grafting (CABG) and valve replacement, rank among the most frequently performed interventions worldwide.4 They aim to improve survival and quality of life by reducing symptoms, including anginal pain. However, persistent postoperative pain of nonanginal origin has been reported in 7% to 60% of patients following these surgeries.523 Such variability is common in other types of major surgery and is due mainly to differences in the definition of persistent postoperative pain, study design, data collection methods and duration of follow-up.13,24Few prospective cohort studies have examined the exact time course of persistent postoperative pain after cardiac surgery, and follow-up has always been limited to a year or less.9,14,25 Factors that put patients at risk of this type of problem are poorly understood.26 Studies have reported inconsistent results regarding the contribution of age, sex, body mass index, preoperative angina, surgical technique, grafting site, postoperative complications or level of opioid consumption after surgery.57,9,13,14,1619,2123,25,27 Only 1 study investigated the role of chronic nonanginal pain before surgery as a contributing factor;21 5 others prospectively assessed the association between persistent postoperative pain and acute pain intensity in the first postoperative week but reported conflicting results.13,14,21,22,25 All of the above studies were carried out in a single hospital and included relatively small samples. None of the studies examined the contribution of psychological factors such as levels of anxiety and depression before cardiac surgery, although these factors have been shown to influence acute or persistent postoperative pain in other types of surgery.1,24,28,29We conducted a prospective multicentre cohort study (the CARD-PAIN study) to determine the prevalence of persistent postoperative pain of nonanginal origin up to 24 months after cardiac surgery and to identify risk factors for the presence and severity of the condition.  相似文献   

20.

Background:

There have been several published reports of inflammatory ocular adverse events, mainly uveitis and scleritis, among patients taking oral bisphosphonates. We examined the risk of these adverse events in a pharmacoepidemiologic cohort study.

Methods:

We conducted a retrospective cohort study involving residents of British Columbia who had visited an ophthalmologist from 2000 to 2007. Within the cohort, we identified all people who were first-time users of oral bisphosphonates and who were followed to the first inflammatory ocular adverse event, death, termination of insurance or the end of the study period. We defined an inflammatory ocular adverse event as scleritis or uveitis. We used a Cox proportional hazard model to determine the adjusted rate ratios. As a sensitivity analysis, we performed a propensity-score–adjusted analysis.
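The description above pairs a covariate-adjusted Cox model with a propensity-score–adjusted sensitivity analysis. A minimal sketch of that two-step approach, using scikit-learn for the propensity score and lifelines for the Cox model, is shown below; all column names are hypothetical, and this is not the authors' code.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

df = pd.read_csv("ophthalmology_cohort.csv")  # hypothetical columns

# Step 1: propensity score for being a first-time bisphosphonate user.
baseline = ["age", "sex", "diabetes", "rheumatoid_arthritis"]
ps_model = LogisticRegression(max_iter=1000).fit(df[baseline], df["bisphosphonate_user"])
df["propensity"] = ps_model.predict_proba(df[baseline])[:, 1]

# Step 2: Cox model for time to the first inflammatory ocular event,
# adjusted for the propensity score.
cph = CoxPHFitter()
cph.fit(df[["time_to_event", "uveitis_or_scleritis",
            "bisphosphonate_user", "propensity"]],
        duration_col="time_to_event", event_col="uveitis_or_scleritis")
cph.print_summary()  # exp(coef) for bisphosphonate_user is the adjusted estimate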

Results:

The cohort comprised 934 147 people, including 10 827 first-time users of bisphosphonates and 923 320 nonusers. The incidence rate among first-time users was 29/10 000 person-years for uveitis and 63/10 000 person-years for scleritis. In contrast, the incidence among people who did not use oral bisphosphonates was 20/10 000 person-years for uveitis and 36/10 000 person-years for scleritis (number needed to harm: 1100 and 370, respectively). First-time users had an elevated risk of uveitis (adjusted relative risk [RR] 1.45, 95% confidence interval [CI] 1.25–1.68) and scleritis (adjusted RR 1.51, 95% CI 1.34–1.68). The propensity-score–adjusted analysis yielded similar estimates (uveitis: RR 1.50, 95% CI 1.29–1.73; scleritis: RR 1.53, 95% CI 1.39–1.70).
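The numbers needed to harm quoted above follow directly from the absolute difference in incidence rates between first-time users and nonusers; the arithmetic below is only a check of how such figures are derived, not the authors' calculation.

# NNH = 1 / (rate in users - rate in nonusers), rates per person-year.
nnh_uveitis = 1 / (29 / 10_000 - 20 / 10_000)      # ~1111, reported as ~1100
nnh_scleritis = 1 / (63 / 10_000 - 36 / 10_000)    # ~370
print(round(nnh_uveitis), round(nnh_scleritis))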

Interpretation:

People using oral bisphosphonates for the first time may be at a higher risk of scleritis and uveitis compared to people with no bisphosphonate use. Patients taking bisphosphonates need to be familiar with the signs and symptoms of these conditions, so that they can immediately seek assessment by an ophthalmologist.Oral bisphosphonates are the most frequently prescribed class of medications for the prevention of osteoporosis. The literature about the safety of bisphosphonates has mainly focused on long-term adverse events, including atypical fractures,1 atrial fibrillation,2 and esophageal and colon cancer.3Uveitis and scleritis are ocular inflammatory diseases that are associated with major morbidity. Anterior uveitis is the most common type of uveitis, with an estimated incidence of 11.4–100.0 cases per 100 000 person-years.4,5 Both diseases require immediate treatment to prevent further complications, which may include cataracts, glaucoma, macular edema and scleral perforation. Numerous case reports and case series have described an association between the use of oral bisphosphonates and anterior uveitis68 and scleritis.8,9 In most reported cases, severe eye pain was reported within days of taking an oral bisphosphonate, and the symptom resolved after stopping the agent.6,9 Only one large epidemiologic study has examined the association between the use of bisphosphonates and ocular inflammatory diseases.10 This study did not find an association, but it was limited by a small number of events and a lack of power. Thus, the association between uveitis or scleritis and the use of oral bisphosphonates is not fully known. Given that early intervention may prevent complications, we performed a pharmacoepidemiologic study to assess the true risk of these potentially serious conditions.  相似文献   
