Similar articles
20 similar articles found.
1.

Background:

The multicomponent serogroup B meningococcal (4CMenB) vaccine was recently licensed for use in Europe. There are currently no data on the persistence of bactericidal antibodies induced by use of this vaccine in infants. Our objective was to evaluate serogroup B–specific bactericidal antibodies in children aged 40–44 months previously vaccinated at 2, 4, 6 and 12 months of age.

Methods:

Participants given 4 doses of 4CMenB as infants received a fifth dose of the vaccine at 40–44 months of age. Age-matched participants who were MenB vaccine–naive received 4CMenB and formed the control group. We evaluated human complement serum bactericidal activity (hSBA) titres at baseline and 1 month after each dose of 4CMenB.

Results:

Before a booster dose at enrolment, 41%–76% of 17 participants previously vaccinated with 4CMenB in infancy had hSBA titres of 4 or greater against 4 reference strains. Before vaccination in the control group (n = 40) these proportions were similar for strains 44/76-SL (63%) and M10713 (68%) but low for strains NZ98/254 (0%) and 5/99 (3%). A booster dose in the 4CMenB-primed participants generated greater increases in hSBA titres than in controls.

Interpretation:

As has been observed with other meningococcal vaccines, bactericidal antibodies waned after vaccination with 4CMenB administered according to an approved infant vaccination schedule of 2, 4, 6 and 12 months of age, but there was an anamnestic response to a booster dose at 40–44 months of age. If 4CMenB were introduced into routine vaccination schedules, assessment of the need for a booster dose would require data on the impact of these declining titres on vaccine effectiveness. Trial registration: ClinicalTrials.gov, no. NCT01027351.

A vaccine against serogroup B meningococcus has recently been licensed for use in Europe1 and is being considered for licensure in Canada. This vaccine, known as multicomponent serogroup B meningococcal (4CMenB) vaccine, consists of 3 recombinant proteins: factor H binding protein (fHbp), Neisseria adhesin A (NadA) and Neisseria heparin binding antigen (NHBA), combined with detoxified outer membrane vesicles from the strain responsible for an epidemic of serogroup B meningococcal disease in New Zealand (NZ98/254). 
Clinical trials of 4CMenB have shown it to be immunogenic against reference strains selected to specifically express one of the vaccine antigens.26 On the basis of these trials, the approved schedule for infants aged 2 to 5 months is 3 doses given at least 1 month apart, with a booster dose given at 12 to 23 months of age.7 The persistence of vaccine-induced antibodies throughout childhood following this booster dose is unknown, but it is particularly relevant because the incidence of invasive serogroup B meningococcal disease in children aged 1 to 4 years is second only to the incidence in children less than 1 year of age.8

In this study, we assessed the persistence of these bactericidal antibodies in children aged 40–44 months who had previously received either 4CMenB or a vaccine containing the recombinant proteins alone (recombinant protein serogroup B meningococcal [rMenB] vaccine) at 2, 4, 6 and 12 months of age.3 We also assessed the immunogenicity and reactogenicity of a booster dose.
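The immunogenicity readout used throughout this abstract is threshold-based: an hSBA titre of 4 or greater is taken as the correlate of protection. A minimal sketch, using made-up titres rather than study data, of how the proportion seroprotected and the geometric mean titre (GMT) are computed from a set of titres:

```python
import math

def summarize_hsba(titres, threshold=4):
    """Summarize hSBA titres: percent with titre >= threshold, plus GMT."""
    n = len(titres)
    seroprotected = sum(1 for t in titres if t >= threshold)
    # The GMT is the antilog of the mean log titre
    gmt = math.exp(sum(math.log(t) for t in titres) / n)
    return {"pct_protected": 100.0 * seroprotected / n, "gmt": gmt}

# Illustrative two-fold dilution titres, not study data
print(summarize_hsba([2, 4, 8, 8, 16, 32, 2, 64]))
```

Proportions such as the 41%–76% reported against the 4 reference strains are exactly this kind of per-strain threshold count.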

2.
Background:Previous investigations have reported that physicians tend to neglect their own health care; however, they may also use their professional knowledge and networks to engage in healthier lifestyles or seek prompt health services. We sought to determine whether the stage at which cancer is diagnosed differs between physicians and nonphysicians.Methods:We conducted a nationwide matched cohort study over a period of 14 years in Taiwan. We accessed data from two national databases: the National Health Insurance Research Database and the Taiwan Cancer Registry File. We collected data on all patients with the 6 most common cancers in Taiwan (hepatoma, lung, colorectal, oral, female breast and cervical cancer) from 1999 to 2012. We excluded patients less than 25 years of age, as well as those with a history of organ transplantation, cancer or AIDS. We used propensity score matching for age, sex, residence and income to select members for the control (nonphysicians) and experimental (physicians) groups at a 5:1 ratio. We used χ2 tests to analyze the distribution of incident cancer stages among physicians and nonphysicians. We compared these associations using multinomial logistic regression. We performed sensitivity analyses for subgroups of doctors and cancers.Results:We identified 274 003 patients with cancer, 542 of whom were physicians. After propensity score matching, we assigned 536 physicians to the experimental group and 2680 nonphysicians to the control group. We found no significant differences in cancer stage distributions between physicians and controls. 
Multinomial logistic regression and sensitivity analyses showed similar cancer stages in most scenarios; however, physicians had a 2.64-fold higher risk of having stage IV cancer at diagnosis in cases of female breast and cervical cancer.Interpretation:In this cohort of physicians in Taiwan, cancer was not diagnosed at earlier stages than in nonphysicians, with the exception of stage IV cancer of the cervix and female breast.

The health of physicians is vital to health care systems. Physicians who are unwell mentally or physically are prone to providing suboptimal patient care.1 Several studies have investigated the risk of cancer for doctors, with inconclusive findings;14 few investigations have addressed whether cancer is diagnosed at earlier stages in physicians. Previous investigations have reported that physicians tend to neglect their own physical examinations and, once sick, delay seeking medical treatment.58 However, doctors may use their own professional knowledge and networks to engage in healthy lifestyles or seek prompt health services in ways that reduce their risk of illness.911 Factors protecting people from advanced cancer stages include attending screening services1214 and access to physicians.15,16 Delayed cancer diagnoses lead to poorer outcomes. We sought to compare the incident cancer stages of the 6 most common cancers between physicians and nonphysicians in Taiwan to determine whether physicians’ cancers were diagnosed at earlier or later stages than nonphysicians’ cancers.
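The 5:1 propensity score matching described in the Methods can be illustrated with a toy greedy nearest-neighbour matcher. This is a sketch of the general technique under assumed inputs (precomputed propensity scores, an arbitrary caliper), not the authors' actual registry pipeline:

```python
def match_controls(cases, controls, ratio=5, caliper=0.05):
    """Greedy nearest-neighbour propensity-score matching without replacement.

    cases, controls: lists of (id, propensity_score) tuples.
    Returns {case_id: [matched control ids]}; cases that cannot be fully
    matched within the caliper are dropped, as in a fixed-ratio design.
    """
    available = dict(controls)               # id -> score, still unmatched
    matched = {}
    for case_id, ps in sorted(cases, key=lambda c: c[1]):
        # Rank remaining controls by distance in propensity score
        picks = sorted(available, key=lambda cid: abs(available[cid] - ps))[:ratio]
        if len(picks) == ratio and all(abs(available[c] - ps) <= caliper for c in picks):
            matched[case_id] = picks
            for c in picks:
                del available[c]             # without replacement
    return matched
```

Dropping incompletely matched cases mirrors the study's reduction from 542 to 536 physicians after matching.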

3.
CMAJ. 2015;187(8):E243–E252
Background:We aimed to prospectively validate a novel 1-hour algorithm using high-sensitivity cardiac troponin T measurement for early rule-out and rule-in of acute myocardial infarction (MI).Methods:In a multicentre study, we enrolled 1320 patients presenting to the emergency department with suspected acute MI. The high-sensitivity cardiac troponin T 1-hour algorithm, incorporating baseline values as well as absolute changes within the first hour, was validated against the final diagnosis. The final diagnosis was adjudicated by 2 independent cardiologists using all available information, including coronary angiography, echocardiography, follow-up data and serial measurements of high-sensitivity cardiac troponin T levels.Results:Acute MI was the final diagnosis in 17.3% of patients. With application of the high-sensitivity cardiac troponin T 1-hour algorithm, 786 (59.5%) patients were classified as “rule-out,” 216 (16.4%) were classified as “rule-in” and 318 (24.1%) were assigned to the “observational zone.” The sensitivity and the negative predictive value for acute MI in the rule-out zone were 99.6% (95% confidence interval [CI] 97.6%–99.9%) and 99.9% (95% CI 99.3%–100%), respectively. The specificity and the positive predictive value for acute MI in the rule-in zone were 95.7% (95% CI 94.3%–96.8%) and 78.2% (95% CI 72.1%–83.6%), respectively. The 1-hour algorithm provided higher negative and positive predictive values than the standard interpretation of high-sensitivity cardiac troponin T using a single cut-off level (both p < 0.05). 
Cumulative 30-day mortality was 0.0%, 1.6% and 1.9% in patients classified in the rule-out, observational and rule-in groups, respectively (p = 0.001).Interpretation:This rapid strategy incorporating high-sensitivity cardiac troponin T baseline values and absolute changes within the first hour substantially accelerated the management of suspected acute MI by allowing safe rule-out as well as accurate rule-in of acute MI in 3 out of 4 patients. Trial registration: ClinicalTrials.gov, NCT00470587.

Acute myocardial infarction (MI) is a major cause of death and disability worldwide. As highly effective treatments are available, early and accurate detection of acute MI is crucial.15 Clinical assessment, 12-lead electrocardiography (ECG) and measurement of cardiac troponin levels form the pillars for the early diagnosis of acute MI in the emergency department. Major advances have recently been achieved by the development of more sensitive cardiac troponin assays.615 High-sensitivity cardiac troponin assays, which allow measurement of even low concentrations of cardiac troponin with high precision, have been shown to largely overcome the sensitivity deficit of conventional cardiac troponin assays within the first hours of presentation in the diagnosis of acute MI.615 These studies have consistently shown that the classic diagnostic interpretation of cardiac troponin as a dichotomous variable (troponin-negative and troponin-positive) no longer seems appropriate, because the positive predictive value for acute MI of being troponin-positive was only about 50%.615 The best way to interpret and clinically use high-sensitivity cardiac troponin levels in the early diagnosis of acute MI is still debated.3,5,7

In a pilot study, a novel high-sensitivity cardiac troponin T 1-hour algorithm was shown to allow accurate rule-out and rule-in of acute MI within 1 hour in up to 75% of patients.11 This algorithm is based on 2 concepts. 
First, high-sensitivity cardiac troponin T is interpreted as a quantitative variable where the proportion of patients who have acute MI increases with increasing concentrations of cardiac troponin T.615 Second, early absolute changes in the concentrations within 1 hour provide incremental diagnostic information when added to baseline levels, with the combination acting as a reliable surrogate for late concentrations at 3 or 6 hours.615 However, many experts remained skeptical regarding the safety of the high-sensitivity cardiac troponin T 1-hour algorithm and its wider applicability.16 Accordingly, this novel diagnostic concept has not been adopted clinically to date. Because the clinical application of this algorithm would represent a profound change in clinical practice, prospective validation in a large cohort is mandatory before it can be considered for routine clinical use. The aim of this multicentre study was to prospectively validate the high-sensitivity cardiac troponin T 1-hour algorithm in a large independent cohort.
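The two concepts above reduce to a simple decision rule on the baseline concentration and the absolute 1-hour change. A sketch of such a classifier; the cut-off values shown are illustrative placeholders, not necessarily the exact validated thresholds from the study:

```python
def classify_hs_ctnt(baseline_ng_l, one_hour_ng_l,
                     rule_out=(12, 3), rule_in=(52, 5)):
    """Classify a patient per a 1-hour hs-cTnT algorithm (ng/L).

    Rule-out: low baseline AND small absolute 1-hour change.
    Rule-in: high baseline OR large absolute 1-hour change.
    Everything else falls into the observational zone.
    Thresholds here are illustrative defaults, not authoritative values.
    """
    delta = abs(one_hour_ng_l - baseline_ng_l)
    if baseline_ng_l < rule_out[0] and delta < rule_out[1]:
        return "rule-out"
    if baseline_ng_l >= rule_in[0] or delta >= rule_in[1]:
        return "rule-in"
    return "observe"
```

Because the rule uses both the level and its short-term dynamics, a modest baseline with a large early change still triggers rule-in, which is what gives the algorithm its speed advantage over single cut-off interpretation.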

4.
5.

Background:

The ratio of revascularization to medical therapy (referred to herein as the revascularization ratio) for the initial treatment of stable ischemic heart disease varies considerably across hospitals. We conducted a comprehensive study to identify patient, physician and hospital factors associated with variations in the revascularization ratio across 18 cardiac centres in the province of Ontario. We also explored whether clinical outcomes differed between hospitals with high, medium and low ratios.

Methods:

We identified all patients in Ontario who had stable ischemic heart disease documented by index angiography performed between Oct. 1, 2008, and Sept. 30, 2011, at any of the 18 cardiac centres in the province. We classified patients by initial treatment strategy (medical therapy or revascularization). Hospitals were classified into equal tertiles based on their revascularization ratio. The primary outcome was all-cause mortality. Patient follow-up was until Dec. 31, 2012. Hierarchical logistic regression models identified predictors of revascularization. Multivariable Cox proportional hazards models, with a time-varying covariate for actual treatment received, were used to evaluate the impact of the revascularization ratio on clinical outcomes.

Results:

Variation in revascularization ratios was twofold across the hospitals. Patient factors accounted for 67.4% of the variation in revascularization ratios. Physician and hospital factors were not significantly associated with the variation. Significant patient-level predictors of revascularization were history of smoking, multivessel disease, high-risk findings on noninvasive stress testing and more severe symptoms of angina (v. no symptoms). Treatment at hospitals with a high revascularization ratio was associated with increased mortality compared with treatment at hospitals with a low ratio (hazard ratio 1.12, 95% confidence interval 1.03–1.21).

Interpretation:

Most of the variation in revascularization ratios across hospitals was warranted, in that it was driven by patient factors. Nonetheless, the variation was associated with potentially important differences in mortality.

Stable ischemic heart disease is a common manifestation of cardiovascular disease, the leading cause of death in the world.1,2 The treatment strategies for stable ischemic heart disease include medical therapy alone or in combination with revascularization by percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG).

A tremendous amount of research has examined the best initial treatment strategy for stable ischemic heart disease.35 Randomized controlled trials have not shown a difference in major adverse events between optimal medical therapy and revascularization.6 Some argue that revascularization should be reserved only for symptom relief.5,7,8 Criteria for the appropriate use of revascularization have been developed to aid in clinical decision-making; however, a substantial proportion of revascularization procedures for stable ischemic heart disease are performed under clinical circumstances deemed as “uncertain.”9,10 Reflecting this uncertainty, there is wide regional variation in the rate of coronary revascularization,1113 which suggests different thresholds for invasive therapy for stable ischemic heart disease.

Studies have predominantly examined the determinants of variations in the type of revascularization modality used.13,14 There is a paucity of data exploring the determinants of variations in the earlier decision to treat with medical therapy alone or with revascularization. A study published nearly a decade ago did not examine outcomes.7 Accordingly, our primary research objective was to determine whether the variations in initial treatment strategies for stable ischemic heart disease are warranted. 
We conducted a comprehensive population-based study to identify patient, physician and hospital factors associated with variations in treatment strategies within 90 days after angiography. We also explored whether clinical outcomes differed between hospitals with high, medium and low ratios of revascularization to medical therapy (hereafter referred to as the revascularization ratio).
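The hospital grouping in the design above, equal tertiles of the revascularization ratio, can be sketched in a few lines. The hospital identifiers are hypothetical and ties are broken by sort order:

```python
def tertile_groups(ratios):
    """Assign hospitals to low/medium/high tertiles of the
    revascularization ratio.

    ratios: dict mapping hospital id -> revascularization ratio.
    Returns a dict of three lists of hospital ids, ascending by ratio.
    """
    ranked = sorted(ratios, key=ratios.get)   # hospital ids, ascending ratio
    k = len(ranked) // 3
    return {"low": ranked[:k], "medium": ranked[k:2 * k], "high": ranked[2 * k:]}
```

With the study's 18 cardiac centres, each tertile would contain exactly 6 hospitals.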

6.
Wan-Jie Gu, Fei Wang, Jing-Chen Liu. CMAJ. 2015;187(3):E101–E109
Background:In anesthetized patients undergoing surgery, the role of lung-protective ventilation with lower tidal volumes is unclear. We performed a meta-analysis of randomized controlled trials (RCTs) to evaluate the effect of this ventilation strategy on postoperative outcomes.Methods:We searched electronic databases from inception through September 2014. We included RCTs that compared protective ventilation with lower tidal volumes and conventional ventilation with higher tidal volumes in anesthetized adults undergoing surgery. We pooled outcomes using a random-effects model. The primary outcome measures were lung injury and pulmonary infection.Results:We included 19 trials (n = 1348). Compared with patients in the control group, those who received lung-protective ventilation had a decreased risk of lung injury (risk ratio [RR] 0.36, 95% confidence interval [CI] 0.17 to 0.78; I2 = 0%) and pulmonary infection (RR 0.46, 95% CI 0.26 to 0.83; I2 = 8%), and higher levels of arterial partial pressure of carbon dioxide (standardized mean difference 0.47, 95% CI 0.18 to 0.75; I2 = 65%). No significant differences were observed between the patient groups in atelectasis, mortality, length of hospital stay, length of stay in the intensive care unit or the ratio of arterial partial pressure of oxygen to fraction of inspired oxygen.Interpretation:Anesthetized patients who received ventilation with lower tidal volumes during surgery had a lower risk of lung injury and pulmonary infection than those given conventional ventilation with higher tidal volumes. 
Implementation of a lung-protective ventilation strategy with lower tidal volumes may lower the incidence of these outcomes.

Estimates suggest that more than 230 million patients undergo major surgical procedures worldwide each year.1 Postoperative pulmonary complications, including lung injury, pneumonia and atelectasis, are common and a major cause of morbidity and death.25 Thus, prevention of these complications has become a high priority of perioperative care.

Mechanical ventilation is mandatory in patients undergoing surgical procedures during general anesthesia. Conventional mechanical ventilation with tidal volumes of 10 to 15 mL/kg has been advocated to prevent hypoxemia and atelectasis in anesthetized patients undergoing surgery.6 However, evidence from experimental and clinical studies suggests that mechanical ventilation, especially the use of high tidal volumes, may cause or aggravate lung injury.79 Mechanical ventilation using high tidal volumes can result in overdistention of alveoli that mainly causes ventilator-associated lung injury.10

Lung-protective ventilation refers to the use of low tidal volumes and moderate to high levels of positive end-expiratory pressure, with or without a recruitment manoeuvre.11 Lung-protective ventilation has been found to reduce morbidity and mortality among patients with acute lung injury and acute respiratory distress syndrome.11,12 However, in anesthetized patients without the syndrome, the role of lung-protective ventilation remains unclear. Two previous meta-analyses addressing similar research questions have been published,13,14 but the inclusion of observational studies compromised the reliability of the results. Recently, randomized controlled trials (RCTs) on the topic have reported conflicting results. We performed a meta-analysis of RCTs to evaluate the effect of lung-protective ventilation with lower tidal volumes on clinical outcomes in patients undergoing surgery.
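The pooled risk ratios and I² values quoted in the Results come from a random-effects model, which weights each trial's log risk ratio by the inverse of its within-study variance plus an estimated between-study variance. A compact DerSimonian–Laird sketch, for illustration only; a real meta-analysis would use a dedicated package:

```python
import math

def dersimonian_laird(log_rrs, variances):
    """Pool log risk ratios with a DerSimonian-Laird random-effects model.

    Returns (pooled RR, 95% CI lower, 95% CI upper, I^2 as a percent).
    """
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se), i2
```

An I² of 0% (as for the lung-injury outcome above) corresponds to Q at or below its degrees of freedom, so the between-study variance estimate collapses to zero and the model reduces to fixed-effect pooling.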

7.

Background:

Practice guidelines recommend that imaging to detect metastatic disease not be performed in the majority of patients with early-stage breast cancer who are asymptomatic. We aimed to determine whether practice patterns in Ontario conform with these recommendations.

Methods:

We used provincial registry data to identify a population-based cohort of Ontario women in whom early-stage, operable breast cancer was diagnosed between 2007 and 2012. We then determined whether imaging of the skeleton, thorax, and abdomen or pelvis had been performed within 3 months of tissue diagnosis. We calculated rates of confirmatory imaging of the same body site.

Results:

Of 26 547 patients with early-stage disease, 22 811 (85.9%) had at least one imaging test, and a total of 83 249 imaging tests were performed (mean of 3.7 imaging tests per patient imaged). Among patients with pathologic stage I and II disease, imaging was performed in 79.6% (10 921/13 724) and 92.7% (11 882/12 823) of cases, respectively. Of all imaging tests, 19 784 (23.8%) were classified as confirmatory investigations. Imaging was more likely for patients who were younger, had greater comorbidity, had tumours of higher grade or stage or had undergone preoperative breast ultrasonography, mastectomy or surgery in the community setting.

Interpretation:

Despite recommendations from multiple international guidelines, most Ontario women with early-stage breast cancer underwent imaging to detect distant metastases. Inappropriate imaging in asymptomatic patients with early-stage disease is costly and may lead to harm. The use of population datasets will allow investigators to evaluate whether or not strategies to implement practice guidelines lead to meaningful and sustained change in physician practice.

Most women with newly diagnosed breast cancer present with early-stage, potentially curable disease.1 Among patients whose disease is restricted to the breast and axillary lymph nodes, without signs or symptoms of metastatic disease, the likelihood of having radiologically evident metastases in pathologic stage I and II disease is about 0.2% and 1.2%, respectively.2 This low frequency has not changed significantly, even with the increasing use of magnetic resonance imaging (MRI) and positron emission tomography.2,3 For this reason, most provincial, national and international guidelines do not recommend imaging for all patients with early-stage breast cancer who are asymptomatic.48

Despite these evidence-based guidelines, imaging for distant metastases in patients with a new diagnosis of breast cancer remains common.2,912 In response to the Choosing Wisely campaign of the American Board of Internal Medicine Foundation,13 the American Society of Clinical Oncology (ASCO) published its inaugural “top 5” list for choosing wisely in oncology.14 It recommended against routine imaging for staging purposes in women with early breast cancer, because “such imaging adds little benefit to patient care and has the potential to cause harm.”14 In 2014, Choosing Wisely Canada was launched in an effort to encourage physicians and patients to engage in conversations about unnecessary tests, treatments and procedures, to help ensure that patients receive the highest-quality care.15 The ASCO Choosing Wisely recommendation14 is similar to the 
Cancer Care Ontario guideline,4 which has been in existence for over a decade. Whereas ASCO in its Choosing Wisely campaign recommends no imaging for patients with stage I or II disease, the Cancer Care Ontario guideline recommends no imaging for patients with stage I disease and a bone scan for those with stage II disease. A recent study at a large Canadian academic cancer centre showed that, despite publication of both a provincial guideline and the ASCO recommendations, most patients with primary operable (early-stage) breast cancer undergo imaging for distant metastases.10 We hypothesized that despite the provincial guideline, this practice may be more widespread. We undertook this population-based study to determine whether physician practice patterns in Ontario regarding imaging of patients with early-stage breast cancer are in keeping with the published Cancer Care Ontario guideline.

8.

Background:

Evidence suggests that migrant groups have an increased risk of psychotic disorders and that the level of risk varies by country of origin and host country. Canadian evidence is lacking on the incidence of psychotic disorders among migrants. We sought to examine the incidence of schizophrenia and schizoaffective disorders in first-generation immigrants and refugees in the province of Ontario, relative to the general population.

Methods:

We constructed a retrospective cohort that included people aged 14–40 years residing in Ontario as of Apr. 1, 1999. Population-based administrative data from physician billings and hospital admissions were linked to data from Citizenship and Immigration Canada. We used Poisson regression models to calculate age- and sex-adjusted incidence rate ratios (IRRs) and 95% confidence intervals (CIs) for immigrant and refugee groups over a 10-year period.

Results:

In our cohort (n = 4 284 694), we found higher rates of psychotic disorders among immigrants from the Caribbean and Bermuda (IRR 1.60, 95% CI 1.29–1.98). Lower rates were found among immigrants from northern Europe (IRR 0.50, 95% CI 0.28–0.91), southern Europe (IRR 0.60, 95% CI 0.41–0.90) and East Asia (IRR 0.56, 95% CI 0.41–0.78). Refugee status was an independent predictor of risk among all migrants (IRR 1.27, 95% CI 1.04–1.56), and higher rates were found specifically for refugees from East Africa (IRR 1.95, 95% CI 1.44–2.65) and South Asia (IRR 1.51, 95% CI 1.08–2.12).

Interpretation:

The differential pattern of risk across ethnic subgroups in Ontario suggests that psychosocial and cultural factors associated with migration may contribute to the risk of psychotic disorders. Some groups may be more at risk, whereas others are protected.

Meta-analytic reviews suggest that international migrants have a two- to threefold increased risk of psychosis compared with the host population, and the level of risk varies by country of origin and host country.1,2 This increased risk may persist into the second and third generations.2,3 Incidence rates are not typically found to be elevated in the country of origin;47 therefore, it is believed that the migratory or postmigration experience may play a role in the etiology.

The migration-related emergence of psychotic disorders is a potential concern in Canada, which receives about 250 000 new immigrants and refugees each year.8 However, there is a notable lack of current epidemiological information on the incidence of psychosis among these groups.9 Hospital admission data from the early 1900s suggest that European migrants to British Columbia had a higher incidence of schizophrenia than the general population,10 and more recent data from Ontario suggest higher rates of hospital admission for psychotic disorders in areas with a large proportion of first-generation migrants.11 The fact that a large and increasing proportion of Canada’s population are migrants has been cited as a potential explanation for the higher prevalence of schizophrenia compared with international estimates.12

The province of Ontario is home to the largest number of migrants in Canada, with first-generation migrants constituting nearly 30% of the population. Canada operates on a human capital model of immigration, using a points-based system that favours younger age, higher education, and proficiency in English or French. 
Nearly 60% of all newcomers to Canada are economic migrants, 27% are sponsored by a relative living in Canada, and 13% are refugees or temporary workers.8 Canada also requires a prearrival medical examination, but less than 0.001% of all applications are denied on medical grounds, and exemptions may be granted for refugees and some family-reunification applicants.13

The Canadian migration process differs from that of many countries where the association between migration and psychotic disorders has been previously investigated.1,2 In most of these countries, migrants generally originate from a smaller number of countries that have historic ties to the host country, and there tends to be a low proportion of refugees, although these processes have changed in recent years. In Canada, migrants come from a wide array of countries, admission policies focus on migrants with professional skills and there is a larger proportion of refugees. Few studies to date have examined the role of refugee status in the risk of psychotic disorders14 or have assessed all of the migrant groups within a country, because most studies focus on particular groups considered to be at high risk.1 An examination of migrants to Canada offers a unique opportunity to investigate the risk of psychotic disorders in a group with diverse geographical origins, and the larger proportion of refugees also allows us to investigate their risk separately from immigrant groups. Thus, the breadth, scope and scale of migration to Canada over time offers a diverse and deep population for advancing our understanding of why some groups may have a higher risk of psychotic disorders.

Our primary objective was to examine the incidence of schizophrenia and schizoaffective disorders over a 10-year period in first-generation immigrants and refugees in Ontario, relative to the general population. 
We also compared the incidence among specific migrant groups, stratified by country of birth and refugee status, because research suggests differences in the degree and direction of risk.1,2 We restricted the sample to first-generation migrants to estimate the extent to which sociodemographic factors had an impact on the risk of schizophrenia and schizoaffective disorders among all migrants.
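The IRRs quoted above are age- and sex-adjusted estimates from Poisson regression, but the underlying quantity is just a ratio of incidence rates. A crude version of the calculation, with a Wald confidence interval on the log scale, is straightforward (the counts below are made up for illustration, not study data):

```python
import math

def incidence_rate_ratio(cases_exp, py_exp, cases_ref, py_ref, z=1.96):
    """Crude incidence rate ratio with a 95% Wald CI on the log scale.

    cases_*: incident case counts; py_*: person-years at risk.
    This unadjusted version only shows where the numbers come from;
    the study's estimates additionally adjust for age and sex.
    """
    irr = (cases_exp / py_exp) / (cases_ref / py_ref)
    se = math.sqrt(1.0 / cases_exp + 1.0 / cases_ref)   # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi
```

When the confidence interval excludes 1 (as for the Caribbean and Bermuda estimate of 1.60, 95% CI 1.29–1.98), the rate in the migrant group differs significantly from the reference population.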

9.

Background:

We evaluated a large-scale transition of primary care physicians to blended capitation models and team-based care in Ontario, Canada, to understand the effect of each type of reform on the management and prevention of chronic disease.

Methods:

We used population-based administrative data to assess monitoring of diabetes mellitus and screening for cervical, breast and colorectal cancer among patients belonging to team-based capitation, non–team-based capitation or enhanced fee-for-service medical homes as of Mar. 31, 2011 (n = 10 675 480). We used Poisson regression models to examine these associations for 2011. We then used a fitted nonlinear model to compare changes in outcomes between 2001 and 2011 by type of medical home.

Results:

In 2011, patients in a team-based capitation setting were more likely than those in an enhanced fee-for-service setting to receive diabetes monitoring (39.7% v. 31.6%, adjusted relative risk [RR] 1.22, 95% confidence interval [CI] 1.18 to 1.25), mammography (76.6% v. 71.5%, adjusted RR 1.06, 95% CI 1.06 to 1.07) and colorectal cancer screening (63.0% v. 60.9%, adjusted RR 1.03, 95% CI 1.02 to 1.04). Over time, patients in medical homes with team-based capitation experienced the greatest improvement in diabetes monitoring (absolute difference in improvement 10.6% [95% CI 7.9% to 13.2%] compared with enhanced fee for service; 6.4% [95% CI 3.8% to 9.1%] compared with non–team-based capitation) and cervical cancer screening (absolute difference in improvement 7.0% [95% CI 5.5% to 8.5%] compared with enhanced fee for service; 5.3% [95% CI 3.8% to 6.8%] compared with non–team-based capitation). For breast and colorectal cancer screening, there were no significant differences in change over time between different types of medical homes.

Interpretation:

The shift to capitation payment and the addition of team-based care in Ontario were associated with moderate improvements in processes related to diabetes care, but the effects on cancer screening were less clear.

Health care systems with a strong primary care orientation have better health outcomes, lower costs and fewer disparities across population subgroups.1 Countries around the world have been experimenting with reforms to improve the delivery of primary care, changing the way physicians are organized and paid. In the United States, several national organizations2,3 and policy experts4,5 have advocated a shift away from fee for service toward capitation or blended payments, and in 2015, the Centers for Medicare and Medicaid Services brought in blended payment in primary care, introducing a non–visit-based payment for chronic care management.6

Patient-centred medical homes have provided an opportunity to transition physicians to new payment models, but they also necessitate changes in care delivery, including incorporation of team-based care, enhancement of access for patients, coordination of care and a focus on quality and safety.79 Evidence suggests that team-based care is a particularly important element in improving the management and prevention of chronic disease and reducing related costs.10 Early evaluations of patient-centred medical home pilots were promising,11,12 but a recent study of large-scale implementation showed limited improvements in the quality of chronic disease care and no reduction in health care utilization or total costs over 3 years.13

Before 2002, primary care physicians in Ontario, Canada, were almost universally paid through a fee-for-service system. 
Over the past decade, more than three-quarters have transitioned to patient-centred medical homes.14,15 About half of Ontario physicians working in patient-centred medical homes have shifted to blended capitation payments, with a portion of these physicians working in groups that also receive government funding for nonphysician health professionals to enable team-based care. However, about 40% of physicians in patient-centred medical homes still receive most of their income through fee-for-service payments. This natural health policy experiment offers a unique opportunity to compare the effectiveness of different payment models and team-based care. Early studies have shown small differences in the quality of cardiovascular16 and diabetes mellitus17 care between physicians receiving capitation payments and those receiving fee-for-service payments, but no studies have assessed changes in quality of care over time. We evaluated a large-scale transition of primary care physicians to blended capitation models and team-based care to understand the effect of each type of reform on chronic disease management and prevention over time.

10.
11.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ 2011;183(18):E1334–E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke. Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.2–5 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to secondhand smoke.2,4 Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.6–11 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.6–11 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.14–16 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19 Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22 We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

12.

Background:

Modifiable behaviours during early childhood may provide opportunities to prevent disease processes before adverse outcomes occur. Our objective was to determine whether young children’s eating behaviours were associated with increased risk of cardiovascular disease in later life.

Methods:

In this cross-sectional study involving children aged 3–5 years recruited from 7 primary care practices in Toronto, Ontario, we assessed the relation between eating behaviours as assessed by the NutriSTEP (Nutritional Screening Tool for Every Preschooler) questionnaire (completed by parents) and serum levels of non–high-density lipoprotein (HDL) cholesterol, a surrogate marker of cardiovascular risk. We also assessed the relation between dietary intake and serum non-HDL cholesterol, and between eating behaviours and other laboratory indices of cardiovascular risk (low-density lipoprotein [LDL] cholesterol, apolipoprotein B, HDL cholesterol and apolipoprotein A1).

Results:

A total of 1856 children were recruited from primary care practices in Toronto. Of these, we included in our study the 1076 children for whom complete data and blood samples were available for analysis. The eating behaviours subscore of the NutriSTEP tool was significantly associated with serum non-HDL cholesterol (p = 0.03); for each unit increase in the eating behaviours subscore suggesting greater nutritional risk, we saw an increase of 0.02 mmol/L (95% confidence interval [CI] 0.002 to 0.05) in serum non-HDL cholesterol. The eating behaviours subscore was also associated with LDL cholesterol and apolipoprotein B, but not with HDL cholesterol or apolipoprotein A1. The dietary intake subscore was not associated with non-HDL cholesterol.

Interpretation:

Eating behaviours in preschool-aged children are important potentially modifiable determinants of cardiovascular risk and should be a focus for future studies of screening and behavioural interventions. Modifiable behaviours during early childhood may provide opportunities to prevent later chronic diseases, in addition to the behavioural patterns that contribute to them, before adverse outcomes occur. There is evidence that behavioural interventions during early childhood (e.g., ages 3–5 yr) can promote healthy eating.1 For example, repeated exposure to vegetables increases vegetable preference and intake,2 entertaining presentations of fruits (e.g., in the shape of a boat) increase their consumption,3 discussing internal satiety cues with young children reduces snacking,4 serving carrots before the main course (as opposed to with the main course) increases carrot consumption,5 and positive modelling of the consumption of healthy foods increases their intake by young children.6,7 Responsive eating behavioural styles in which children are given access to healthy foods and allowed to determine the timing and pace of eating in response to internal cues with limited distractions, such as those from television, have been recommended by the Institute of Medicine.8 Early childhood is a critical period for assessing the origins of cardiometabolic disease and implementing preventive interventions.8 However, identifying behavioural risk factors for cardiovascular disease during early childhood is challenging, because signs of disease can take decades to appear.
One emerging surrogate marker for later cardiovascular risk is the serum concentration of non–high-density lipoprotein (HDL) cholesterol (or total cholesterol minus HDL cholesterol).9–12 The Young Finns Study found an association between non-HDL cholesterol levels during childhood (ages 3–18 yr) and an adult measure of atherosclerosis (carotid artery intima–media thickness), although this relation was not significant for the subgroup of younger female children (ages 3–9 yr).10,11 The Bogalusa Heart Study, which included a subgroup of children aged 2–15 years, found an association between low-density lipoprotein (LDL) cholesterol concentration (which is highly correlated with non-HDL cholesterol) and asymptomatic atherosclerosis at autopsy.12 The American Academy of Pediatrics recommends serum non-HDL cholesterol concentration as the key measure for cardiovascular risk screening, specifically as the dyslipidemia screening test for children aged 9–11 years.9 Cardiovascular risk stratification tools such as the Reynolds Risk Score (www.reynoldsriskscore.org) and the Framingham Heart Study coronary artery disease 10-year risk calculator (www.framinghamheartstudy.org/risk) for adults do not enable directed interventions when cardiovascular disease processes begin — during childhood. The primary objective of our study was to determine whether eating behaviours at 3–5 years of age, as assessed by the NutriSTEP (Nutritional Screening for Every Preschooler) questionnaire,13,14 are associated with non-HDL cholesterol levels, a surrogate marker of cardiovascular risk. Our secondary objectives were to determine whether other measures of nutritional risk, such as dietary intake, were associated with non-HDL cholesterol levels and whether eating behaviours are associated with other cardiovascular risk factors, such as LDL cholesterol, apolipoprotein B, HDL cholesterol and apolipoprotein A1.
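The marker defined above is a simple arithmetic quantity: total cholesterol minus HDL cholesterol. A minimal sketch, with hypothetical input values that are not from the study:

```python
def non_hdl_cholesterol(total_chol_mmol_l: float, hdl_mmol_l: float) -> float:
    """Non-HDL cholesterol: total cholesterol minus HDL cholesterol (mmol/L)."""
    return total_chol_mmol_l - hdl_mmol_l

# Hypothetical illustration values (not data from the study):
example = non_hdl_cholesterol(4.5, 1.2)  # 3.3 mmol/L
```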

13.
14.

Background:

Disturbance of the sleep–wake cycle is a characteristic of delirium. In addition, changes in melatonin rhythm influence the circadian rhythm and are associated with delirium. We compared the effect of melatonin and placebo on the incidence and duration of delirium.

Methods:

We performed this multicentre, double-blind, randomized controlled trial between November 2008 and May 2012 in 1 academic and 2 nonacademic hospitals. Patients aged 65 years or older who were scheduled for acute hip surgery were eligible for inclusion. Patients received melatonin 3 mg or placebo in the evening for 5 consecutive days, starting within 24 hours after admission. The primary outcome was incidence of delirium within 8 days of admission. We also monitored the duration of delirium.

Results:

A total of 452 patients were randomly assigned to the 2 study groups. We subsequently excluded 74 patients for whom the primary end point could not be measured or who had delirium before the second day of the study. After these postrandomization exclusions, data for 378 patients were included in the main analysis. The overall mean age was 84 years, 238 (63.0%) of the patients lived at home before admission, and 210 (55.6%) had cognitive impairment. We observed no effect of melatonin on the incidence of delirium: 55/186 (29.6%) in the melatonin group v. 49/192 (25.5%) in the placebo group; difference 4.1 (95% confidence interval −5.0 to 13.1) percentage points. There were no between-group differences in mortality or in cognitive or functional outcomes at 3-month follow-up.
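The between-group comparison above is a difference of two proportions. A sketch of the calculation, assuming a standard Wald (normal-approximation) interval (the trial may have used a different interval method, so the bounds are approximate):

```python
from math import sqrt

# Risk difference between melatonin (55/186) and placebo (49/192) groups,
# with an approximate 95% Wald confidence interval. This reproduces the
# reported difference of about 4.1 percentage points; exact CI bounds
# depend on the interval method the trial actually used.
def risk_difference_ci(e1: int, n1: int, e2: int, n2: int, z: float = 1.96):
    p1, p2 = e1 / n1, e2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = risk_difference_ci(55, 186, 49, 192)
```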

Interpretation:

In this older population with hip fracture, treatment with melatonin did not reduce the incidence of delirium. Trial registration: Netherlands Trial Registry, NTR1576: MAPLE (Melatonin Against PLacebo in Elderly patients) study; www.trialregister.nl/trialreg/admin/rctview.asp?TC=1576 Delirium in older inpatients is associated with a high risk of dementia and other complications that translate into increased mortality and health care costs.1,2 The antipsychotic haloperidol has historically been the agent of choice for treating delirium, and it has increasingly been administered as a prophylactic for delirium or to reduce symptoms such as hallucinations and aggressive behaviour.3,4 However, all antipsychotic treatments may induce serious cerebrovascular adverse effects and greater mortality, particularly among patients with dementia.5,6 These effects led the US Food and Drug Administration to issue a serious warning against their use.7 In addition, benzodiazepines are still frequently used to treat delirium, despite their being known to elicit or aggravate delirium.8,9 Disturbances of the circadian sleep–wake cycle represent one of the core features of delirium,10 leading to the hypothesis that the neurotransmitter melatonin and changes in its metabolism may be involved in the pathogenesis of delirium.11,12 Objective measurements have shown that melatonin metabolism is disturbed after abdominal and other types of surgery, insomnia, sleep deprivation and stays in the intensive care unit (ICU), all of which are also known to be factors that contribute to delirium.13–16 These characteristics suggest an association between melatonin abnormalities and delirium.17–22 Although proof of a causal relation is still lacking, inpatients might nevertheless benefit from melatonin supplementation therapy through postoperative maintenance or restoration of their sleep–wake cycle.23–25 Although melatonin depletion is thought to be one of the mechanisms of delirium, few studies have investigated the effects of altering perioperative plasma concentrations of melatonin, in particular, the possible effects on postoperative delirium. The primary objective of this study was to assess the effects of melatonin on the incidence of delirium among elderly patients admitted to hospital as an emergency following hip fracture. Secondary outcomes were duration and severity of delirium, length of hospital stay, total doses of haloperidol and benzodiazepines administered to patients with delirium, mortality during the hospital stay, and functional status, cognitive function and mortality at 3-month follow-up.

15.

Background:

Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults. Other inflammatory rheumatologic disorders are associated with an excess risk of vascular disease. We investigated whether polymyalgia rheumatica is associated with an increased risk of vascular events.

Methods:

We used the General Practice Research Database to identify patients with a diagnosis of incident polymyalgia rheumatica between Jan. 1, 1987, and Dec. 31, 1999. Patients were matched by age, sex and practice with up to 5 patients without polymyalgia rheumatica. Patients were followed until their first vascular event (cardiovascular, cerebrovascular, peripheral vascular) or the end of available records (May 2011). All participants were free of vascular disease before the diagnosis of polymyalgia rheumatica (or matched date). We used Cox regression models to compare time to first vascular event in patients with and without polymyalgia rheumatica.

Results:

A total of 3249 patients with polymyalgia rheumatica and 12 735 patients without were included in the final sample. Over a median follow-up period of 7.8 (interquartile range 3.3–12.4) years, the rate of vascular events was higher among patients with polymyalgia rheumatica than among those without (36.1 v. 12.2 per 1000 person-years; adjusted hazard ratio 2.6, 95% confidence interval 2.4–2.9). The increased risk of a vascular event was similar for each vascular disease end point. The magnitude of risk was higher in early disease and in patients younger than 60 years at diagnosis.
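The event rates above are expressed per 1000 person-years of follow-up. A sketch of how such rates, and the crude rate ratio they imply, are computed (the article's hazard ratio of 2.6 comes from an adjusted Cox model, so it is expected to differ from the crude ratio):

```python
# Person-time incidence rate: events divided by accumulated person-years,
# scaled to a per-1000-person-years rate.
def rate_per_1000_py(events: float, person_years: float) -> float:
    return 1000 * events / person_years

# Hypothetical illustration: 361 events over 10 000 person-years
# gives the 36.1 per 1000 person-years reported for the exposed group.
example_rate = rate_per_1000_py(361, 10_000)

# Crude rate ratio implied by the two reported rates (36.1 v. 12.2):
crude_rate_ratio = 36.1 / 12.2  # about 2.96, v. adjusted HR 2.6
```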

Interpretation:

Patients with polymyalgia rheumatica have an increased risk of vascular events. This risk is greatest in the youngest age groups. As with other forms of inflammatory arthritis, patients with polymyalgia rheumatica should have their vascular risk factors identified and actively managed to reduce this excess risk. Inflammatory rheumatologic disorders such as rheumatoid arthritis,1,2 systemic lupus erythematosus,2,3 gout,4 psoriatic arthritis2,5 and ankylosing spondylitis2,6 are associated with an increased risk of vascular disease, especially cardiovascular disease, leading to substantial morbidity and premature death.2–6 Recognition of this excess vascular risk has led to management guidelines advocating screening for and management of vascular risk factors.7–9 Polymyalgia rheumatica is one of the most common inflammatory rheumatologic conditions in older adults,10 with a lifetime risk of 2.4% for women and 1.7% for men.11 To date, evidence regarding the risk of vascular disease in patients with polymyalgia rheumatica is unclear. There are a number of biologically plausible mechanisms linking polymyalgia rheumatica and vascular disease.
These include the inflammatory burden of the disease,12,13 the association of the disease with giant cell arteritis (causing an inflammatory vasculopathy, which may lead to subclinical arteritis, stenosis or aneurysms),14 and the adverse effects of long-term corticosteroid treatment (e.g., diabetes, hypertension and dyslipidemia).15,16 Paradoxically, however, use of corticosteroids in patients with polymyalgia rheumatica may actually decrease vascular risk by controlling inflammation.17 A recent systematic review concluded that although some evidence exists to support an association between vascular disease and polymyalgia rheumatica,18 the existing literature presents conflicting results, with some studies reporting an excess risk of vascular disease19,20 and vascular death,21,22 and others reporting no association.23–26 Most current studies are limited by poor methodologic quality and small samples, and are based on secondary care cohorts, who may have more severe disease, yet most patients with polymyalgia rheumatica receive treatment exclusively in primary care.27 The General Practice Research Database (GPRD), based in the United Kingdom, is a large electronic system for primary care records. It has been used as a data source for previous studies,28 including studies on the association of inflammatory conditions with vascular disease29 and on the epidemiology of polymyalgia rheumatica in the UK.30 The aim of the current study was to examine the association between polymyalgia rheumatica and vascular disease in a primary care population.

16.

Background

Little is known about the incidence and causes of heparin-induced skin lesions. The 2 most commonly reported causes of heparin-induced skin lesions are immune-mediated heparin-induced thrombocytopenia and delayed-type hypersensitivity reactions.

Methods

We prospectively examined consecutive patients who received subcutaneous heparin (most often enoxaparin or nadroparin) for the presence of heparin-induced skin lesions. If such lesions were identified, we performed a skin biopsy, platelet count measurements, and antiplatelet-factor 4 antibody and allergy testing.

Results

We enrolled 320 patients. In total, 24 patients (7.5%, 95% confidence interval [CI] 4.7%–10.6%) had heparin-induced skin lesions. Delayed-type hypersensitivity reactions were identified as the cause in all 24 patients. One patient with histopathologic evidence of delayed-type hypersensitivity tested positive for antiplatelet-factor 4 antibodies. We identified the following risk factors for heparin-induced skin lesions: a body mass index greater than 25 (odds ratio [OR] 4.6, 95% CI 1.7–15.3), duration of heparin therapy longer than 9 days (OR 5.9, 95% CI 1.9–26.3) and female sex (OR 3.0, 95% CI 1.1–8.8).

Interpretation

Heparin-induced skin lesions are relatively common, have identifiable risk factors and are commonly caused by a delayed-type hypersensitivity reaction (type IV allergic response). (ClinicalTrials.gov trial register no. NCT00510432.) Heparin has been used as an anticoagulant for over 60 years.1 Well-known adverse effects of heparin therapy are bleeding, osteoporosis, hair loss, and immune and nonimmune heparin-induced thrombocytopenia. The incidence of heparin-induced skin lesions is unknown, despite their being increasingly reported.2–4 Heparin-induced skin lesions may be caused by at least 5 mechanisms: delayed-type (type IV) hypersensitivity responses,2,4–6 immune-mediated thrombocytopenia,3 type I allergic reactions,7,8 skin necrosis9 and pustulosis.10 Heparin-induced skin lesions may indicate the presence of life-threatening heparin-induced thrombocytopenia11 — even in the absence of thrombocytopenia.3 Given the rising number of reports of heparin-induced skin lesions, the lack of data on their incidence and causes, and the importance of correctly diagnosing this condition, we sought to determine the incidence of heparin-induced skin lesions.

17.

Background:

Although guidelines advise titration of palliative sedation at the end of life, in practice the depth of sedation can range from mild to deep. We investigated physicians’ considerations about the depth of continuous sedation.

Methods:

We performed a qualitative study in which 54 physicians underwent semistructured interviewing about the last patient for whom they had been responsible for providing continuous palliative sedation. We also asked about their practices and general attitudes toward sedation.

Results:

We found two approaches toward the depth of continuous sedation: starting with mild sedation and only increasing the depth if necessary, and deep sedation right from the start. Physicians described similar determinants for both approaches, including titration of sedatives to the relief of refractory symptoms, patient preferences, wishes of relatives, expert advice and esthetic consequences of the sedation. However, physicians who preferred starting with mild sedation emphasized being guided by the patient’s condition and response, and physicians who preferred starting with deep sedation emphasized ensuring that relief of suffering would be maintained. Physicians who preferred each approach also expressed different perspectives about whether patient communication was important and whether waking up after sedation is started was problematic.

Interpretation:

Physicians who choose either mild or deep sedation appear to be guided by the same objective of delivering sedation in proportion to the relief of refractory symptoms, as well as other needs of patients and their families. This suggests that proportionality should be seen as a multidimensional notion that can result in different approaches toward the depth of sedation. Palliative sedation is considered to be an appropriate option when other treatments fail to relieve suffering in dying patients.1,2 There are important questions associated with this intervention, such as how deep the sedation must be to relieve suffering and how important it is for patients and their families for the patient to maintain a certain level of consciousness.1 In the national guidelines for the Netherlands, palliative sedation is defined as “the intentional lowering of consciousness of a patient in the last phase of life.”3,4 Sedatives can be administered intermittently or continuously, and the depth of palliative sedation can range from mild to deep.1,5 Continuous deep sedation until death is considered the most far-reaching and controversial type of palliative sedation. Nevertheless, it is used frequently: comparable nationwide studies in Europe show frequencies of 2.5% to 16% of all deaths.6–8 An important reason for continuous deep sedation being thought of as controversial is the possible association of this practice with the hastening of death,9–11 although it is also argued that palliative sedation does not shorten life when its use is restricted to the patient’s last days of life.12,13 Guidelines for palliative sedation often advise physicians to titrate sedatives,2,3,14 which means that the dosages of sedatives are adjusted to the level needed for proper relief of symptoms. To date, research has predominantly focused on the indications and type of medications used for sedation. In this study, we investigated how physicians decide the depth of continuous palliative sedation and how these decisions relate to guidelines.

18.

Background:

Evidence from controlled trials encourages the intake of dietary pulses (beans, chickpeas, lentils and peas) as a method of improving dyslipidemia, but heart health guidelines have stopped short of ascribing specific benefits to this type of intervention or have graded the beneficial evidence as low. We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to assess the effect of dietary pulse intake on established therapeutic lipid targets for cardiovascular risk reduction.

Methods:

We searched electronic databases and bibliographies of selected trials for relevant articles published through Feb. 5, 2014. We included RCTs of at least 3 weeks’ duration that compared a diet emphasizing dietary pulse intake with an isocaloric diet that did not include dietary pulses. The lipid targets investigated were low-density lipoprotein (LDL) cholesterol, apolipoprotein B and non–high-density lipoprotein (non-HDL) cholesterol. We pooled data using a random-effects model.

Results:

We identified 26 RCTs (n = 1037) that satisfied the inclusion criteria. Diets emphasizing dietary pulse intake at a median dose of 130 g/d (about 1 serving daily) significantly lowered LDL cholesterol levels compared with the control diets (mean difference −0.17 mmol/L, 95% confidence interval −0.25 to −0.09 mmol/L). Treatment effects on apolipoprotein B and non-HDL cholesterol were not observed.
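The pooled mean difference above comes from a random-effects model. A minimal sketch of the common DerSimonian–Laird approach to random-effects pooling of mean differences; the study-level inputs below are hypothetical illustration values, not the 26 trials analyzed in the article:

```python
from math import sqrt

# DerSimonian-Laird random-effects pooling of mean differences.
# md: per-study mean differences; se: per-study standard errors.
def dersimonian_laird(md, se, z=1.96):
    w = [1 / s**2 for s in se]                      # fixed-effect weights
    md_fe = sum(wi * mi for wi, mi in zip(w, md)) / sum(w)
    q = sum(wi * (mi - md_fe) ** 2 for wi, mi in zip(w, md))  # heterogeneity
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(md) - 1)) / c)        # between-study variance
    w_re = [1 / (s**2 + tau2) for s in se]          # random-effects weights
    pooled = sum(wi * mi for wi, mi in zip(w_re, md)) / sum(w_re)
    se_pooled = 1 / sqrt(sum(w_re))
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Hypothetical three-study example (mmol/L), not the article's data:
pooled, lo, hi = dersimonian_laird(md=[-0.25, -0.10, -0.15],
                                   se=[0.08, 0.05, 0.10])
```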

Interpretation:

Our findings suggest that dietary pulse intake significantly reduces LDL cholesterol levels. Trials of longer duration and higher quality are needed to verify these results. Trial registration: ClinicalTrials.gov, no. NCT01594567. Abnormal blood concentrations of lipids are one of the most important modifiable risk factors for cardiovascular disease. Although statins are effective in reducing low-density lipoprotein (LDL) cholesterol levels, major health organizations have maintained that the initial and essential approach to the prevention and management of cardiovascular disease is to modify dietary and lifestyle patterns.1–4 Dietary non–oil-seed pulses (beans, chickpeas, lentils and peas) are foods that have received particular attention for their ability to reduce the risk of cardiovascular disease. Consumption of dietary pulses was associated with a reduction in cardiovascular disease in a large observational study5 and with improvements in LDL cholesterol levels in small trials.6–8 Although most guidelines on the prevention of major chronic diseases encourage the consumption of dietary pulses as part of a healthy strategy,2,3,9–13 none has included recommendations based on the direct benefits of lowering lipid concentrations or reducing the risk of cardiovascular disease. In all cases, the evidence on which recommendations have been based was assigned a low grade,2,3,9–13 and dyslipidemia guidelines do not address dietary pulse intake directly.1,4 To improve the evidence on which dietary guidelines are based, we conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) of the effect of dietary pulse intake on established therapeutic lipid targets for cardiovascular risk reduction. The lipid targets were LDL cholesterol, apolipoprotein B and non–high-density lipoprotein (non-HDL) cholesterol.

19.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life. The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3 Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11 The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14 Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies. Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19 Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.
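The study above reports differences in bacterial "richness and diversity" between delivery and feeding groups. One widely used diversity measure is the Shannon index; the sketch below illustrates the idea (the study's own diversity metrics may differ):

```python
from math import log

# Shannon diversity index: higher values indicate communities that are
# both richer in taxa and more evenly distributed across them.
def shannon_index(counts):
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in props)

# A community split evenly across taxa is more diverse than a skewed one:
even = shannon_index([50, 50])   # ln(2), about 0.693
skewed = shannon_index([95, 5])
```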

20.
Background: An important challenge with the application of next-generation sequencing technology is the possibility of uncovering incidental genomic findings. A paucity of evidence on personal utility for incidental findings has hindered clinical guidelines. Our objective was to estimate personal utility for complex information derived from incidental genomic findings.

Methods: We used a discrete-choice experiment to evaluate participants’ personal utility for the following attributes: disease penetrance, disease treatability, disease severity, carrier status and cost. Study participants were drawn from the Canadian public. We analyzed the data with a mixed logit model.

Results: In total, 1200 participants completed our questionnaire (available in English and French). Participants valued receiving information about high-penetrance disorders but expressed disutility for receiving information on low-penetrance disorders. The average willingness to pay was $445 (95% confidence interval [CI] $322–$567) to receive incidental findings in a scenario where clinicians returned information about high-penetrance, medically treatable disorders, but only 66% of participants (95% CI 63%–71%) indicated that they would choose to receive information in that scenario. On average, participants placed an important value ($725, 95% CI $600–$850) on having a choice about what type of findings they would receive, including receipt of information about high-penetrance, treatable disorders or receipt of information about high-penetrance disorders with or without available treatment. The predicted uptake of that scenario was 76% (95% CI 72%–79%).

Interpretation: Most participants valued receiving incidental findings, but personal utility depended on the type of finding, and not all participants wanted to receive incidental results, regardless of the potential health implications. These results indicate that to maximize benefit, participant-level preferences should inform the decision about whether to return incidental findings.

Clinical genomic sequencing technologies are on the verge of allowing individualized care at reasonable cost.1 Patients and their families will soon receive information from clinical sequencing that has implications for clinical care, including information on consequences related to disease prognosis, treatment response or hereditary risk for disease.2 Clinical sequencing can also generate incidental findings, which are clinically relevant genetic variants for disorders unrelated to the reason for ordering the genetic testing. The decision of whether to provide information about incidental findings is complex because such results will have varying clinical validity (whether the genetic variant causes the disorder) and utility (whether effective medical treatment is available for the disorder).3,4 For example, although effective medical treatment may be available for some validated incidental findings, other incidental findings may not be validated as causing the disorder, and still others will be validated but not associated with effective treatment options.

To address in part the challenges surrounding the return of incidental findings, the American College of Medical Genetics and Genomics published recommendations for reporting incidental findings from clinical sequencing.5 The statement lists a minimum of 56 genes that laboratories should examine, with results reported to patients through the managing physician. This list includes genes with high-penetrance mutations (i.e., a high proportion of individuals with the mutation will exhibit clinical symptoms) validated to be associated with disorders for which medical interventions are available.

The original version of this statement did not “favour offering the patient a preference” for which results would be returned. The reasoning was that clinicians have a duty to prevent potential harm by telling patients about incidental findings. The working group that developed the recommendations further stated that it is impractical to provide the level of genetic counselling required for informed preference on all potential disorders.5 As such, the working group recommended that clinicians discuss with patients the possibility of receiving incidental findings from the list. It was argued that patient autonomy is preserved because patients can decline clinical sequencing if they prefer not to receive information about incidental findings.5 However, this rationale has been subject to debate because of its “all-or-none” nature, whereby patients must agree to receive information about incidental findings or clinical sequencing is not provided.6–9 In April 2014, in response to the ongoing debate, the statement was amended to include an “opt-out” option for patients who do not want to receive information about incidental findings.10

Notwithstanding the ethical debate, there is a lack of quantitative, preference-based economic evidence for the return of incidental genomic findings.8 It has been argued8 that this gap in evidence hindered development of the working group’s recommendation statement. More generally, evidence on preferences for the return of incidental findings is crucial for health policy, for health systems planning and for informing future lists that may include “many more genes.”8 We aimed to generate evidence on the personal utility that study participants from the Canadian public ascribe to the return of incidental genomic findings in the clinical setting. We chose participants from the general public because the public is the largest stakeholder in Canada’s publicly funded health care system.
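In a discrete-choice experiment analyzed with a mixed logit model, willingness to pay for an attribute is conventionally derived as the negative ratio of that attribute's utility coefficient to the cost coefficient, and predicted uptake follows from the logit choice probability. A minimal sketch of this arithmetic with hypothetical coefficients (illustrative values only, not the study's fitted estimates):

```python
import math

# Hypothetical mean utility coefficients from a mixed logit model
# (illustrative values only; not the estimates reported in the study).
coefs = {
    "high_penetrance": 0.90,  # utility of information on high-penetrance disorders
    "treatable": 0.55,        # utility of the disorder being medically treatable
    "cost": -0.002,           # disutility per dollar of out-of-pocket cost
}

def willingness_to_pay(attribute: str) -> float:
    """WTP for one attribute: -beta_attribute / beta_cost."""
    return -coefs[attribute] / coefs["cost"]

def predicted_uptake(*attributes: str, price: float = 0.0) -> float:
    """Logit probability of choosing to receive findings over opting out."""
    v = sum(coefs[a] for a in attributes) + coefs["cost"] * price
    return math.exp(v) / (1.0 + math.exp(v))

wtp = willingness_to_pay("high_penetrance")               # 450.0 with these coefficients
uptake = predicted_uptake("high_penetrance", "treatable")  # about 0.81 at zero cost
```

With these made-up coefficients the WTP for high-penetrance information works out to $450 and predicted uptake at zero cost to about 81%; the study's reported figures ($445 and 66% for the high-penetrance, treatable scenario) come from its own fitted model, not from these numbers.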
