Similar Articles
20 similar articles found.
1.
ARTICLE WATCH     
This column highlights recently published articles that are of interest to the readership of this publication. We encourage ABRF members to forward information on articles they feel are important and useful to Clive Slaughter, Hartwell Center, St. Jude Children’s Research Hospital, 332 North Lauderdale St., Memphis TN 38105-2794. Tel: (901) 495-4844; Fax: (901) 495-2945; email: Clive.Slaughter@stjude.org, or to any member of the editorial board. Article summaries reflect the reviewer’s opinions and not necessarily those of the Association.

2.
ARTICLE WATCH     
This column highlights recently published articles that are of interest to the readership of this publication. We encourage ABRF members to forward information on articles they feel are important and useful to Clive Slaughter, Hartwell Center, St. Jude Children’s Research Hospital, 332 North Lauderdale St., Memphis TN 38105-2794. Tel: (901) 495-4844; Fax: (901) 495-2945; email: Clive.Slaughter@stjude.org, or to any member of the editorial board. Article summaries reflect the reviewer’s opinions and not necessarily those of the Association.

3.
4.

Background

Repetitive behaviours (RB) are frequent in patients with Gilles de la Tourette syndrome (GTS). However, controversy persists as to whether they are manifestations of obsessive-compulsive disorder (OCD) or correspond to complex tics.

Methods

A total of 166 consecutive patients with GTS, aged 15–68 years, were recruited and underwent extensive neurological, psychiatric and psychological evaluations. RB were assessed with the YBOCS symptom checklist and the Mini International Neuropsychiatric Interview (M.I.N.I.), and were classified as compulsions or tics on the basis of a semi-directive psychiatric interview.

Results

RB were present in 64.4% of patients with GTS (107/166) and fell into 3 major groups: a ‘tic-like’ group (24.3%; 40/166) characterised by RB such as touching, counting, ‘just right’ and symmetry seeking; an ‘OCD-like’ group (20.5%; 34/166) with washing and checking rituals; and a ‘mixed’ group (13.2%; 22/166) with both ‘tic-like’ and ‘OCD-like’ RB present in the same patient. In 6.3% of patients, RB could not be classified into any of these groups and were considered ‘undetermined’.

Conclusions

The results confirm the phenomenological heterogeneity of RB in GTS patients and allow two types to be distinguished: tic-like behaviours, which are very likely an integral part of GTS; and OCD-like behaviours, which can be considered a comorbid condition of GTS and were associated with higher complex-tic scores, more frequent neuroleptic and SSRI treatment, and less successful socio-professional adaptation. We suggest that meticulous semiological analysis of RB in GTS patients will help tailor treatment and allow better classification of patients for future pathophysiological studies.

Trial Registration

ClinicalTrials.gov NCT00169351

5.

Context

Medical educational reform includes enhancing the role modelling of clinical teachers. This requires that faculty be aware of their role-model status and performance. We developed the System for Evaluation of Teaching Qualities (SETQ) to generate individualized feedback on previously defined teaching qualities and role-model status for faculty in academic and non-academic hospitals.

Objectives

(i) To examine whether teaching qualities of faculty were associated with their being seen as a specialist role model by residents, and (ii) to investigate whether those associations differed across residency years and specialties.

Methods & Materials

Cross-sectional questionnaire survey amongst 549 residents of 36 teaching programs in 15 hospitals in the Netherlands. The main outcome measure was faculty being seen as specialist role models by residents. Statistical analyses included (i) Pearson’s correlation coefficients and (ii) multivariable logistic generalized estimating equations to assess the (adjusted) associations between each of five teaching qualities and ‘being seen as a role model’.
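The clustered, binary-outcome analysis described above can be sketched in a few lines. The snippet below is a minimal illustration only, using simulated data and hypothetical column names (role_model, feedback, faculty_id), not the authors’ code or variables:

    # Sketch: logistic GEE relating one teaching-quality rating to 'being seen as a role
    # model', with evaluations clustered within faculty members. Data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "role_model": rng.binomial(1, 0.5, 400),   # 1 = resident sees faculty as role model
        "feedback": rng.normal(3.5, 0.6, 400),     # teaching-quality rating (hypothetical scale)
        "faculty_id": rng.integers(0, 60, 400),    # clustering unit: the evaluated faculty member
    })

    model = smf.gee("role_model ~ feedback", groups="faculty_id", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    fit = model.fit()
    print(np.exp(fit.params))       # odds ratios per one-unit increase in the rating
    print(np.exp(fit.conf_int()))   # 95% confidence intervals on the odds-ratio scale

Exponentiating the coefficients gives odds ratios comparable in form to those reported in the Results.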

Results

A total of 407 residents completed 4123 evaluations of 662 faculty. All teaching qualities were positively correlated with ‘being seen as a role model’, with correlation coefficients ranging from 0.49 for ‘evaluation of residents’ to 0.64 for ‘learning climate’ (P<0.001). Faculty most likely to be seen as good role models were those rated highly on ‘feedback’ (odds ratio 2.91, 95% CI: 2.41–3.51), ‘a professional attitude towards residents’ (OR 2.70, 95% CI: 2.34–3.10) and ‘creating a positive learning climate’ (OR 2.45, 95% CI: 1.97–3.04). Results did not seem to vary much across residency years. The relative strength of the associations between teaching qualities and being seen as a role model was more distinct when comparing specialties.

Conclusions

Good clinical educators are more likely to be seen as specialist role models by most residents.

6.
7.

Background

Numerous observational studies suggest that preventable adverse drug reactions are a significant burden in healthcare, but no meta-analysis using a standardised definition for adverse drug reactions exists. The aim of the study was to estimate the percentage of patients with preventable adverse drug reactions and the preventability of adverse drug reactions in adult outpatients and inpatients.

Methods

Studies were identified by searching Cochrane, CINAHL, EMBASE, IPA, Medline, PsycINFO and Web of Science in September 2010, and by hand-searching the reference lists of identified papers. Original peer-reviewed research articles in English that defined adverse drug reactions according to the WHO’s or a similar definition and assessed preventability were included. Disease- or treatment-specific studies were excluded. A meta-analysis of the percentage of patients with preventable adverse drug reactions and of the preventability of adverse drug reactions was conducted.
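The pooling step can be written out explicitly. The sketch below hand-rolls a DerSimonian–Laird random-effects pooling of proportions on the logit scale; the event counts are invented for illustration, and the authors’ actual software and settings may have differed:

    # Sketch: random-effects pooling of 'percentage of patients with preventable ADRs'.
    # Proportions are logit-transformed; heterogeneity handled via DerSimonian-Laird tau^2.
    import numpy as np

    events = np.array([12, 30, 8, 55])         # patients with preventable ADRs (invented)
    totals = np.array([800, 1500, 400, 2600])  # patients assessed per study (invented)

    p = events / totals
    y = np.log(p / (1 - p))                    # logit proportions
    v = 1 / events + 1 / (totals - events)     # approximate variance of each logit

    w = 1 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q (heterogeneity)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_re = 1 / (v + tau2)                      # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    inv_logit = lambda x: 1 / (1 + np.exp(-x))
    lo, hi = inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se)
    print(f"pooled: {inv_logit(y_re):.1%} (95% CI {lo:.1%} to {hi:.1%})")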

Results

Data were analysed from 16 original studies on outpatients with 48797 emergency visits or hospital admissions and from 8 studies involving 24128 inpatients. No studies in primary care were identified. Among adult outpatients, 2.0% (95% confidence interval (CI): 1.2–3.2%) had preventable adverse drug reactions and 52% (95% CI: 42–62%) of adverse drug reactions were preventable. Among inpatients, 1.6% (95% CI: 0.1–51%) had preventable adverse drug reactions and 45% (95% CI: 33–58%) of adverse drug reactions were preventable.

Conclusions

This meta-analysis corroborates that preventable adverse drug reactions are a significant burden to healthcare among adult outpatients. Among both outpatients and inpatients, approximately half of adverse drug reactions are preventable, demonstrating that further evidence on prevention strategies is required. The percentage of patients with preventable adverse drug reactions among inpatients and in primary care is largely unknown and should be investigated in future research.

8.

Background

Patient reported outcomes (PROs) are increasingly assessed in clinical trials, and guidelines are available to inform the design and reporting of such trials. However, researchers involved in PRO data collection report that specific guidance on ‘in-trial’ activity (recruitment, data collection and data inputting) and the management of ‘concerning’ PRO data (i.e., data which raises concern for the well-being of the trial participant) appears to be lacking. The purpose of this review was to determine the extent and nature of published guidelines addressing these areas.

Methods and Findings

A systematic review of 1,362 articles identified 18 eligible papers containing ‘in-trial’ guidelines. Two independent authors undertook a qualitative content analysis of the selected papers. Guidelines presented in each of the articles were coded according to an a priori defined coding frame, which demonstrated reliability (pooled kappa 0.86–0.97) and validity (<2% residual category coding). The majority of guidelines present were concerned with ‘pre-trial’ activities (72%), for example outcome measure selection and study design issues, or ‘post-trial’ activities (16%) such as data analysis, reporting and interpretation. ‘In-trial’ guidelines represented 9.2% of all guidance across the papers reviewed, with content primarily focused on compliance, quality control, proxy assessment and reporting of data collection. There were no guidelines surrounding the management of concerning PRO data.
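The reliability figure quoted above (pooled kappa 0.86–0.97) is the kind of chance-corrected agreement that can be computed directly from two coders’ category assignments. A minimal sketch with invented labels, not the review’s data:

    # Sketch: Cohen's kappa for two coders assigning guideline statements to
    # pre-trial / in-trial / post-trial categories. Labels are invented.
    from sklearn.metrics import cohen_kappa_score

    coder_a = ["pre", "pre", "in", "post", "pre", "in", "post", "post"]
    coder_b = ["pre", "pre", "in", "post", "in",  "in", "post", "post"]

    print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")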

Conclusions

The findings highlight that few published in-trial guidelines address PRO data collection and management in clinical trials. No guidance appears to exist for researchers handling concerning PRO data. Guidelines are needed that support researchers in managing all PRO data appropriately and that facilitate unbiased data collection.

9.

Background and Aims

Previous studies indicate that the size-controlling capacity of peach rootstocks is associated with reductions of scion water potential during mid-day that are caused by the reduced hydraulic conductance of the rootstock. Thus, shoot growth appears to be reduced by decreases in stem water potential. The aim of this study was to investigate the mechanism of reduced hydraulic conductance in size-controlling peach rootstocks.

Methods

Anatomical measurements (diameter and frequency) of xylem vessels were determined in shoots, trunks and roots of three contrasting peach rootstocks grown as trees, each with different size-controlling characteristics: ‘Nemaguard’ (vigorous), ‘P30-135’ (intermediate vigour) and ‘K146-43’ (substantially dwarfing). Based on anatomical measurements, the theoretical axial xylem conductance of each tissue type and rootstock genotype was calculated via the Poiseuille–Hagen law.
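The Poiseuille–Hagen calculation referred to here has a simple closed form: the theoretical conductivity of a single vessel scales with the fourth power of its lumen diameter, k = pi * d^4 / (128 * eta). A minimal sketch with made-up diameters (the study’s own measurements are not reproduced here):

    # Sketch: theoretical axial xylem conductivity from vessel lumen diameters via the
    # Poiseuille-Hagen law, k = pi * d^4 / (128 * eta) per vessel. Diameters are made up.
    import numpy as np

    eta = 1.002e-3                                      # viscosity of water at 20 C, Pa*s
    diameters_um = np.array([22.0, 30.5, 41.2, 18.7])   # vessel lumen diameters, micrometres

    d = diameters_um * 1e-6                             # convert to metres
    k_vessel = np.pi * d ** 4 / (128 * eta)             # per-vessel conductivity, m^4 Pa^-1 s^-1
    print(k_vessel, k_vessel.sum())                     # summed over vessels in the section

Because of the fourth-power dependence, even a modest reduction in mean vessel diameter translates into a large drop in theoretical conductance.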

Key Results

Larger vessel dimensions were found in the vigorous rootstock (‘Nemaguard’) than in the most dwarfing one (‘K146-43’), whereas vessels of ‘P30-135’ had intermediate dimensions. The density of vessels per unit xylem area in ‘Nemaguard’ was also lower than in ‘P30-135’ and ‘K146-43’. These characteristics resulted in different estimated hydraulic conductances among rootstocks: ‘Nemaguard’ had the highest theoretical values, followed by ‘P30-135’ and ‘K146-43’.

Conclusions

These data indicate that phenotypic differences in the xylem anatomical characteristics of rootstock genotypes appear to influence hydraulic conductance capacity directly, and may therefore be the main determinant of dwarfing in these peach rootstocks.
Key words: Prunus, rootstock, vessel diameter, hydraulic conductance, dwarfing, xylem anatomy, Poiseuille–Hagen

10.
Thyroglobulin (Tg) protein is synthesised uniquely by thyroid tissue and is measured as a post-operative differentiated thyroid cancer (DTC) tumour-marker. Tg autoantibodies (TgAb), present in ∼20 percent of DTC patients, interfere with Tg immunometric assay (IMA) measurements causing falsely low/undetectable serum Tg values. Tg radioimmunoassay (RIA) methodology appears resistant to such interferences but has limited availability, whereas new Tg mass-spectrometry methods have inferior sensitivity and unproven clinical value. When present, TgAb concentrations respond to changes in thyroid tissue mass. Thus, when Tg IMA measurements are compromised by the presence of TgAb the TgAb trend can serve as a surrogate DTC tumour-marker. Unfortunately, both physiologic and technical factors impact the interpretation of Tg and TgAb used as DTC tumour-markers.

Serum Tg Testing

Circulating Tg concentrations change in response to thyroid tissue mass, injury (surgery, biopsy or radioiodine) and the degree of TSH stimulation. Technical factors (Tg assay sensitivity, specificity and interferences) additionally affect the clinical utility of Tg testing. Specifically, new second-generation Tg IMAs (functional sensitivities ≤ 0.1 μg/L) now mostly obviate the need for expensive recombinant human TSH (rhTSH)-stimulated Tg testing. Tg molecular heterogeneity remains responsible for two-fold between-method differences in Tg values that preclude switching methods, and TgAb interference remains especially problematic.

Serum TgAb Testing

Reliable TgAb testing is critical for authenticating that Tg IMA measurements are not compromised by interference. Unfortunately, TgAb methodologies vary widely in sensitivity, specificity and the absolute values they report, necessitating that TgAb concentrations be monitored using the same method. Furthermore, adopting the manufacturer’s TgAb cut-off value to define ‘detectable’ TgAb results in sera being falsely classified as TgAb-negative, because manufacturers’ cut-offs are set to diagnose thyroid autoimmunity, not to detect TgAb interference.


11.

Background

In July 2009, French health authorities, like those in many other countries, decided to embark on a mass vaccination campaign against the pandemic A(H1N1) influenza. Private general practitioners (GPs) were not involved in this campaign. We studied GPs’ pandemic vaccine (pvaccine) uptake, quantified the relative contribution of its potential explanatory factors, and examined whether their own vaccination choice was correlated with their recommendations to patients about pvaccination.

Methodology/Principal Findings

In this cross-sectional telephone survey, professional investigators interviewed an existing panel of randomly selected private GPs (N = 1431; response rate at inclusion in the panel: 36.8%; participation rate in the survey: 100%). The main outcome variable was GPs’ own pvaccine uptake. We used an averaging multi-model approach to quantify the relative contribution of factors associated with their vaccination. The pvaccine uptake rate was 61% (95% CI 58.3–63.3). Four independent factors contributed the most to this rate (partial Nagelkerke’s R2): history of previous vaccination against seasonal influenza (14.5%), perception of risks and efficacy of the pvaccine (10.8%), opinions regarding the organization of the vaccination campaign (7.1%), and perception of the pandemic’s severity (5.2%). Overall, 71.3% (95% CI 69.0–73.6) of the participants recommended pvaccination to young adults at risk and 40.1% (95% CI 37.6–42.7) to other young adults. GPs’ own pvaccination was strongly predictive of their recommendation to both young adults at risk (OR = 9.6; 95% CI 7.2–12.6) and those not at risk (OR = 8.5; 95% CI 6.4–11.4).

Conclusions/Significance

These results suggest that around 60% of French private GPs followed the French authorities’ recommendations about vaccination of health care professionals against the A(H1N1) influenza. They pinpoint priority levers for improving preparedness for future influenza pandemics. Besides encouraging GPs’ own uptake of regular vaccination against seasonal influenza, providing GPs with clear information about the risks and efficacy of any new pvaccine and involving them in the organization of any future vaccination campaign may improve their pvaccine uptake.

12.

Background:

Vitamin D fortification of non–cow’s milk beverages is voluntary in North America. The effect of consuming non–cow’s milk beverages on serum 25-hydroxyvitamin D levels in children is unclear. We studied the association between non–cow’s milk consumption and 25-hydroxyvitamin D levels in healthy preschool-aged children. We also explored whether cow’s milk consumption modified this association and analyzed the association between daily non–cow’s milk and cow’s milk consumption.

Methods:

In this cross-sectional study, we recruited children 1–6 years of age attending routinely scheduled well-child visits. Survey responses and anthropometric and laboratory measurements were collected. The association between non–cow’s milk consumption and 25-hydroxyvitamin D levels was tested using multiple linear regression and logistic regression. Cow’s milk consumption was explored as an effect modifier using an interaction term. The association between daily intake of non–cow’s milk and cow’s milk was explored using multiple linear regression.
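Effect modification of the kind tested here is usually expressed as a product (interaction) term in the regression model. A minimal sketch with simulated data and hypothetical column names (vitd, noncow_cups, cow_cups), not the study dataset:

    # Sketch: does cow's milk intake modify the association between non-cow's milk intake
    # and serum 25-hydroxyvitamin D? The noncow_cups:cow_cups term is the interaction.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "vitd": rng.normal(75, 20, n),         # serum 25-hydroxyvitamin D, nmol/L (simulated)
        "noncow_cups": rng.poisson(1.0, n),    # 250-mL cups of non-cow's milk per day
        "cow_cups": rng.poisson(1.5, n),       # 250-mL cups of cow's milk per day
    })

    fit = smf.ols("vitd ~ noncow_cups * cow_cups", data=df).fit()
    print(fit.params)                             # main effects plus the interaction coefficient
    print(fit.pvalues["noncow_cups:cow_cups"])    # p-value for effect modification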

Results:

A total of 2831 children were included. The interaction between non–cow’s milk and cow’s milk consumption was statistically significant (p = 0.03). Drinking non–cow’s milk beverages was associated with a 4.2-nmol/L decrease in 25-hydroxyvitamin D level per 250-mL cup consumed among children who also drank cow’s milk (p = 0.008). Children who drank only non–cow’s milk were at higher risk of having a 25-hydroxyvitamin D level below 50 nmol/L than children who drank only cow’s milk (odds ratio 2.7, 95% confidence interval 1.6 to 4.7).

Interpretation:

Consumption of non–cow’s milk beverages was associated with decreased serum 25-hydroxyvitamin D levels in early childhood. This association was modified by cow’s milk consumption, which suggests a trade-off between consumption of cow’s milk fortified with higher levels of vitamin D and non–cow’s milk with lower vitamin D content.

Goat’s milk and plant-based milk alternatives made from soy, rice, almonds, coconut, hemp, flax or oats (herein called “non–cow’s milk”) are increasingly available on supermarket shelves, and many consumers may be switching from cow’s milk to these beverages.1–3 Parents may choose non–cow’s milk beverages for their children because of perceived health benefits. However, it is unclear whether these beverages offer health advantages over cow’s milk or, alternatively, whether they increase the risk of nutritional inadequacy.

In the United States and Canada, cow’s milk products are required to contain about 40 IU of vitamin D per 100 mL, making cow’s milk the major dietary source of vitamin D for children.4–8 The only other food source with mandatory vitamin D fortification in Canada is margarine, which is required to contain 53 IU per 10 mL (10 g).5 Fortification of non–cow’s milk beverages with vitamin D is also possible, but it is voluntary in both countries. Furthermore, there is little regulation of the vitamin D content even if such beverages are fortified.5,6,9

We conducted a study to test the association between total daily consumption of non–cow’s milk and serum 25-hydroxyvitamin D levels in a population of healthy urban preschool-aged children attending routinely scheduled well-child visits. We hypothesized that vitamin D stores would be lower in children who consume non–cow’s milk. The secondary objectives were to explore how consumption of cow’s milk might modify this association and to study the association between daily intake of non–cow’s milk and cow’s milk.

13.
The Illumina Infinium HumanMethylation450 BeadChip, the successor to Illumina’s hugely popular HumanMethylation27 BeadChip, is arguably the most prevalent platform for large-scale DNA methylome analysis. After the success of last year’s meeting,1 which discussed initial analysis strategies for this then-new platform, this year’s meeting (held at Queen Mary, University of London) included the presentation of now-established pipelines and normalization methods for data analysis, as well as some exciting tools for downstream analysis. The importance of defining cell composition was a new topic mentioned by most speakers: the epigenome varies between cell types, and ensuring that methylation differences are related to sample treatment rather than to a differing cell population is essential. The meeting was attended by 215 computational and bench scientists from 18 countries. There were 11 speakers, a small poster session, and a discussion session. Talks were recorded and are now freely available at http://www.illumina.com/applications/epigenetics/array-based_methylation_analysis/methylation-array-analysis-education.ilmn

14.

Introduction

Falls are common in older people and increase in prevalence with advancing old age. There is limited knowledge about their impact in those aged 85 years and older, the fastest-growing age group of the population. We investigated the prevalence and impact of falls, and the overlap between falls, dizziness and blackouts, in a population-based sample of 85-year-olds.

Methods

Design: Cross-sectional analysis of baseline data from the Newcastle 85+ Cohort Study. Setting: Primary care, North-East England. Participants: 816 men and women aged 85 years. Measurements: Structured interview with a research nurse; cost-consequence analysis of fall-related healthcare costs.

Results

Over 38% (313/816) of participants had fallen at least once in the previous 12 months; of these, 10.6% (33/312) sustained a fracture, 30.1% (94/312) attended an emergency department, and 12.8% (40/312) were admitted to hospital. Only 37.2% (115/309) of fallers had specifically discussed their falls problem with their general practitioner and only 12.7% (39/308) had seen a falls specialist. The average annual healthcare cost per faller was estimated at £202 (inter-quartile range £174–£231) or US$329 ($284–$377). ‘Worry about falling’ was experienced by 42.0% (128/305) of fallers, ‘loss of confidence’ by 40.0% (122/305), and ‘going out less often’ by 25.9% (79/305); each was significantly more common in women, with odds ratios (95% confidence intervals) for women versus men of 2.63 (1.45–4.55), 4.00 (2.27–7.14), and 2.86 (1.54–5.56), respectively. Dizziness and blackouts were reported by 40.0% (318/796) and 6.4% (52/808) of participants, respectively. There was marked overlap in the reporting of falls, dizziness and blackouts.
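Odds ratios of the kind reported for women versus men can be reproduced from a simple 2×2 table. A minimal sketch with illustrative cell counts (not the study’s exact numbers), using a Wald confidence interval on the log-odds scale:

    # Sketch: odds ratio for 'worry about falling', women vs men, from a 2x2 table.
    # Cell counts are illustrative only.
    import numpy as np

    a, b = 90, 95    # women: worried / not worried
    c, d = 38, 82    # men:   worried / not worried

    odds_ratio = (a * d) / (b * c)
    se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = np.exp(np.log(odds_ratio) - 1.96 * se_log_or)
    hi = np.exp(np.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")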

Conclusions

Falls in 85-year-olds are very common, are associated with considerable psychological and physical morbidity, and have a high impact on healthcare services. Wider use of falls prevention services is needed. Significant expansion of acute and preventive services is required in view of the rapid growth of this age group.

15.
Ribonucleotides     
It has normally been assumed that ribonucleotides arose on the early Earth through a process in which ribose, the nucleobases, and phosphate became conjoined. However, under plausible prebiotic conditions, condensation of nucleobases with ribose to give β-ribonucleosides is fraught with difficulties. The reaction with purine nucleobases is low-yielding and the reaction with the canonical pyrimidine nucleobases does not work at all. The reasons for these difficulties are considered and an alternative high-yielding synthesis of pyrimidine nucleotides is discussed. Fitting the new synthesis to a plausible geochemical scenario is a remaining challenge but the prospects appear good. Discovery of an improved method of purine synthesis, and an efficient means of stringing activated nucleotides together, will provide underpinning support to those theories that posit a central role for RNA in the origins of life.Whether RNA first functioned in isolation, or in the presence of other macromolecules and small molecules is still an open question, and a question that is addressable through chemistry (Borsenberger et al. 2004). If synergies are found between RNA assembly chemistry and that associated with the assembly of lipids and/or peptides, the purist RNA world concept (Woese 1967; Crick 1968; Orgel 1968) might have to be loosened to allow other such molecules a role in the origin of life. Metabolism, or the roots of metabolism, could also potentially have coevolved with RNA if organic chemistry happened to work in a particular way on a set of plausible prebiotic feedstock molecules in a dynamic geochemical setting. Such considerations point to the need for an open mind when considering the chemical derivation of RNA. Notwithstanding these caveats, however, the self-assembly of polymeric RNA on the early Earth most likely involved activated monomers (Verlander et al. 1973; Ferris et al. 1996). These activated monomers could either have come together sequentially to make RNA one monomer at a time, or short oligoribonucleotides formed by such a process could have joined together by ligation in what would thus amount to a two-stage assembly of RNA polymers. Replication of RNA would then have involved template-directed versions of these or related chemistries (Orgel 2004).The details of the polymerization processes that might plausibly have given rise to the first RNA molecules can only be investigated when there is some evidence as to the specific chemical nature of the activated monomers. Broadly speaking, however, it is possible to differentiate different polymerization chemistries on the basis of the bonds formed in the polymerization step. P–O bond forming polymerization chemistry is reasonable to consider first because of the simplicity of P–O retrosynthetic disconnections of RNA (Corey 1988). In the case of the simplest P–O bond forming polymerization chemistry, the monomeric products of preceding prebiotic chemistry would either be activated ribonucleoside-5′-phosphates 1—activated through having a leaving group attached to the phosphate—or ribonucleoside-2′,3′-cyclic phosphates 2, wherein the activation is intrinsic to the cyclic phosphate (Fig. 1). 
If all potential routes from prebiotic feedstock molecules to such monomers were to be investigated experimentally without success, then the potential for prebiotic self-assembly of monomers associated with more complicated polymerization chemistries would additionally have to be investigated (this would include alternative P–O bond-forming polymerization chemistry, as well as C–O and C–C bond-forming chemistries). However, continuing with the simplest P–O bond-forming polymerization chemistry, and the assumption that it seems reasonable to follow the simplest retrosynthetic disconnections first, 1 and 2 can then be conceptually reduced to ribose 3, a nucleobase and phosphate (Joyce 2002; Joyce and Orgel 2006). Ribose 3 can then be disconnected to glycolaldehyde 4 and glyceraldehyde 5 through aldol chemistry, and the nucleobases disconnected to simpler carbon- and nitrogen-containing molecules: the pyrimidines to cyanamide 6 and cyanoacetylene 7 (conventionally through the hydration products urea 8 and cyanoacetaldehyde 9), and the purines to hydrogen cyanide and a C(IV) oxidation-level molecule such as 6 or 8. This retrosynthetic analysis ultimately breaks ribonucleotides down into molecules that are sufficiently simple to be deemed prebiotically plausible feedstock molecules (Fig. 2) (Sanchez et al. 1966; Pasek and Lauretta 2005; Bryant and Kee 2006; Thaddeus 2006).

Figure 1: Activated ribonucleotides in the potentially prebiotic assembly of RNA. Potential P–O bond-forming polymerization chemistry is indicated by the curved arrows.

Figure 2: One of the synthetic routes to β-ribocytidine-2′,3′-cyclic phosphate 2 (B=C) implied by the assumption that nucleosides can self-assemble by nucleobase ribosylation. The general synthetic approach has been supported by the experimental demonstration of most of its steps; however, prebiotically plausible conditions under which the key nucleobase ribosylation step works have not been found despite numerous attempts over several decades.

It is not just because the simplest retrosynthetic disconnections of ribonucleotides proceed by way of ribose, nucleobases, and phosphate that people have tried to synthesize them via these three building blocks under prebiotically plausible conditions for the last 40–50 years: in terms of their appearance to the human eye, ribonucleotides undoubtedly consist of these three building blocks. Experimentally, there have been several notably successful reactions that ostensibly support this nucleobase ribosylation approach: Orgel’s and Miller’s syntheses of cytosine 10 (Ferris et al. 1968; Robertson and Miller 1995); Benner’s and Darbre’s syntheses of ribose 3 by aldolization of glycolaldehyde 4 and glyceraldehyde 5 (Ricardo et al. 2004; Kofoed et al. 2005); Pasek’s and Kee’s demonstration of phosphate synthesis by disproportionation of meteoritic metal phosphides (Pasek and Lauretta 2005; Bryant and Kee 2006); and Orgel’s urea-catalyzed phosphorylation of nucleosides (e.g., 11 (B=C) → 2 (B=C)) (Lohrmann and Orgel 1971). Indeed, for many years a prebiotically plausible synthesis of ribonucleotides from ribose 3, the nucleobases, and phosphate has been tantalizingly close but for one step of the assumed synthesis: the joining of ribose to the nucleobases. This reaction works extremely poorly for the purines and not at all in the case of the pyrimidines (Fuller et al. 1972a, 1972b; Orgel 2004).

So, why does the ribosylation chemistry not work with free nucleobases and ribose 3 when, using the protecting and controlling groups of conventional synthetic chemistry, nucleobase ribosylation is possible? The reasons are predominantly kinetic and can be appreciated by considering the structure and reactivity of ribose 3 and representative nucleobases (Fig. 3). Ribose 3 exists as an equilibrating mixture of different forms in aqueous solution (Fig. 3A) (Drew et al. 1998). The mixture is dominated by the β- and α-pyranose isomers (3 [β-p] and 3 [α-p]) with lesser amounts of the β- and α-furanose isomers (3 [β-f] and 3 [α-f]). The various hemiacetal ring forms equilibrate via the open-chain aldehyde form 3 (a), which is a very minor component along with an open-chain hydrate. The purine nucleobase adenine 12 also exists in various equilibrating forms in aqueous solution (Fig. 3B) (Fonseca Guerra et al. 2006). In this case the isomers differ in the position of protonation: the major tautomer 12 has N9 protonated, but other tautomers such as 13, in which N1 is protonated, exist at extremely low concentration. To connect adenine 12 to ribose 3 to give a natural RNA ribonucleoside 11 (B=A), it is necessary for N9 of adenine to function as a nucleophile and C1 of 3 (α-f) to function as an electrophile. The latter is possible under acidic conditions, when a small amount of 14, a selectively protonated form of 3 (α-f), is present at equilibrium; protonation converts the anomeric hydroxyl group into a better leaving group and enhances the electrophilicity of C1. The major tautomer of adenine, 12, is not nucleophilic at N9 because the lone pair on that atom is delocalized throughout the bicyclic ring structure. N9 of several minor tautomers such as 13 is nucleophilic because the nitrogen lone pair is localized, so reaction with 14 is possible, though slow because of the low concentrations of the productively reactive species. To compound this sluggishness, the reaction is plagued with additional problems. First, the acid needed to activate 3 (α-f) also substantially protonates adenine, giving the cation 15, which is not nucleophilic at N9 (Christensen et al. 1970; Zimmer and Biltonen 1972; Major et al. 2002). Second, the most nucleophilic nitrogen of the major tautomer of adenine 12, the 6-amino group, reacts with 3 (a), the most reactive form of 3 despite its scarcity, giving N6-ribosyl adducts as by far the major products (Fuller et al. 1972a, 1972b). Third, the other isomeric forms of ribose, 3 (β-p), 3 (α-p), and 3 (β-f), can also react with 13 when they are protonated at their anomeric hydroxyl groups. Fourth, N9 of the minor adenine tautomer 13 is not the only nucleophilic ring nitrogen of adenine; N1, N3, and N7 of the major tautomer 12 are also nucleophilic. These latter two points mean that the small amount of protonated adenosine 16 that is formed when N9 of 13 reacts with 14 is accompanied by a multitude of isomeric products. The final problem with the synthesis is reversibility. Any adenosine 11 (B=A) that is produced is formed in acid at equilibrium, and the equilibrium in aqueous solution lies in favor of 3 and 15. The only way round this is to carry out the reaction in the dry state in the presence of acidic catalysts. The best that has been achieved, a 4% yield of 11 (B=A), involves such a dry-state reaction with an excess of ribose, followed by heating with concentrated ammonium hydroxide solution to hydrolyse N6-ribosyl adducts (Fuller et al. 1972a).

Figure 3: The difficulties of assembling β-ribonucleosides by nucleobase ribosylation. (A) The many different forms of ribose 3 adopted in aqueous solution. The pyranose (p) and furanose (f) forms interconvert via the open-chain aldehyde (a), which is also in equilibrium with an open-chain aldehyde hydrate (not shown). (B) Adenine tautomerism and the ribosylation step necessary to make the adenosine 11 (B=A) thought to be needed for RNA assembly. The low abundance of the reactive entities 13 and 14 is partly responsible for the low yield of 11 (B=A). (C) The reason for the lower nucleophilicity of N1 of the pyrimidines, and the conventional synthetic chemist’s solution to the problems of ribosylation.

The situation with prebiotic pyrimidine ribosylation is even worse (Fig. 3C). Thus, for example, N1 of cytosine 10 is not nucleophilic because the lone pair is delocalized round the ring and into the carbonyl group (as indicated by the resonance canonical structure 17). Experimentally, cytosine 10 cannot be ribosylated on N1 even using the conditions established for unselective ribosylation of adenine 12 (Fuller et al. 1972a). If any tautomeric form such as 18 with a localized N1 lone pair is present at equilibrium, it must be at such a low concentration as to be effectively unreactive. The same considerations hold true for uracil.

In conventional synthetic chemistry, the aforementioned difficulties in nucleobase ribosylation can be overcome with directing, blocking, and activating groups on the nucleobase and ribose (Ueda and Nishino 1968). Thus, the cytosine derivative 19 is directed to function as a nucleophile at N1 by alkylation of O2. The ribose-derived intermediate 20 is constrained to the furanose form by benzoylation of the C5-hydroxyl group, and neighboring-group participation from an O2-benzoyl group directs β-glycosylation. These molecular interventions are synthetically ingenious, but they serve to emphasize the enormous difficulties that must be overcome if ribonucleosides are to be produced efficiently by nucleobase ribosylation under prebiotically plausible conditions. This impasse has led most people to abandon the idea that RNA might have assembled abiotically, and has prompted a search for potential pre-RNA informational molecules (Joyce et al. 1987; Eschenmoser 1999; Schöning et al. 2000; Zhang et al. 2005; Sutherland 2007). However, we realized that there were other possible synthetic approaches that, although less obvious, still had the potential to make the ribonucleotides 1 and 2 (Anastasi et al. 2007). Furthermore, as pointed out earlier, alternative bond-forming polymerization chemistries are also imaginable. Our plan was to work through these possibilities by systematic experimentation before deciding whether the abiogenesis of RNA is possible or not.

16.

Background and Aims

Cognitive behavioral group therapy (CBGT) is an effective, well-established, but not widely available treatment for social anxiety disorder (SAD). Internet-based cognitive behavior therapy (ICBT) has the potential to increase availability and facilitate dissemination of therapeutic services for SAD. However, ICBT for SAD has not been directly compared with in-person treatments such as CBGT and few studies investigating ICBT have been conducted in clinical settings. Our aim was to investigate if ICBT is at least as effective as CBGT for SAD when treatments are delivered in a psychiatric setting.

Methods

We conducted a randomized controlled non-inferiority trial with allocation to ICBT (n = 64) or CBGT (n = 62), with blinded assessment immediately following treatment and six months post-treatment. Participants were 126 individuals with SAD who received CBGT or ICBT for 15 weeks. The Liebowitz Social Anxiety Scale (LSAS) was the main outcome measure. The following non-inferiority margin was set: following treatment, the lower bound of the 95% confidence interval (CI) of the mean difference between groups should be less than 10 LSAS points.
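The non-inferiority decision rule can be made concrete: compute the confidence interval for the between-group difference and compare it with the prespecified 10-point margin. The sketch below uses hypothetical group summaries, not the trial data, and the sign convention depends on how the difference is defined (lower LSAS scores mean less anxiety):

    # Sketch: non-inferiority check against a 10-point LSAS margin. Group summaries are
    # hypothetical; diff > 0 means the new treatment (e.g. ICBT) did better.
    import numpy as np
    from scipy import stats

    mean_ref, sd_ref, n_ref = 48.0, 22.0, 62    # reference arm (e.g. CBGT)
    mean_new, sd_new, n_new = 44.0, 21.0, 64    # new arm (e.g. ICBT)
    margin = 10.0

    diff = mean_ref - mean_new
    se = np.sqrt(sd_ref ** 2 / n_ref + sd_new ** 2 / n_new)
    t = stats.t.ppf(0.975, n_ref + n_new - 2)
    ci = (diff - t * se, diff + t * se)

    # Non-inferiority holds if the CI excludes a deficit larger than the margin,
    # i.e. its lower bound lies above -margin.
    print(ci, "non-inferior" if ci[0] > -margin else "inconclusive")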

Results

Both groups made large improvements. At follow-up, 41 participants (64%) in the ICBT group were classified as responders (95% CI 52%–76%). In the CBGT group, 28 participants (45%) responded to the treatment (95% CI 33%–58%). At post-treatment and follow-up respectively, the 95% CI of the LSAS mean difference was 0.68 to 17.66 (between-group Cohen’s d = 0.41) and −2.51 to 15.69 (between-group Cohen’s d = 0.36) favoring ICBT, which was well within the non-inferiority margin. Mixed-effects model analyses showed no significant interaction effect for LSAS, indicating similar improvement across treatments (F = 1.58; df = 2, 219; p = .21).

Conclusions

ICBT delivered in a psychiatric setting can be as effective as CBGT in the treatment of SAD and could be used to increase the availability of CBT.

Trial Registration

ClinicalTrials.gov NCT00564967

17.
Somatic mtDNA mutations, and deletions in particular, are known to clonally expand within cells, eventually reaching detrimental intracellular concentrations. The possibility that clonal expansion is a slow process taking a lifetime had prompted the idea that the founder mutations of the mutant clones that cause mitochondrial dysfunction in aged tissue might have originated early in life. If, conversely, expansion is fast, founder mutations should predominantly originate later in life. This distinction is important: from which mutations should we protect ourselves, those of early development and childhood, or those occurring at old age? Recently, high-resolution data describing the distribution of mtDNA deletions have been obtained using a novel, highly efficient method (Taylor et al., 2014). These data have been interpreted as supporting a predominantly early origin of founder mutations. Re-analysis of the data implies that they actually better fit a mostly late origin of founders, although more research is clearly needed to resolve the controversy.

mtDNA mutations, and in particular deletions, progressively increase with age and are suspected culprits in several age-related degenerative processes. Because there are hundreds or even thousands of mtDNA genomes per cell, an increase in mutational load may involve not only an increase in the number of cells containing mutant genomes, but also an increase in the fraction of mutant mtDNA within each cell. Studies of the mutational composition of individual cells have shown that accumulation of mutations within a cell usually does not occur via accrual of random hits. Instead, mtDNA mutations ‘clonally expand’: a single initial mutation multiplies within the cell, replaces normal mtDNAs, eventually takes over the cell, and may impair its mitochondrial function. Expansion is possible because mtDNA molecules in a cell are persistently replicated, even in non-dividing cells, where some are destroyed and replaced by replication of others. The half-life of murine mtDNA is on the order of several weeks (Korr et al., 1998). The result of clonal expansion is that different cells typically contain different types of mutations, while mutant genomes within a cell carry the same mutation. The mechanisms of expansion are still debated; possibilities range from neutral genetic drift to selection within the ‘population’ of intracellular mitochondria. In this commentary, we do not assume any particular mechanism and concentrate on the kinetics of expansion.

Because expansion takes time, it is possible that the founder mutations of expanded mutant clones that compromise mitochondria at old age might have occurred early in life. Indeed, if expansion were a slow process taking about a lifetime to conclude (Fig. 1A, upper panel), then only mutations generated early in life would have enough time to reach harmful intracellular concentrations. In an extreme version of this scenario, there is little de novo mutagenesis and the increase in mutations with age is mostly driven by clonal expansion of early founder mutations. The ‘slow’ scenario implies that, as far as mtDNA mutagenesis is concerned, we need to preserve mtDNA during early years or even during development, and to be less worried about mutations that arise in older individuals.

Figure 1: The ‘slow’ and the ‘fast’ expansion scenarios and the predicted changes in the diversity and extent of expansion of mtDNA mutations with age. Diversity and extent of expansion can be directly measured and used to distinguish between the two scenarios. mtDNA molecules with different deletions are depicted by small circles of different bright colors; wild-type mtDNA molecules and cells that never acquire mutations are not shown for simplicity. In real tissue, mutant cells are surrounded by a great majority of non-mutant cells.

If, on the other hand, clonal expansion were rapid (Fig. 1B, upper panel), then expanding mutations would swiftly fill up the cells in which the founders had arisen and therefore stop expanding. Consequently, the overall mutation load would soon plateau if mutants were not continuously generated. In this scenario, the observed persistent increase in mutations with age must be driven by de novo mutagenesis, and most impairment is caused by ‘late in life’ mutations, which therefore should be of primary concern. Despite the importance of this question, there is still no consensus on which scenario is correct (Payne et al., 2011; Khrapko, 2011), although the ‘early mutations’ hypothesis appeared more than a decade ago (Elson et al., 2001; Khrapko et al., 2003).

Figure 1 schematically depicts two characteristics of the mutational dynamics that distinguish between the two scenarios. First, the diversity, that is, the number of different types of deletions (Supplemental Note 0), remains constant in the slow scenario (Fig. 1A) but steadily increases in the fast scenario (Fig. 1B). Second, the extent of expansion, that is, the average number of mutant mtDNA molecules per clonal expansion, should increase steadily throughout the lifespan in the slow, early-mutations scenario. In contrast, in the fast scenario the extent of expansion should increase rapidly early in life, up to the point when the earliest mutations have had enough time to expand to the limits of their host cells, and much more slowly thereafter.

A recent paper by Taylor et al. (2014) describes a new method called Digital Deletion Detection, based on enrichment of deletions by wild-type-specific restriction digestion and massive single-molecule PCR in microdroplets, followed by next-generation sequencing of the PCR products. This ‘3D’ approach for the first time provided a detailed frequency distribution for a large set of different deleted mtDNA molecules in human brain as a function of age. These data are sufficient to estimate the diversity and level of expansion of mutations and therefore promise to help distinguish between the fast and the slow scenarios. The authors found that (a) the number of different types of deletions per sample (used as a proxy of diversity) does not increase with age, while (b) the ratio of total deletion frequency to the number of deletion types (used as a proxy of expansion) does steadily increase with age. Consequently, the authors concluded that ‘diversity of unique deletions remains constant’ and that the ‘data supported the hypothesis that expansion of pre-existing mutations is the primary factor contributing to age-related accumulation of mtDNA deletions’, that is, the slow expansion scenario. We believe, however, that these data deserve more detailed analysis and more cautious interpretation.

A striking feature of the data (Taylor et al., 2014) is that the types of deletions found in any two samples are almost completely different (Supplemental Note 1). The same pattern has previously been observed in muscle (Nicholas et al., 2009). To explain this, consider that deletions originate mostly from individual cells, each containing a clonal expansion of a deletion of a certain type. Because there are very many potential types of deletions and far fewer clonal expansions per sample, only a small proportion of possible deletion types are found in each sample, which explains why two samples typically have almost no deletions in common. Similarly, any two cells with clonal expansions from the same sample usually carry different types of deletion. With this in mind, we will reconsider the interpretation of the data.

First, consider the diversity of deletions. Unfortunately, the number of deletion types per sample normalized against the total number of deletions, used by Taylor et al. as a proxy of diversity, is not an adequate measure. Normalization against the total number of sampled deletion molecules is not justified because, in a sample with clonal expansions, the number of types of deletions is not proportional to the number of sequenced molecules (Supplemental Note 2). Instead, the number of deletion types per sample is proportional to the size of the sample (i.e., the size of the tissue piece actually used for DNA isolation): increasing the sample size means including proportionally more cells with expansions, and, as discussed above, these additional cells contain different deletion types, so the number of deletion types increases roughly in proportion to sample size. Sample size must therefore be factored out of a rational measure of deletion diversity. The best proxy of sample size available in the original study (Taylor et al., 2014) is the number of mtDNA copies isolated from each sample. Thus, to factor out sample size, we used the number of deletion types per 10^10 mtDNA (Fig. 2A). This corrected measure shows a rather strong (P < 0.0003) increase in the diversity of mtDNA deletions with age (Supplemental Note 3), which fits the ‘fast’ expansion scenario (Fig. 1B).

Figure 2: The observed changes in diversity and extent of expansion of mtDNA mutations in brain with age in the Taylor et al. data. (A) Diversity of mtDNA deletions (number of deletion types per 10^10 mtDNA) shows a strong increase with age (P < 0.0003), corroborating the ‘fast’ expansion scenario (Fig. 1B). (B) The extent of expansion shows excessive variance and does not clearly support either scenario (‘fast’ or ‘slow’); interpretation of these data requires more detailed analysis.

Next, we revisited the extent of expansion of clonal mutations. As a measure of expansion, we used the average of the actual numbers of deleted molecules per deletion type, as in Fig. 1A,B. Note that this measure is different from the ‘expansion index’ (Taylor et al., 2014), defined as deletion frequency per deletion type; that is essentially the same measure we use, additionally divided by the number of all mtDNA molecules in the sample. Unfortunately, the ‘expansion index’ so defined systematically increases with decreasing sample size, because deletion frequency is not expected to increase systematically with sample size, while the number of deletion types is, as shown in the previous paragraph. Thus, in particular because old samples in this set tend to be smaller (Supplemental Fig. 4A), this measure is biased.

The extent of expansion of mtDNA mutations is plotted versus age in Fig. 2B. Which theoretical expansion pattern, the ‘slow’ (Fig. 1A) or the ‘fast’ (Fig. 1B), better fits the actual data (Fig. 2B)? Either fit looks poor: the data are notoriously variable. We conclude that it is necessary to look beyond this coarse average measure of expansion to interpret the data and explain the excessive variance (Supplemental Note 4).

The characteristic biphasic shape of the predicted ‘fast’ plot (Fig. 1B) results from early, large expanded mutations, which are absent in the ‘slow’ scenario (Fig. 1A). We therefore used the data (Taylor et al., 2014) to estimate the size of expansions (Supplemental Note 5, Supplementary Table S1) and, in particular, to look for large expansions in young tissue. Indeed, young samples do contain large clonal expansions: there are four expansions of more than 1000 copies in samples from donors 30 years old and younger (Table S1). This is consistent with our own observations of large expansions of deletions in single neurons of the young brain using a different approach, single-molecule amplification (Kraytsberg et al., 2006). In other words, although the rapid expansion pattern in Fig. 2B is obscured by the large variance of the data, the hallmarks of fast expansion, that is, large early mutant expansions, are present in the tissue.

One aspect of the data, however, is at odds with both the fast and the slow scenario. The distribution of expansion sizes at any age is rather gradual; that is, there is a large proportion of expansions of intermediate size, ranging from the smallest detectable (typically about 10 molecules) up to those of more than 1000 molecules. In contrast, according to the ‘slow’ expansion scenario, all expansions should be of approximately the same size, which should increase with age, turning ‘large’ at approximately the same time. The fast scenario, also in contradiction with observations, predicts that the proportion of mutants contained in expansions of intermediate size markedly decreases with age (Supplemental Note 6).

If neither scenario fits the data, what kind of mutational dynamics could be responsible for the observed distribution (Taylor et al., 2014)? We believe the most plausible is a ‘mixed’ scenario, in which expansion is fast in some cells and slow in others (‘fast’ and ‘slow expanders’, correspondingly), probably with a whole spectrum of expansion rates in between. Expansion rates may differ between cell types, or between cells of the same type differing in individual activity, stress, levels of ROS, length of deletion (Fukui & Moraes, 2009), etc. An example of such a difference is given by myoblasts, which, unlike their descendant myofibers, support only very slow, if any, expansion of mtDNA deletions (Moraes et al., 1989).

What does this mean with respect to the question in the title of this commentary, namely when mtDNA deletions arise? If we accept the mixed scenario, it follows that the share of late mutations is at least significant. Indeed, if late mutations played little role, then accumulation of mutations should markedly decelerate with age: ‘fast expander’ cells become saturated with mutations early in life, and the increase in mutation load at older ages is driven by progressively ‘slower expanders’, meaning a slower increase in mutational load. In contrast with this prediction, the accumulation of deletions observed in most tissues appears to accelerate aggressively with age and is traditionally approximated with an exponential. This is also true for the Taylor et al. (2014) data, which are better fit by accelerating curves than by a linear function (Fig. S6). The fraction of deleted mtDNA increases over the lifespan by up to four orders of magnitude in highly affected brain areas such as the substantia nigra, and by about three orders of magnitude in less affected areas such as cortex (Meissner et al., 2008). In principle, even such a dramatic increase in mutant fraction might be entirely driven by expansion of early founder mutations in the slow scenario: neurons contain thousands of mtDNA copies, so expansion alone could potentially sustain about a four-orders-of-magnitude increase in mutant fraction, from a single founder mutant mtDNA to a fully mutant cell. However, the accelerated accumulation of (expanded) mutations in the mixed scenario can only be explained by the generation of de novo mutations at older ages.

The reality is probably more complicated than the idealized scenarios considered above. For example, cells with expanded mutations may die preferentially. If true, this would make the fast scenario and late origin even more plausible: it would mean that the actual number of mutations that have reached full expansion at any age is higher than observed (the extra mutations being those that had died), implying that mutations expand faster than they appear to. Other refinements of the model are certainly possible. However, the notable variability of the data makes testing hypotheses, particularly complex ones, difficult. Excessive variability of data on mtDNA deletions has been observed before (e.g., Meissner et al., 2008) but has never been duly explored. The lack of replicate analyses hampers understanding of the source of variance and of the shape of the frequency distributions of mutations, which are indispensable for interpreting the data. Future studies seeking to explain the dynamics of mutations with age must include multiple replicate measurements (Supplemental Note 7).

In conclusion, re-analysis of the data (Taylor et al., 2014) challenges the authors’ inference that the diversity of unique deletions remains constant with age and that expansion of pre-existing deletions is the primary factor contributing to age-related accumulation of mtDNA deletions. The data are more consistent with an increasing diversity of deletions and a significant impact of mutagenesis at older ages. However, the issue is far from settled, in part because of the high variability of the data, and it awaits more detailed studies (Supplemental Note 7).
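The two summary measures debated above, diversity normalized to the amount of mtDNA screened and the mean size of clonal expansions, can be written down concretely. A minimal sketch with invented numbers, not the Taylor et al. dataset:

    # Sketch: per-sample diversity and extent of expansion of mtDNA deletions.
    # Each entry maps a deletion type to the number of deleted molecules seen in one sample.
    sample = {"del_A": 1200, "del_B": 45, "del_C": 3, "del_D": 260}   # invented counts
    mtdna_copies_screened = 2.0e8      # total mtDNA genomes screened in this sample (invented)

    n_types = len(sample)
    diversity = n_types / mtdna_copies_screened * 1e10   # deletion types per 1e10 mtDNA
    expansion = sum(sample.values()) / n_types           # mean deleted molecules per deletion type

    print(f"diversity = {diversity:.1f} types per 1e10 mtDNA; mean expansion = {expansion:.0f} copies")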

18.

Background:

Coroners in Australia, Canada, New Zealand and other countries in the Commonwealth hold inquests into deaths in two situations. Mandatory inquests are held when statutory rules dictate they must be; discretionary inquests are held based on the decisions of individual coroners. Little is known as to how and why coroners select particular deaths for discretionary inquests.

Methods:

We analyzed the deaths investigated by Australian coroners for a period of seven and one-half years in five jurisdictions. We classified inquests as mandatory or discretionary. After excluding mandatory inquests, we used logistic regression analysis to identify the factors associated with coroners’ decisions to hold discretionary inquests.

Results:

Of 20 379 reported deaths due to external causes, 1252 (6.1%) proceeded to inquest. Of these inquests, 490 (39.1%) were mandatory and 696 (55.6%) were discretionary. In unadjusted analyses, the rates of discretionary inquests varied widely in terms of age of the decedent and cause of death. In adjusted analyses, the odds of discretionary inquests declined with the age of the decedent; the odds were highest for children (odds ratio [OR] 2.17, 95% confidence interval [CI] 1.54–3.06) and lowest for people aged 65 years and older (OR 0.38, 95% CI 0.28–0.51). Using poisoning as a reference cause of death, the odds of discretionary inquests were highest for fatal complications of medical care (OR 12.83, 95% CI 8.65–19.04) and lowest for suicides (OR 0.44, 95% CI 0.30–0.65).

Interpretation:

Deaths that coroners choose to take to inquest differ systematically from those they do not. Although this vetting process is invisible, it may influence the public’s understanding of safety risks, fatal injury and death.

In Anglo-American legal systems, coroners operate as an inquisitorial branch of the judiciary, investigating the cause and circumstances of deaths reported to them.1,2 For most of the deaths investigated, coroners’ findings follow an administrative review of documentary evidence, including reports of postmortem examinations, police reports and witness statements.2 However, a small selection of cases proceeds to an inquest — formal public hearings in which witnesses testify and parties connected to the death may retain lawyers. Many inquests draw public attention and coverage by media.3 They are arguably the most visible aspect of the work of coroners.

For coroners in Australia, Canada, New Zealand and many other countries in the Commonwealth, inquests are held for two main reasons. Statutes governing coroners’ courts dictate that inquests must be held in certain specified circumstances (mandatory inquests). For cases that fall outside the mandatory criteria, coroners may choose to hold an inquest (discretionary inquests). A great deal of variation in the rates of inquests is evident between and within countries (Table 1).1,4–6

Table 1:

Rates of coroners’ inquests in selected jurisdictions of Australia, the United Kingdom, New Zealand, the Republic of Ireland and Canada*
Jurisdiction and period: Inquests per 1000 reported deaths, no.
Australia
 New South Wales, 2000–2007: 49
 Victoria, 2000–2007: 45
 Queensland, 2001–2007: 50
 Western Australia, 2000–2007: 42
United Kingdom
 England and Wales, 2000–2007 (ref. 4): 122
 Scotland, 2001: 5
 Northern Ireland, 2001: 54
New Zealand, 2001: 286
Republic of Ireland, 2001: 185
Canada
 Ontario, 2001: 4
 British Columbia, 2002–2007 (refs. 5, 6): 2
*Unless otherwise stated, rates are adapted from data presented in the Luce report.1 Rates in all Australian jurisdictions were calculated directly from data in the National Coroners Information System. Procurators Fiscal perform an analogous role to coroners in Scotland; according to the Luce report, the deaths reported to and investigated by them are “comparable to the range handled in many coronial systems.”1

The vetting process for determining which cases are subject to a discretionary inquest is invisible, but it may influence the public’s understanding of risks, fatal injuries and untimely death. As such, profiling which cases are selected for such inquests is valuable. Furthermore, because the investigations and recommendations generated by inquests are the centrepiece of the coroner’s role in preventing untimely deaths, the vetting process can shape their contribution to public health and safety. We examined the characteristics of discretionary inquests to determine whether these cases differed systematically from those resolved through administrative investigations.  相似文献

19.

Background

Mental disorders are likely to be elevated in the Libyan population during the post-conflict period. We estimated cases of severe post-traumatic stress disorder (PTSD) and depression, and the related health service requirements, using modelling from existing epidemiological data and currently recommended mental health service targets for low- and middle-income countries (LMICs).

Methods

Post-conflict prevalence estimates were derived from models based on a previously conducted systematic review and meta-regression analysis of mental health among populations living in conflict. Political terror ratings and intensity of exposure to traumatic events were used in the predictive models. Prevalence of severe cases, with uncertainty ranges, was applied to the chosen populations. Six populations deemed to be affected by the conflict were chosen for modelling: Misrata (population of 444,812), Benghazi (pop. 674,094), Zintan (pop. 40,000), displaced people within Tripoli/Zlitan (pop. 49,000), displaced people within Misrata (pop. 25,000) and Ras Jdir camps (pop. 3,700). Proposed targets for service coverage, resource utilisation and full-time equivalent staffing for the management of severe cases of major depression and PTSD were based on a published model for LMICs.

Findings

Severe PTSD prevalence in populations exposed to a high level of political terror and traumatic events was estimated at 12.4% (95% CI 8.5–16.7); the corresponding estimate for severe depression was 19.8% (95% CI 14.0–26.3). Across all six populations (total population 1,236,600), the conflict could be associated with 123,200 (71,600–182,400) cases of severe PTSD and 228,100 (134,000–344,200) cases of severe depression; 50% of PTSD cases were estimated to co-occur with severe depression. Based on the service coverage targets, approximately 154 full-time equivalent staff would be required to respond adequately to these cases, which is substantially below the current level of resource estimates for these regions.
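A minimal sketch of the arithmetic behind estimates of this kind: a prevalence estimate and its uncertainty range applied to each affected population and summed. For simplicity the sketch applies the high-exposure prevalence to all six populations, whereas the study used exposure-specific model estimates, so these illustrative totals will not reproduce the published figures.

```python
# Populations identified in the Methods (sizes as given in the abstract).
populations = {
    "Misrata": 444_812,
    "Benghazi": 674_094,
    "Zintan": 40_000,
    "Displaced within Tripoli/Zlitan": 49_000,
    "Displaced within Misrata": 25_000,
    "Ras Jdir camps": 3_700,
}

# Severe-case prevalence (point, lower, upper) reported for populations exposed
# to high levels of political terror and traumatic events.
prevalence = {
    "severe PTSD": (0.124, 0.085, 0.167),
    "severe depression": (0.198, 0.140, 0.263),
}

total_pop = sum(populations.values())  # roughly 1,236,600

for disorder, (point, low, high) in prevalence.items():
    # Simplification: one prevalence level for all six populations.
    # The study applied exposure-specific estimates, so its totals differ.
    print(f"{disorder}: ~{total_pop * point:,.0f} cases "
          f"(uncertainty range {total_pop * low:,.0f} to {total_pop * high:,.0f})")
```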

Discussion

This is the first attempt to predict the mental health burden and consequent service response needs of such a conflict, and is crucially timed for Libya.  相似文献   

20.

Background:

The ratio of percutaneous coronary interventions to coronary artery bypass graft surgeries (PCI:CABG ratio) varies considerably across hospitals. We conducted a comprehensive study to identify clinical and nonclinical factors associated with variations in the ratio across 17 cardiac centres in the province of Ontario.

Methods:

In this retrospective cohort study, we selected a population-based sample of 8972 patients who underwent an index cardiac catheterization between April 2006 and March 2007 at any of 17 hospitals that perform invasive cardiac procedures in the province. We classified the hospitals into four groups by PCI:CABG ratio (low [< 2.0], low–medium [2.0–2.7], medium–high [2.8–3.2] and high [> 3.2]). We explored the relative contribution of patient, physician and hospital factors to variations in the likelihood of patients receiving PCI or CABG surgery within 90 days after the index catheterization.
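To illustrate the classification step, the following sketch computes each centre's PCI:CABG ratio from 90-day procedure counts and assigns it to the four ratio bands defined above. The centre names and counts are invented for illustration and are not study data.

```python
import pandas as pd

# Hypothetical 90-day procedure counts per cardiac centre (invented numbers).
counts = pd.DataFrame(
    {"PCI": [160, 250, 300, 460], "CABG": [100, 100, 100, 100]},
    index=["Centre A", "Centre B", "Centre C", "Centre D"],
)
counts["ratio"] = counts["PCI"] / counts["CABG"]

# Ratio bands defined in the Methods: low (<2.0), low-medium (2.0-2.7),
# medium-high (2.8-3.2), high (>3.2).
def ratio_group(r: float) -> str:
    if r < 2.0:
        return "low"
    if r <= 2.7:
        return "low-medium"
    if r <= 3.2:
        return "medium-high"
    return "high"

counts["ratio_group"] = counts["ratio"].apply(ratio_group)
print(counts)
```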

Results:

The mean PCI:CABG ratio was 2.7 overall. We observed a threefold variation in the ratios across the four hospital ratio groups, from a mean of 1.6 in the lowest ratio group to a mean of 4.6 in the highest ratio group. Patients with single-vessel disease usually received PCI (88.4%–99.0%) and those with left main artery disease usually underwent CABG (80.8%–94.2%), regardless of the hospital’s procedure ratio. Variation in the management of patients with non-emergent multivessel disease accounted for most of the variation in the ratios across hospitals. The mode of revascularization largely reflected the recommendation of the physician performing the diagnostic catheterization and was also influenced by the revascularization “culture” at the treating hospital.

Interpretation:

The physician performing the diagnostic catheterization and the treating hospital were strong independent predictors of the mode of revascularization. Opportunities exist to improve transparency and consistency around the decision-making process for coronary revascularization, most notably among patients with non-emergent multivessel disease.

Large inter-regional and inter-hospital variations exist in the ratio of percutaneous coronary intervention (PCI) procedures to coronary artery bypass graft (CABG) surgeries performed in many countries, but the reasons for these variations are uncertain.1–3 Bypass surgery was the first method of coronary revascularization to be developed.4 The less-invasive alternative of PCI was developed initially to treat single-vessel disease. However, advances in PCI technology (e.g., bare-metal stents and, later, drug-eluting stents) combined with increased operator experience have led to its use for a broader list of indications, including multivessel disease and acute coronary syndromes.5–7

In Ontario, Canada’s most populous province, the overall PCI:CABG ratio has steadily increased, from 1.6 in 2001 to 2.7 in 2006 (unpublished data available from the authors upon request); similar increases have been observed in other jurisdictions.1,2,8 Although the change in ratio has been driven in part by expanded use of urgent PCI for acute myocardial infarction (MI), increased use of PCI in patients with multivessel disease has likely also been a contributing factor. This application of PCI is more controversial, because several studies, including the recent randomized SYNTAX (Synergy Between Percutaneous Coronary Intervention with TAXUS and Cardiac Surgery) trial, have shown that long-term outcomes of certain patients with multivessel disease were better with CABG surgery than with PCI.9–13

In addition to an overall increase in the PCI:CABG ratio, the amount of variation in the ratio across cardiac centres in Ontario has also steadily increased over time, with more than a threefold regional variation observed in 2006 (unpublished data available from the authors upon request). This degree of variation has raised concerns among some policy-makers and clinicians as to why such striking variations exist in Ontario’s universal health care system. To address this issue, we conducted a comprehensive study to identify clinical and nonclinical factors associated with variations in the PCI:CABG ratio across the province’s 17 cardiac centres.  相似文献
