Similar articles
1.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.
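For readers unfamiliar with how "richness" and "diversity" are quantified from sequencing data, the short sketch below illustrates the idea on a toy per-sample count table. It is purely illustrative and not taken from the CHILD analysis: the counts, taxon names, and the choice of observed richness plus the Shannon index in plain Python are assumptions.

```python
import math

# Hypothetical 16S count table for one infant sample: taxon -> read count.
# These numbers are invented for illustration only.
sample_counts = {"Bifidobacterium": 5200, "Bacteroides": 310,
                 "Escherichia-Shigella": 120, "Clostridium": 45}

def richness(counts):
    """Observed richness: number of taxa with at least one read."""
    return sum(1 for c in counts.values() if c > 0)

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over relative abundances."""
    total = sum(counts.values())
    props = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log(p) for p in props)

print(richness(sample_counts), round(shannon(sample_counts), 3))
```

In this framing, a sample dominated by a single genus yields a low Shannon value even when several taxa are detected, which is one way "low diversity" in infants born by elective cesarean delivery can be expressed numerically.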

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.

The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3

Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11

The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14

Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies. Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19

Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.

2.
3.
Elucidating the temporal order of silencing
Izaurralde E. EMBO Reports 2012;13(8):662-663

4.
Robin Skinner, Steven McFaull. CMAJ 2012;184(9):1029-1034

Background:

Suicide is the second leading cause of death for young Canadians (10–19 years of age) — a disturbing trend that has shown little improvement in recent years. Our objective was to examine suicide trends among Canadian children and adolescents.

Methods:

We conducted a retrospective analysis of standardized suicide rates using Statistics Canada mortality data for the period spanning from 1980 to 2008. We analyzed the data by sex and by suicide method over time for two age groups: 10–14 year olds (children) and 15–19 year olds (adolescents). We quantified annual trends by calculating the average annual percent change (AAPC).
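The average annual percent change is conventionally obtained from the slope of a regression of log rates on calendar year, with AAPC = 100·(e^β − 1). The sketch below shows that arithmetic on invented rates; it is not the authors' code, and it uses a simple least-squares fit that does not reproduce the confidence intervals reported in the study.

```python
import numpy as np

# Hypothetical annual suicide rates per 100 000 (invented for illustration).
years = np.arange(1980, 1986)
rates = np.array([6.2, 6.1, 6.0, 5.9, 5.9, 5.8])

# Fit log(rate) = a + b * year; the AAPC is 100 * (exp(b) - 1).
slope, intercept = np.polyfit(years, np.log(rates), 1)
aapc = 100 * (np.exp(slope) - 1)
print(f"AAPC ≈ {aapc:.2f}% per year")
```

A negative AAPC, as in this toy series, corresponds to the average annual decrease reported in the Results below.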

Results:

We found an average annual decrease of 1.0% (95% confidence interval [CI] −1.5 to −0.4) in the suicide rate for children and adolescents, but stratification by age and sex showed significant variation. We saw an increase in suicide by suffocation among female children (AAPC = 8.1%, 95% CI 6.0 to 10.4) and adolescents (AAPC = 8.0%, 95% CI 6.2 to 9.8). In addition, we noted a decrease in suicides involving poisoning and firearms during the study period.

Interpretation:

Our results show that suicide rates in Canada are increasing among female children and adolescents and decreasing among male children and adolescents. Limiting access to lethal means has some potential to mitigate risk. However, suffocation, which has become the predominant method for committing suicide for these age groups, is not amenable to this type of primary prevention.

Suicide was ranked as the second leading cause of death among Canadians aged 10–34 years in 2008.1 It is recognized that suicidal behaviour and ideation is an important public health issue among children and adolescents; disturbingly, suicide is a leading cause of Canadian childhood mortality (i.e., among youths aged 10–19 years).2,3

Between 1980 and 2008, there were substantial improvements in mortality attributable to unintentional injury among 10–19 year olds, with rates decreasing from 37.7 per 100 000 to 10.7 per 100 000; suicide rates, however, showed less improvement, with only a small reduction during the same period (from 6.2 per 100 000 in 1980 to 5.2 per 100 000 in 2008).1

Previous studies that looked at suicides among Canadian adolescents and young adults (i.e., people aged 15–25 years) have reported rates as being generally stable over time, but with a marked increase in suicides by suffocation and a decrease in those involving firearms.2 There is limited literature on self-inflicted injuries among children 10–14 years of age in Canada and the United States, but there appears to be a trend toward younger children starting to self-harm.3,4 Furthermore, the trend of suicide by suffocation moving to younger ages may be partly due to cases of the “choking game” (self-strangulation without intent to cause permanent harm) that have been misclassified as suicides.5–7

Risk factors for suicidal behaviour and ideation in young people include a psychiatric diagnosis (e.g., depression), substance abuse, past suicidal behaviour, family factors and other life stressors (e.g., relationships, bullying) that have complex interactions.8 A suicide attempt involves specific intent, plans and availability of lethal means, such as firearms,9 elevated structures10 or substances.11 The existence of “pro-suicide” sites on the Internet and in social media12 may further increase risk by providing details of various ways to commit suicide, as well as evaluations ranking these methods by effectiveness, amount of pain involved and length of time to produce death.13–15

Our primary objective was to present the patterns of suicide among children and adolescents (aged 10–19 years) in Canada.

5.
Martinson BC. EMBO Reports 2011;12(8):758-762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.…questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientistsThe training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic ‘reproduction rates'' that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced or concerns about unintended consequences that may result from such overproduction. Yet recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level does competition become counter-productive.Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA as it is currently configured supports a feedback system of institutional incentives that generate excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training. 
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.Questions raised about whether too many scientists are being produced or concerns about the unintended consequences of such overproduction are less commonThe resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to ‘play it safe'' and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are likely able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding. In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). 
Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.Early last year, Francis Collins, Director of the NIH, published a PolicyForum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular. He notes the increasing stress in the biomedical research community after the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.”Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows. He observes that “only a select few will go on to become independent research scientists in academia”, and argues that “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009).His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work''”. 
He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.”If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edgeAlthough I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce.Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes. They would also ignore the fact that universities are generally not confronted with the externalities resulting from overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural'' funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on the interconnected assumptions that, as one of the primary ‘outputs'' or ‘products'' of the university, more doctorally trained individuals are always better than fewer, and because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research.“…the current system has succeeded in maximizing the amount of research […] it has also degraded the quality of graduate training and led to an overproduction of PhDs…”However, it has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005) a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007). Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). 
He noted the reliance of AMCs on the relatively low-cost, but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society''s investment in biomedical research and clinical care was continuously and sharply expanding.” From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J. de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963).In May 2009, echoing some of Korn''s observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control''” (Mervis, 2009).The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientistsAlthough the issue of what I will call the ‘academic birth rate'' is the central concern of this analysis, the ‘academic end-of-life'' also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University, “‘if I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support “the 100 people ‘I''ve trained […] to replace me''” (Kaiser, 2008). While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective. 
The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d''etre.The production of knowledge in science, particularly of the ‘revolutionary'' variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task. In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).…most of those who express such concerns have been reticent to acknowledge the role of universities themselves in creating and maintaining the situationBruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the “New Innovator Award,” the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science. 
Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new ‘innovators’ reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms.

Our studies on research integrity have been mostly oriented towards understanding how the influences within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010). My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness, in terms of scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology with steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal’ science? Would this not reduce the effectiveness of the institution of biomedical research? I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research is one way to insure against it.

6.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and developing world, but governments and industry are struggling to improve the situation.When people take medicine, they assume that it will make them better. However many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis and several dozens died after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].…many patients cannot trust their drugs to be effective or even safeThe extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9].Developing countries are particularly susceptible to substandard and fake medicine. Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). “People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system,” explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected combined with relatively low penalties has turned falsifying medicine into the “perfect crime” [2].Open in a separate windowFigure 1Women sell smuggled, counterfeit medicine on the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medecine sales rose by 15–25% in the past two years in Ivory Coast.Issouf Sanogo/AFP Photo/Getty Images.There are two main categories of illegitimate drugs. ‘Substandard'' medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified'' medicine is made with clear criminal intent. 
It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses it into pills [10]. Whilst falsified medicines do not typically contain any active ingredients, substandard medicine might contain subtherapeutic amounts. This is particularly problematic when it comes to anti-infectious drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13], and increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insuranceEven if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are interested in good-quality medicines, the different parties seem to have difficulties coming to terms with how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting quality of medicine [14,16].The concern that intellectual property (IP) interests threaten public health dates back to the ‘Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement'' of the World Trade Organization (WTO), adopted in 1994, to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine. Although it includes flexibility, such as the possibility for governments to grant compulsory licenses to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO''s Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multi-national pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. “We''re left with decisions being taken based on patents and trademarks that should be taken based on health,” commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA. 
“The health community is shooting themselves in the foot.”Conflating health care and IP issues are reflected in the unclear use of the term ‘counterfeit'' [2,14]. “Since the 1990s the World Health Organization (WHO) has used the term ‘counterfeit'' in the sense we now use ‘falsified'',” explained Hogerzeil. “The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit'' got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit''—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn''t make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but also the country seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to react is tragically ironic, as this stance hampers the growth of its own generic companies like Ranbaxy, Cipla or Piramal. “I certainly don''t believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because their products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company''s reputation and have a negative impact on its revenues when customers stop buying the product.The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine. It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multi-national drug companies and the possibility that issues on quality of medicines were conflated with the attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010. For example, it no longer hosts IMPACT''s secretariat at its headquarters in Geneva [2].‘Substandard'' medicines might result from poor quality ingredients, production errors and incorrect storage. ‘Falsified'' medicine is made with clear criminal intentIn 2010, the WHO''s member states established a working group to further investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper''s authors demand more action and propose a binding legal framework: a treaty. 
“Until we have stronger public health law, I don''t think that we are going to resolve this problem,” Bate, who is one of the authors of the paper, said.Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on understanding the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice such as a “voluntary soft law” that countries can sign to express their will to do better. “At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic,” Hogerzeil, who is also on the IOM committee, commented. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: “The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don''t start negotiating one.”Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempts to safeguard medicines need to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act. To counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies do better and by improving quality control of drug regulatory authorities.Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multi-national companies benefit from economies of scale to cope with these problems. But smaller companies often struggle and compromise in quality [2,21]. “India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small,” Hogerzeil explained. “And only by being strict, can you force them to combine and to become bigger industries that can afford good-quality assurance systems.” Clamping down on drug quality will therefore lead to a consolidation of the industry, which is an essential step. “If you look at Europe and the US, there were hundreds of drug companies—now there are dozens. And if you look at the situation in India and China today, there are thousands and that will have to come down to dozens as well,” Bate explained.…innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting […] medicineIn addition to consolidating the market by applying stricter rules, the IOM has also suggested measures for supporting companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards. 
Another suggestion is to harmonize market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.Once the medicine leaves the manufacturer, controlling distribution systems becomes another major challenge in combatting falsified and substandard medicine. Global drug supply chains have grown increasingly complicated; drugs cross borders, are sold back and forth between wholesalers and distributers, and are often repackaged. Still, there is a main difference between developing and developed countries. In the latter case, relatively few companies dominate the market, whereas in poorer nations, the distribution system is often fragmented and uncontrolled with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improve drug quality. “And we can start in the US,” Hogerzeil commented.…India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical productsDistribution could be improved at different levels, starting with the import of medicine. “There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in,” Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. “Everybody dealing with medicine has to be licensed,” Hogerzeil said. “And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent that falsified medicines enter the legal supply chain.” The last level would be a track-and-trace system to identify authentic drugs [2]. Every single package of medicine should be identifiable through an individual marker, such as a 3D bar code. Once it is sold, it is ticked off in a central database, so the marker cannot be reused.According to Hogerzeil, equivalent measures at these different levels should be established in every country. “I don''t believe in double standards”, he said. “Don''t say to Uganda: ‘you can''t do that''. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation.”Nigeria, for instance, has demonstrated that with enough political will, it is possible to reduce the proliferation of falsified and substandard medicine. Nigeria had been a major source for falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili''s successor, is committed to continuing her work [10]. 
Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized event, the former head of China’s State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China’s fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see quality of medicine as a priority. But they should, and affluent countries should help. Not only because health is a human right, but also for economic reasons. A great deal of time and money is invested into testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients. Falsified and substandard medicines are a financial burden to health systems and the emergence of drug-resistant pathogens might make invaluable medications useless. Investing in the safety of medicine is therefore a humane and an economic imperative.
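To make the track-and-trace idea described above concrete, here is a minimal sketch of a serial-number registry in which each pack is registered once and is invalidated when dispensed. The data structure, serial format, and function names are hypothetical and far simpler than any real national system; the point is only the "mark once, check off once" logic.

```python
# Hypothetical track-and-trace registry: serial number -> status.
registry = {}

def register_pack(serial):
    """Manufacturer registers a unique serial (e.g. encoded in a bar code)."""
    if serial in registry:
        raise ValueError("duplicate serial - possible falsification")
    registry[serial] = "in_supply_chain"

def dispense(serial):
    """Pharmacy checks a pack off; unknown or reused serials are flagged."""
    status = registry.get(serial)
    if status != "in_supply_chain":
        return "ALERT: unknown or already-dispensed serial"
    registry[serial] = "dispensed"
    return "OK"

register_pack("PK-0001")
print(dispense("PK-0001"))  # OK
print(dispense("PK-0001"))  # ALERT: the marker cannot be reused
```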

7.
Schultz AS, Finegan B, Nykiforuk CI, Kvern MA. CMAJ 2011;183(18):E1334-E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.2–5 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to second-hand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.6–11 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.6–11 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.14–16 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.

8.
Clinically, amniotic membrane (AM) suppresses inflammation, scarring, and angiogenesis. AM contains abundant hyaluronan (HA) but its function in exerting these therapeutic actions remains unclear. Herein, AM was extracted sequentially with buffers A, B, and C, or separately by phosphate-buffered saline (PBS) alone. Agarose gel electrophoresis showed that high molecular weight (HMW) HA (an average of ∼3000 kDa) was predominantly extracted in isotonic Extract A (70.1 ± 6.0%) and PBS (37.7 ± 3.2%). Western blot analysis of these extracts with hyaluronidase digestion or NaOH treatment revealed that HMW HA was covalently linked with the heavy chains (HCs) of inter-α-inhibitor (IαI) via a NaOH-sensitive bond, likely transferred by the tumor necrosis factor-α stimulated gene-6 protein (TSG-6). This HC·HA complex (nHC·HA) could be purified from Extract PBS by two rounds of CsCl/guanidine HCl ultracentrifugation as well as in vitro reconstituted (rcHC·HA) by mixing HMW HA, serum IαI, and recombinant TSG-6. Consistent with previous reports, Extract PBS suppressed transforming growth factor-β1 promoter activation in corneal fibroblasts and induced mac ro phage apo pto sis. However, these effects were abolished by hyaluronidase digestion or heat treatment. More importantly, the effects were retained in the nHC·HA or rcHC·HA. These data collectively suggest that the HC·HA complex is the active component in AM responsible in part for clinically observed anti-inflammatory and anti-scarring actions.Hyaluronan (HA)4 is widely distributed in extracellular matrices, tissues, body fluids, and even in intracellular compartments (reviewed in Refs. 1 and 2). The molecular weight of HA ranges from 200 to 10,000 kDa depending on the source (3), but can also exist as smaller fragments and oligosaccharides under certain physiological or pathological conditions (1). Investigations over the last 15 years have suggested that low Mr HA can induce the gene expression of proinflammatory mediators and proangiogenesis, whereas high molecular weight (HMW) HA inhibits these processes (47).Several proteins have been shown to bind to HA (8) such as aggrecan (9), cartilage link protein (10), versican (11), CD44 (12, 13), inter-α-inhibitor (IαI) (14, 15), and tumor necrosis factor-α stimulated gene-6 protein (TSG-6) (16, 17). IαI consists of two heavy chains (HCs) (HC1 and HC2), both of which are linked through ester bonds to a chondroitin sulfate chain that is attached to the light chain, i.e. bikunin. Among all HA-binding proteins, only the HCs of IαI have been clearly demonstrated to be covalently coupled to HA (14, 18). However, TSG-6 has also been reported to form stable, possibly covalent, complexes with HA, either alone (19, 20) or when associated with HC (21).The formation of covalent bonds between HCs and HA is mediated by TSG-6 (2224) where its expression is often induced by inflammatory mediators such as tumor necrosis factor-α and interleukin-1 (25, 26). TSG-6 is also expressed in inflammatory-like processes, such as ovulation (21, 27, 28) and cervical ripening (29). TSG-6 interacts with both HA (17) and IαI (21, 24, 3033), and is essential for covalently transferring HCs on to HA (2224). The TSG-6-mediated formation of the HC·HA complex has been demonstrated to play a crucial role in female fertility in mice. The HC·HA complex is an integral part of an expanded extracellular “cumulus” matrix around the oocyte, which plays a critical role in successful ovulation and fertilization in vivo (22, 34). 
HC·HA complexes have also been found at sites of inflammation (35–38), where their pro- or anti-inflammatory roles remain debated (39, 40). Immunostaining reveals abundant HA in the avascular stromal matrix of the AM (41, 42). In ophthalmology, cryopreserved AM has been widely used as a surgical graft for ocular surface reconstruction and exerts clinically observable actions to promote epithelial wound healing and to suppress inflammation, scarring, and angiogenesis (for reviews see Refs. 43–45). However, it is not clear whether HA in AM forms an HC·HA complex, and if so whether such an HC·HA complex exerts any of the above therapeutic actions. To address these questions, we extracted AM with buffers of increasing salt concentration. Because HMW HA was found to form the HC·HA complex and was mainly extractable by isotonic solutions, we further purified it from the isotonic AM extract and reconstituted it in vitro from three defined components, i.e. HMW HA, serum IαI, and recombinant TSG-6. Our results showed that the HC·HA complex is an active component in AM responsible for the suppression of TGF-β1 promoter activity, linkable to the scarring process noted before by AM (46–48) and by the AM soluble extract (49), as well as for the promotion of macrophage death, linkable to the inflammatory process noted by AM (50) and the AM soluble extract (51).

9.
10.
A central question in Wnt signaling is the regulation of β-catenin phosphorylation and degradation. Multiple kinases, including CKIα and GSK3, are involved in β-catenin phosphorylation. Protein phosphatases such as PP2A and PP1 have been implicated in the regulation of β-catenin. However, which phosphatase dephosphorylates β-catenin in vivo and how the specificity of β-catenin dephosphorylation is regulated are not clear. In this study, we show that PP2A regulates β-catenin phosphorylation and degradation in vivo. We demonstrate that PP2A is required for Wnt/β-catenin signaling in Drosophila. Moreover, we have identified PR55α as the regulatory subunit of PP2A that controls β-catenin phosphorylation and degradation. PR55α, but not the catalytic subunit, PP2Ac, directly interacts with β-catenin. RNA interference knockdown of PR55α elevates β-catenin phosphorylation and decreases Wnt signaling, whereas overexpressing PR55α enhances Wnt signaling. Taken together, our results suggest that PR55α specifically regulates PP2A-mediated β-catenin dephosphorylation and plays an essential role in Wnt signaling. Wnt/β-catenin signaling plays essential roles in development and tumorigenesis (1–3). Our previous work found that β-catenin is sequentially phosphorylated by CKIα and GSK3 (4), which creates a binding site for β-Trcp (5), leading to degradation via the ubiquitination/proteasome machinery (3). Mutations in β-catenin or APC genes that prevent β-catenin phosphorylation or ubiquitination/degradation lead ultimately to cancer (1, 2). In addition to the involvement of kinases, protein phosphatases, such as PP1, PP2A, and PP2C, are also implicated in Wnt/β-catenin regulation. PP2C and PP1 may regulate dephosphorylation of Axin and play positive roles in Wnt signaling (6, 7). PP2A is a multisubunit enzyme (8–10); it has been reported to play either positive or negative roles in Wnt signaling likely by targeting different components (11–21). Toward the goal of understanding the mechanism of β-catenin phosphorylation, we carried out siRNA screening targeting several major phosphatases, in which we found that PP2A dephosphorylates β-catenin. This is consistent with a recent study where PP2A is shown to dephosphorylate β-catenin in a cell-free system (18). PP2A consists of a catalytic subunit (PP2Ac), a structural subunit (PR65/A), and variable regulatory B subunits (PR/B, PR/B′, PR/B″, or PR/B‴). The substrate specificity of PP2A is thought to be determined by its B subunit (9). By siRNA screening, we further identified that PR55α, a regulatory subunit of PP2A, specifically regulates β-catenin phosphorylation and degradation. Mechanistically, we found that PR55α directly interacts with β-catenin and regulates PP2A-mediated β-catenin dephosphorylation in Wnt signaling.

11.
Paneth cells are a secretory epithelial lineage that release dense core granules rich in host defense peptides and proteins from the base of small intestinal crypts. Enteric α-defensins, termed cryptdins (Crps) in mice, are highly abundant in Paneth cell secretions and inherently resistant to proteolysis. Accordingly, we tested the hypothesis that enteric α-defensins of Paneth cell origin persist in a functional state in the mouse large bowel lumen. To test this idea, putative Crps purified from mouse distal colonic lumen were characterized biochemically and assayed in vitro for bactericidal peptide activities. The peptides comigrated with cryptdin control peptides in acid-urea-PAGE and SDS-PAGE, providing identification as putative Crps. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry experiments showed that the molecular masses of the putative α-defensins matched those of the six most abundant known Crps, as well as N-terminally truncated forms of each, and that the peptides contain six Cys residues, consistent with identities as α-defensins. N-terminal sequencing definitively revealed peptides with N termini corresponding to full-length, (des-Leu)-truncated, and (des-Leu-Arg)-truncated N termini of Crps 1–4 and 6. Crps from mouse large bowel lumen were bactericidal in the low micromolar range. Thus, Paneth cell α-defensins secreted into the small intestinal lumen persist as intact and functional forms throughout the intestinal tract, suggesting that the peptides may mediate enteric innate immunity in the colonic lumen, far from their upstream point of secretion in small intestinal crypts.Antimicrobial peptides (AMPs)2 are released by epithelial cells onto mucosal surfaces as effectors of innate immunity (15). In mammals, most AMPs derive from two major families, the cathelicidins and defensins (6). The defensins comprise the α-, β-, and θ-defensin subfamilies, which are defined by the presence of six cysteine residues paired in characteristic tridisulfide arrays (7). α-Defensins are highly abundant in two primary cell lineages: phagocytic leukocytes, primarily neutrophils, of myeloid origin and Paneth cells, which are secretory epithelial cells located at the base of the crypts of Lieberkühn in the small intestine (810). Neutrophil α-defensins are stored in azurophilic granules and contribute to non-oxidative microbial cell killing in phagolysosomes (11, 12), except in mice whose neutrophils lack defensins (13). In the small bowel, α-defensins and other host defense proteins (1418) are released apically as components of Paneth cell secretory granules in response to cholinergic stimulation and after exposure to bacterial antigens (19). Therefore, the release of Paneth cell products into the crypt lumen is inferred to protect mitotically active crypt cells from colonization by potential pathogens and confer protection against enteric infection (7, 20, 21).Under normal, homeostatic conditions, Paneth cells are not found outside the small bowel, although they may appear ectopically in response to local inflammation throughout the gastrointestinal tract (22, 23). Paneth cell numbers increase progressively throughout the small intestine, occurring at highest numbers in the distal ileum (24). Mouse Paneth cells express numerous α-defensin isoforms, termed cryptdins (Crps) (25), that have broad spectrum antimicrobial activities (6, 26). 
Collectively, α-defensins constitute approximately seventy percent of the bactericidal peptide activity in mouse Paneth cell secretions (19), selectively killing bacteria by membrane-disruptive mechanisms (2730). The role of Paneth cell α-defensins in gastrointestinal mucosal immunity is evident from studies of mice transgenic for human enteric α-defensin-5, HD-5, which are immune to infection by orally administered Salmonella enterica sv. typhimurium (S. typhimurium) (31).The biosynthesis of mature, bactericidal α-defensins from their inactive precursors requires activation by lineage-specific proteolytic convertases. In mouse Paneth cells, inactive ∼8.4-kDa Crp precursors are processed intracellularly into microbicidal ∼4-kDa Crps by specific cleavage events mediated by matrix metalloproteinase-7 (MMP-7) (32, 33). MMP-7 null mice exhibit increased susceptibility to systemic S. typhimurium infection and decreased clearance of orally administered non-invasive Escherichia coli (19, 32). Although the α-defensin proregions are sensitive to proteolysis, the mature, disulfide-stabilized peptides resist digestion by their converting enzymes in vitro, whether the convertase is MMP-7 (32), trypsin (34), or neutrophil serine proteinases (35). Because α-defensins resist proteolysis in vitro, we hypothesized that Paneth cell α-defensins resist degradation and remain in a functional state in the large bowel, a complex, hostile environment containing varied proteases of both host and microbial origin.Here, we report on the isolation and characterization of a population of enteric α-defensins from the mouse colonic lumen. Full-length and N-terminally truncated Paneth cell α-defensins were identified and are abundant in the distal large bowel lumen.  相似文献   

12.

Background

The pathogenesis of appendicitis is unclear. We evaluated whether exposure to air pollution was associated with an increased incidence of appendicitis.

Methods

We identified 5191 adults who had been admitted to hospital with appendicitis between Apr. 1, 1999, and Dec. 31, 2006. The air pollutants studied were ozone, nitrogen dioxide, sulfur dioxide, carbon monoxide, and suspended particulate matter of less than 10 μm and less than 2.5 μm in diameter. We estimated the odds of appendicitis relative to short-term increases in concentrations of selected pollutants, alone and in combination, after controlling for temperature and relative humidity as well as the effects of age, sex and season.
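The odds ratios reported below are expressed per interquartile-range (IQR) increase in pollutant concentration. As a rough illustration of how such an estimate arises, the following sketch scales a per-unit log-odds coefficient (of the kind produced by the study's regression models) by the pollutant's IQR; the coefficient and IQR values are hypothetical and are not taken from the study.

    import numpy as np

    # Hypothetical inputs, for illustration only (not values from this study)
    beta_per_ppb = 0.0087   # fitted log-odds of appendicitis per 1 ppb increase in 5-day average ozone
    iqr_ppb = 15.0          # interquartile range of the 5-day average ozone concentration

    # Rescale the per-unit coefficient to an IQR increase, then exponentiate to get the odds ratio
    or_per_iqr = np.exp(beta_per_ppb * iqr_ppb)
    print(round(or_per_iqr, 2))   # ~1.14 with these made-up inputs; CI limits rescale the same way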

Results

An increase in the interquartile range of the 5-day average of ozone was associated with appendicitis (odds ratio [OR] 1.14, 95% confidence interval [CI] 1.03–1.25). In summer (July–August), the effects were most pronounced for ozone (OR 1.32, 95% CI 1.10–1.57), sulfur dioxide (OR 1.30, 95% CI 1.03–1.63), nitrogen dioxide (OR 1.76, 95% CI 1.20–2.58), carbon monoxide (OR 1.35, 95% CI 1.01–1.80) and particulate matter less than 10 μm in diameter (OR 1.20, 95% CI 1.05–1.38). We observed a significant effect of the air pollutants in the summer months among men but not among women (e.g., OR for increase in the 5-day average of nitrogen dioxide 2.05, 95% CI 1.21–3.47, among men and 1.48, 95% CI 0.85–2.59, among women). The double-pollutant model of exposure to ozone and nitrogen dioxide in the summer months was associated with attenuation of the effects of ozone (OR 1.22, 95% CI 1.01–1.48) and nitrogen dioxide (OR 1.48, 95% CI 0.97–2.24).

Interpretation

Our findings suggest that some cases of appendicitis may be triggered by short-term exposure to air pollution. If these findings are confirmed, measures to improve air quality may help to decrease rates of appendicitis. Appendicitis was introduced into the medical vernacular in 1886.1 Since then, the prevailing theory of its pathogenesis implicated an obstruction of the appendiceal orifice by a fecalith or lymphoid hyperplasia.2 However, this notion does not completely account for variations in incidence observed by age,3,4 sex,3,4 ethnic background,3,4 family history,5 temporal–spatial clustering6 and seasonality,3,4 nor does it completely explain the trends in incidence of appendicitis in developed and developing nations.3,7,8 The incidence of appendicitis increased dramatically in industrialized nations in the 19th century and in the early part of the 20th century.1 Without explanation, it decreased in the middle and latter part of the 20th century.3 The decrease coincided with legislation to improve air quality. For example, after the United States Clean Air Act was passed in 1970,9 the incidence of appendicitis decreased by 14.6% from 1970 to 1984.3 Likewise, a 36% drop in incidence was reported in the United Kingdom between 1975 and 199410 after legislation was passed in 1956 and 1968 to improve air quality and in the 1970s to control industrial sources of air pollution. Furthermore, appendicitis is less common in developing nations; however, as these countries become more industrialized, the incidence of appendicitis has been increasing.7 Air pollution is known to be a risk factor for multiple conditions, to exacerbate disease states and to increase all-cause mortality.11 It has a direct effect on pulmonary diseases such as asthma11 and on nonpulmonary diseases including myocardial infarction, stroke and cancer.11–13 Inflammation induced by exposure to air pollution contributes to some adverse health effects.14–17 Similar to the effects of air pollution, a proinflammatory response has been associated with appendicitis.18–20 We conducted a case–crossover study involving a population-based cohort of patients admitted to hospital with appendicitis to determine whether short-term increases in concentrations of selected air pollutants were associated with hospital admission because of appendicitis.

13.
Myofilament proteins are responsible for cardiac contraction. The myofilament subproteome, however, has not been comprehensively analyzed thus far. In the present study, cardiomyocytes were isolated from rodent hearts and stimulated with endothelin-1 and isoproterenol, potent inducers of myofilament protein phosphorylation. Subsequently, cardiomyocytes were “skinned,” and the myofilament subproteome was analyzed using a high mass accuracy ion trap tandem mass spectrometer (LTQ Orbitrap XL) equipped with electron transfer dissociation. As expected, a small number of myofilament proteins constituted the majority of the total protein mass with several known phosphorylation sites confirmed by electron transfer dissociation. More than 600 additional proteins were identified in the cardiac myofilament subproteome, including kinases and phosphatase subunits. The proteomic comparison of myofilaments from control and treated cardiomyocytes suggested that isoproterenol treatment altered the subcellular localization of protein phosphatase 2A regulatory subunit B56α. Immunoblot analysis of myocyte fractions confirmed that β-adrenergic stimulation by isoproterenol decreased the B56α content of the myofilament fraction in the absence of significant changes for the myosin phosphatase target subunit isoforms 1 and 2 (MYPT1 and MYPT2). Furthermore, immunolabeling and confocal microscopy revealed the spatial redistribution of these proteins with a loss of B56α from Z-disc and M-band regions but increased association of MYPT1/2 with A-band regions of the sarcomere following β-adrenergic stimulation. In summary, we present the first comprehensive proteomics data set of skinned cardiomyocytes and demonstrate the potential of proteomics to unravel dynamic changes in protein composition that may contribute to the neurohormonal regulation of myofilament contraction.Myofilament proteins comprise the fundamental contractile apparatus of the heart, the cardiac sarcomere. They are subdivided into thin filament proteins, including actin, tropomyosin, the troponin complex (troponin C, troponin I, and troponin T), and thick filament proteins, including myosin heavy chains, myosin light chains, and myosin-binding protein C. Although calcium is the principal regulator of cardiac contraction through the excitation-contraction coupling process that culminates in calcium binding to troponin C, myofilament function is also significantly modulated by phosphorylation of constituent proteins, such as cardiac troponin I (cTnI),1 cardiac myosin-binding protein C (cMyBP-C), and myosin regulatory light chain (MLC-2). “Skinned” myocyte preparations from rodent hearts, in which the sarcolemmal envelope is disrupted through the use of detergents, have been invaluable in providing mechanistic information on the functional consequences of myofilament protein phosphorylation following exposure to neurohormonal stimuli that activate pertinent kinases prior to skinning or direct exposure to such kinases in active form after skinning (for recent examples, see studies on the phosphorylation of cTnI (13), cMyBP-C (46), and MLC-2 (79)). Nevertheless, to date, only a few myofilament proteins have been studied using proteomics (1019), and a detailed proteomic characterization of the myofilament subproteome and its associated proteins from skinned myocytes has not been performed. In the present analysis, we used an LTQ Orbitrap XL equipped with ETD (20) to analyze the subproteome of skinned cardiomyocytes with or without prior stimulation. 
Endothelin-1 and isoproterenol were used to activate the endothelin receptor/protein kinase C and β-adrenoreceptor/protein kinase A pathways, respectively (21, 22). Importantly, the mass accuracy of the Orbitrap mass analyzer helped to distinguish true phosphorylation sites from false assignments, and the sensitivity of the ion trap provided novel insights into the translocation of phosphatase regulatory and targeting subunits following β-adrenergic stimulation.
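To illustrate why high mass accuracy matters when accepting or rejecting a phosphorylation-site assignment, the short sketch below computes the mass error of a measurement in parts per million (ppm); the peptide masses are invented for the example and do not come from this study.

    # Hypothetical m/z values (Da) for a candidate phosphopeptide assignment
    theoretical_mz = 842.5099   # calculated m/z for the proposed phosphopeptide
    measured_mz = 842.5121      # observed m/z on the Orbitrap analyzer

    ppm_error = (measured_mz - theoretical_mz) / theoretical_mz * 1e6
    print(round(ppm_error, 1))  # ~2.6 ppm; a few-ppm tolerance rules out many false assignments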

14.
Rachel Mann, Joy Adamson, Simon M. Gilbody. CMAJ 2012;184(8):E424–E430

Background:

Guidelines for perinatal mental health care recommend the use of two case-finding questions about depressed feelings and loss of interest in activities, despite the absence of validation studies in this context. We examined the diagnostic accuracy of these questions and of a third question about the need for help asked of women receiving perinatal care.

Methods:

We evaluated self-reported responses to two case-finding questions against an interviewer-assessed diagnostic standard (DSM-IV criteria for major depressive disorder) among 152 women receiving antenatal care at 26–28 weeks’ gestation and postnatal care at 5–13 weeks after delivery. Among women who answered “yes” to either question, we assessed the usefulness of asking a third question about the need for help. We calculated sensitivity, specificity and likelihood ratios for the two case-finding questions and for the added question about the need for help.
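For readers less familiar with these measures, the sketch below shows how sensitivity, specificity and likelihood ratios follow from a 2 × 2 table of screening result versus diagnostic-standard result; the counts are hypothetical and chosen only to keep the arithmetic simple, not taken from the study data.

    # Hypothetical 2 x 2 table: case-finding questions vs. DSM-IV major depressive disorder
    tp, fn = 18, 2    # diagnosed women who screened positive / negative
    fp, tn = 24, 56   # non-depressed women who screened positive / negative

    sensitivity = tp / (tp + fn)                     # 0.90
    specificity = tn / (tn + fp)                     # 0.70
    lr_positive = sensitivity / (1 - specificity)    # ~3.0: a positive result raises the odds of depression
    lr_negative = (1 - sensitivity) / specificity    # ~0.14: a negative result lowers the odds
    print(sensitivity, specificity, round(lr_positive, 2), round(lr_negative, 2))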

Results:

Antenatally, the two case-finding questions had a sensitivity of 100% (95% confidence interval [CI] 77%–100%), a specificity of 68% (95% CI 58%–76%), a positive likelihood ratio of 3.03 (95% CI 2.28–4.02) and a negative likelihood ratio of 0.041 (95% CI 0.003–0.63) in identifying perinatal depression. Postnatal results were similar. Among the women who screened positive antenatally, the additional question about the need for help had a sensitivity of 58% (95% CI 38%–76%), a specificity of 91% (95% CI 78%–97%), a positive likelihood ratio of 6.86 (95% CI 2.16–21.7) and a negative likelihood ratio of 0.45 (95% CI 0.25–0.80), with lower sensitivity and higher specificity postnatally.

Interpretation:

Negative responses to both of the case-finding questions showed acceptable accuracy for ruling out perinatal depression. For positive responses, the use of a third question about the need for help improved specificity and the ability to rule in depression.The occurrence of depressive symptoms during the perinatal period is well-recognized. The estimated prevalence is 7.4%–20% antenatally1,2 and up to 19.2% in the first three postnatal months.3 Antenatal depression is associated with malnutrition, substance and alcohol abuse, poor self-reported health, poor use of antenatal care services and adverse neonatal outcomes.4 Postnatal depression has a substantial impact on the mother and her partner, the family, mother–baby interaction and on the longer-term emotional and cognitive development of the baby.5Screening strategies to identify perinatal depression have been advocated, and specific questionnaires for use in the perinatal period, such as the Edinburgh Postnatal Depression Scale,6 were developed. However, in their current recommendations, the UK National Screening Committee7 and the US Committee on Obstetric Practice8 state that there is insufficient evidence to support the implementation of universal perinatal screening programs. The initial decision in 2001 by the National Screening Committee to not support universal perinatal screening9 attracted particular controversy in the United Kingdom; some service providers subsequently withdrew resources for treatment of postnatal depression, and subsequent pressure by perinatal community practitioners led to modification of the screening guidance in order to clarify the role of screening questionnaires in the assessment of perinatal depression.10In 2007, the National Institute for Health and Clinical Excellence issued clinical guidelines for perinatal mental health care in the UK, which included guidance on the use of questionnaires to identify antenatal and postnatal depression.11 In this guidance, a case-finding approach to identify perinatal depression was strongly recommended; it involved the use of two case-finding questions (sometimes referred to as the Whooley questions), and an additional question about the need for help asked of women who answered “yes” to either of the initial questions (Box 1).

Box 1:

Case-finding questions recommended for the identification of perinatal depression10

  • “During the past month, have you often been bothered by feeling down, depressed or hopeless?”
  • “During the past month, have you often been bothered by having little interest or pleasure in doing things?”
  • A third question should be considered if the woman answers “yes” to either of the initial screening questions: “Is this something you feel you need or want help with?”
Useful case-finding questions should be both sensitive and specific so they accurately identify those with and without the condition. The two case-finding questions have been validated in primary care samples12,13 and examined in other clinical populations14–16 and are endorsed in recommendations by US and Canadian bodies for screening depression in adults.17,18 However, at the time the guidance from the National Institute for Health and Clinical Excellence was issued, there were no validation studies conducted in perinatal populations. A recent systematic review19 identified one study conducted in the United States that validated the two questions against established diagnostic criteria in 506 women attending well-child visits postnatally;20 sensitivity and specificity of the questions were 100% and 44% respectively at four weeks. The review failed to identify studies that validated the two questions and the additional question about the need for help against a gold-standard measure. We conducted a validation study to assess the diagnostic accuracy of this brief case-finding approach against gold-standard psychiatric diagnostic criteria for depression in a population of women receiving perinatal care.

15.

Background

Fractures have largely been assessed by their impact on quality of life or health care costs. We conducted this study to evaluate the relation between fractures and mortality.

Methods

A total of 7753 randomly selected people (2187 men and 5566 women) aged 50 years and older from across Canada participated in a 5-year observational cohort study. Incident fractures were identified on the basis of validated self-report and were classified by type (vertebral, pelvic, forearm or wrist, rib, hip and “other”). We subdivided fracture groups by the year in which the fracture occurred during follow-up; those occurring in the fourth and fifth years were grouped together. We examined the relation between the time of the incident fracture and death.
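The adjusted hazard ratios reported below come from survival models of time to death. As a schematic of that type of analysis, the sketch fits a Cox proportional-hazards model to simulated data with a single fracture indicator and age as covariates; the simulated data, the simplified covariates and the use of the lifelines package are assumptions for illustration, not the authors' actual analysis (which coded fracture type and the follow-up year in which the fracture occurred).

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Simulated cohort, for illustration only
    rng = np.random.default_rng(0)
    n = 200
    fracture = rng.integers(0, 2, n)                 # 1 = incident vertebral fracture during follow-up
    age = rng.normal(68, 8, n)
    hazard = 0.03 * np.exp(1.0 * fracture + 0.04 * (age - 68))
    time_to_death = rng.exponential(1 / hazard)
    df = pd.DataFrame({
        "time": np.minimum(time_to_death, 5.0),      # administrative censoring at 5 years
        "died": (time_to_death < 5.0).astype(int),
        "fracture": fracture,
        "age": age,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="died")
    cph.print_summary()                              # the exp(coef) column gives adjusted hazard ratios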

Results

Compared with participants who had no fracture during follow-up, those who had a vertebral fracture in the second year were at increased risk of death (adjusted hazard ratio [HR] 2.7, 95% confidence interval [CI] 1.1–6.6); also at risk were those who had a hip fracture during the first year (adjusted HR 3.2, 95% CI 1.4–7.4). Among women, the risk of death was increased for those with a vertebral fracture during the first year (adjusted HR 3.7, 95% CI 1.1–12.8) or the second year of follow-up (adjusted HR 3.2, 95% CI 1.2–8.1). The risk of death was also increased among women with hip fracture during the first year of follow-up (adjusted HR 3.0, 95% CI 1.0–8.7).

Interpretation

Vertebral and hip fractures are associated with an increased risk of death. Interventions that reduce the incidence of these fractures need to be implemented to improve survival. Osteoporosis-related fractures are a major health concern, affecting a growing number of individuals worldwide. The burden of fracture has largely been assessed by the impact on health-related quality of life and health care costs.1,2 Fractures can also be associated with death. However, trials that have examined the relation between fractures and mortality have had limitations that may influence their results and the generalizability of the studies, including small samples,3,4 the examination of only 1 type of fracture,4–10 the inclusion of only women,8,11 the enrolment of participants from specific areas (i.e., hospitals or certain geographic regions),3,4,7,8,10,12 the nonrandom selection of participants3–11 and the lack of statistical adjustment for confounding factors that may influence mortality.3,5–7,12 We evaluated the relation between incident fractures and mortality over a 5-year period in a cohort of men and women 50 years of age and older. In addition, we examined whether other characteristics of participants were risk factors for death.

16.

Background:

Optimization of systolic blood pressure and lipid levels are essential for secondary prevention after ischemic stroke, but there are substantial gaps in care, which could be addressed by nurse- or pharmacist-led care. We compared 2 types of case management (active prescribing by pharmacists or nurse-led screening and feedback to primary care physicians) in addition to usual care.

Methods:

We performed a prospective randomized controlled trial involving adults with recent minor ischemic stroke or transient ischemic attack whose systolic blood pressure or lipid levels were above guideline targets. Participants in both groups had a monthly visit for 6 months with either a nurse or pharmacist. Nurses measured cardiovascular risk factors, counselled patients and faxed results to primary care physicians (active control). Pharmacists did all of the above as well as prescribed according to treatment algorithms (intervention).

Results:

Most of the 279 study participants (mean age 67.6 yr, mean systolic blood pressure 134 mm Hg, mean low-density lipoprotein [LDL] cholesterol 3.23 mmol/L) were already receiving treatment at baseline (antihypertensives: 78.1%; statins: 84.6%), but none met guideline targets (systolic blood pressure ≤ 140 mm Hg, fasting LDL cholesterol ≤ 2.0 mmol/L). Substantial improvements were observed in both groups after 6 months: 43.4% of participants in the pharmacist case manager group met both systolic blood pressure and LDL guideline targets compared with 30.9% in the nurse-led group (12.5% absolute difference; number needed to treat = 8, p = 0.03).
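The number needed to treat quoted above follows directly from the two proportions meeting both targets; a minimal check of that arithmetic, using the rounded percentages from this abstract:

    # Proportions meeting both guideline targets at 6 months (rounded values from the abstract)
    pharmacist_led = 0.434
    nurse_led = 0.309

    absolute_difference = pharmacist_led - nurse_led   # 0.125, i.e. 12.5 percentage points
    nnt = 1 / absolute_difference                      # 8 patients treated per additional patient at target
    print(round(absolute_difference, 3), round(nnt))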

Interpretation:

Compared with nurse-led case management (risk factor evaluation, counselling and feedback to primary care providers), active case management by pharmacists substantially improved risk factor control at 6 months among patients who had experienced a stroke. Trial registration: ClinicalTrials.gov, no. NCT00931788. The risk of cardiovascular events is high for patients who survive a stroke or transient ischemic attack.1,2 Treatment of hypertension and dyslipidemia can substantially reduce this risk.3–7 However, vascular risk factors are often suboptimally managed after stroke or transient ischemic attack, even among patients admitted to hospital or seen in specialized stroke prevention clinics.8–10 Multiple barriers are responsible for the suboptimal control of risk factors, and traditional means of educating practitioners and patients have limited effectiveness.11 Although it has been suggested that “case managers” may be able to improve the management of risk factors, evidence is sparse and inconsistent between studies.12–16 The most recent Cochrane review on this topic concluded that “nurse- or pharmacist-led care may be a promising way forward … but these interventions require further evaluation.”16 Thus, we designed this trial to evaluate whether a pharmacist case manager could improve risk factors among survivors of stroke or transient ischemic attack.17 Because we have previously shown that hypertension control can be improved by monthly evaluation by nurses (with patient counselling and faxing of blood pressure measurements with guideline recommendations to primary care physicians),18 and this is an alternate method of case management implemented in many health organizations, we used this approach as the active control group for this study. Thus, our study represents a controlled comparison of 2 modes of case management: active prescribing (pharmacist-led case management) versus screening and delegating to primary care physicians (nurse-led case management).

17.

Background:

The ABCD2 score (Age, Blood pressure, Clinical features, Duration of symptoms and Diabetes) is used to identify patients having a transient ischemic attack who are at high risk for imminent stroke. However, despite its widespread implementation, the ABCD2 score has not yet been prospectively validated. We assessed the accuracy of the ABCD2 score for predicting stroke at 7 (primary outcome) and 90 days.

Methods:

This prospective cohort study enrolled adults from eight Canadian emergency departments who had received a diagnosis of transient ischemic attack. Physicians completed data forms with the ABCD2 score before disposition. The outcome criterion, stroke, was established by a treating neurologist or by an Adjudication Committee. We calculated the sensitivity and specificity for predicting stroke 7 and 90 days after visiting the emergency department using the original “high-risk” cutpoint of an ABCD2 score of more than 5, and the American Heart Association recommendation of a score of more than 2.
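A minimal sketch of the cut-point analysis described here: given each patient's ABCD2 score and whether stroke occurred within 7 days, sensitivity and specificity at a chosen threshold are computed as below. The scores and outcomes are hypothetical; the study's actual data are not reproduced.

    import numpy as np

    # Hypothetical ABCD2 scores (0-7) and 7-day stroke outcomes (1 = stroke)
    scores = np.array([6, 3, 2, 5, 7, 1, 4, 3, 6, 2])
    stroke = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])

    def sens_spec(cutpoint):
        positive = scores > cutpoint                  # classified as high risk
        tp = np.sum(positive & (stroke == 1))
        fn = np.sum(~positive & (stroke == 1))
        tn = np.sum(~positive & (stroke == 0))
        fp = np.sum(positive & (stroke == 0))
        return tp / (tp + fn), tn / (tn + fp)

    print(sens_spec(5))   # original "high-risk" cut-point (score > 5)
    print(sens_spec(2))   # American Heart Association cut-point (score > 2)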

Results:

We enrolled 2056 patients (mean age 68.0 yr, 1046 (50.9%) women) who had a rate of stroke of 1.8% at 7 days and 3.2% at 90 days. An ABCD2 score of more than 5 had a sensitivity of 31.6% (95% confidence interval [CI] 19.1–47.5) for stroke at 7 days and 29.2% (95% CI 19.6–41.2) for stroke at 90 days. An ABCD2 score of more than 2 resulted in sensitivity of 94.7% (95% CI 82.7–98.5) for stroke at 7 days with a specificity of 12.5% (95% CI 11.2–14.1). The accuracy of the ABCD2 score as calculated by either the enrolling physician (area under the curve 0.56; 95% CI 0.47–0.65) or the coordinating centre (area under the curve 0.65; 95% CI 0.57–0.73) was poor.

Interpretation:

This multicentre prospective study involving patients in emergency departments with transient ischemic attack found the ABCD2 score to be inaccurate, at any cut-point, as a predictor of imminent stroke. Furthermore, the ABCD2 score of more than 2 that is recommended by the American Heart Association is nonspecific.There are approximately 100 visits to the emergency department per 100 000 population for transient ischemic attack each year.1 Although often considered benign, transient ischemic attack carries a risk of imminent stroke. Studies have shown that the risk of stroke is 0.2%–10% within 7 days of the first transient ischemic attack, and this risk increases to 1.2%–12% at 90 days.29 Stroke continues to be the leading cause of disability among adults and the third-leading cause of death in North America.10,11 Identifying people with transient ischemic attack who are at high risk of stroke is an opportunity to prevent stroke.3,4 However, urgent investigation of all transient ischemic attacks would require substantial resources. Three studies have attempted to develop clinical decision rules (i.e., scores) for assessing whether a patient with transient ischemic attack is at high risk of stroke.9,12,13 Combined, these studies led to the development of the ABCD2 (Age, Blood pressure, Clinical features, Duration of symptoms and Diabetes) score. However, despite its widespread implementation, the ABCD2 score has not yet been prospectively validated.12,1418 This essential step in the development of rules for making clinical predictions has recently been requested.14,1921The objective of this study was to externally validate the ABCD2 score as a tool for identifying patients seen in the emergency department with transient ischemic attack who are at high risk of stroke within 7 (primary outcome) and 90 days (one of the secondary outcomes).  相似文献   

18.

Background:

Falls cause more than 60% of head injuries in older adults. Lack of objective evidence on the circumstances of these events is a barrier to prevention. We analyzed video footage to determine the frequency of and risk factors for head impact during falls in older adults in 2 long-term care facilities.

Methods:

Over 39 months, we captured on video 227 falls involving 133 residents. We used a validated questionnaire to analyze the mechanisms of each fall. We then examined whether the probability for head impact was associated with upper-limb protective responses (hand impact) and fall direction.
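As a simple illustration of how an odds ratio for head impact by fall direction is obtained, the sketch below works through an unadjusted 2 × 2 table; the counts are invented, and the published estimates were derived from models that also accounted for repeated falls by the same resident.

    import math

    # Hypothetical counts (not the study data): head impact by initial fall direction
    a, b = 30, 50   # forward falls: with / without head impact
    c, d = 15, 65   # backward falls: with / without head impact

    odds_ratio = (a * d) / (b * c)                    # 2.6: higher odds of head impact in forward falls
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    ci = (math.exp(math.log(odds_ratio) - 1.96 * se_log_or),
          math.exp(math.log(odds_ratio) + 1.96 * se_log_or))
    print(round(odds_ratio, 1), tuple(round(x, 1) for x in ci))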

Results:

Head impact occurred in 37% of falls, usually onto a vinyl or linoleum floor. Hand impact occurred in 74% of falls but had no significant effect on the probability of head impact (p = 0.3). An increased probability of head impact was associated with a forward initial fall direction, compared with backward falls (odds ratio [OR] 2.7, 95% confidence interval [CI] 1.3–5.9) or sideways falls (OR 2.8, 95% CI 1.2–6.3). In 36% of sideways falls, residents rotated to land backwards, which reduced the probability of head impact (OR 0.2, 95% CI 0.04–0.8).

Interpretation:

Head impact was common in observed falls in older adults living in long-term care facilities, particularly in forward falls. Backward rotation during descent appeared to be protective, but hand impact was not. Attention to upper-limb strength and teaching rotational falling techniques (as in martial arts training) may reduce fall-related head injuries in older adults.Falls from standing height or lower are the cause of more than 60% of hospital admissions for traumatic brain injury in adults older than 65 years.15 Traumatic brain injury accounts for 32% of hospital admissions and more than 50% of deaths from falls in older adults.1,68 Furthermore, the incidence and age-adjusted rate of fall-related traumatic brain injury is increasing,1,9 especially among people older than 80 years, among whom rates have increased threefold over the past 30 years.10 One-quarter of fall-related traumatic brain injuries in older adults occur in long-term care facilities.1The development of improved strategies to prevent fall-related traumatic brain injuries is an important but challenging task. About 60% of residents in long-term care facilities fall at least once per year,11 and falls result from complex interactions of physiologic, environmental and situational factors.1216 Any fall from standing height has sufficient energy to cause brain injury if direct impact occurs between the head and a rigid floor surface.1719 Improved understanding is needed of the factors that separate falls that result in head impact and injury from those that do not.1,10 Falls in young adults rarely result in head impact, owing to protective responses such as use of the upper limbs to stop the fall, trunk flexion and rotation during descent.2023 We have limited evidence of the efficacy of protective responses to falls among older adults.In the current study, we analyzed video footage of real-life falls among older adults to estimate the prevalence of head impact from falls, and to examine the association between head impact, and biomechanical and situational factors.  相似文献   

19.
Background:Rates of imaging for low-back pain are high and are associated with increased health care costs and radiation exposure as well as potentially poorer patient outcomes. We conducted a systematic review to investigate the effectiveness of interventions aimed at reducing the use of imaging for low-back pain.Methods:We searched MEDLINE, Embase, CINAHL and the Cochrane Central Register of Controlled Trials from the earliest records to June 23, 2014. We included randomized controlled trials, controlled clinical trials and interrupted time series studies that assessed interventions designed to reduce the use of imaging in any clinical setting, including primary, emergency and specialist care. Two independent reviewers extracted data and assessed risk of bias. We used raw data on imaging rates to calculate summary statistics. Study heterogeneity prevented meta-analysis.Results:A total of 8500 records were identified through the literature search. Of the 54 potentially eligible studies reviewed in full, 7 were included in our review. Clinical decision support involving a modified referral form in a hospital setting reduced imaging by 36.8% (95% confidence interval [CI] 33.2% to 40.5%). Targeted reminders to primary care physicians of appropriate indications for imaging reduced referrals for imaging by 22.5% (95% CI 8.4% to 36.8%). Interventions that used practitioner audits and feedback, practitioner education or guideline dissemination did not significantly reduce imaging rates. Lack of power within some of the included studies resulted in lack of statistical significance despite potentially clinically important effects.Interpretation:Clinical decision support in a hospital setting and targeted reminders to primary care doctors were effective interventions in reducing the use of imaging for low-back pain. These are potentially low-cost interventions that would substantially decrease medical expenditures associated with the management of low-back pain.Current evidence-based clinical practice guidelines recommend against the routine use of imaging in patients presenting with low-back pain.13 Despite this, imaging rates remain high,4,5 which indicates poor concordance with these guidelines.6,7Unnecessary imaging for low-back pain has been associated with poorer patient outcomes, increased radiation exposure and higher health care costs.8 No short- or long-term clinical benefits have been shown with routine imaging of the low back, and the diagnostic value of incidental imaging findings remains uncertain.912 A 2008 systematic review found that imaging accounted for 7% of direct costs associated with low-back pain, which in 1998 translated to more than US$6 billion in the United States and £114 million in the United Kingdom.13 Current costs are likely to be substantially higher, with an estimated 65% increase in spine-related expenditures between 1997 and 2005.14Various interventions have been tried for reducing imaging rates among people with low-back pain. These include strategies targeted at the practitioner such as guideline dissemination,1517 education workshops,18,19 audit and feedback of imaging use,7,20,21 ongoing reminders7 and clinical decision support.2224 It is unclear which, if any, of these strategies are effective.25 We conducted a systematic review to investigate the effectiveness of interventions designed to reduce imaging rates for the management of low-back pain.  相似文献   
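As an illustration of the kind of summary statistic described in the Methods above (an absolute reduction in imaging proportion with a 95% confidence interval), the sketch below applies a normal-approximation interval to hypothetical before/after counts; the counts are invented to roughly match the 36.8% reduction quoted above, and the review's exact calculation method may differ.

    import math

    # Hypothetical counts for a clinical decision support intervention (not the trial's raw data)
    imaged_before, total_before = 480, 1000   # 48.0% imaged before the intervention
    imaged_after, total_after = 112, 1000     # 11.2% imaged after the intervention

    p1, p2 = imaged_before / total_before, imaged_after / total_after
    reduction = p1 - p2                                           # absolute reduction in imaging
    se = math.sqrt(p1 * (1 - p1) / total_before + p2 * (1 - p2) / total_after)
    ci = (reduction - 1.96 * se, reduction + 1.96 * se)
    print(round(reduction * 100, 1), [round(x * 100, 1) for x in ci])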

20.
Accumulation of amyloid β (Aβ) oligomers in the brain is toxic to synapses and may play an important role in memory loss in Alzheimer disease. However, how these toxins are built up in the brain is not understood. In this study we investigate whether impairments of insulin and insulin-like growth factor-1 (IGF-1) receptors play a role in aggregation of Aβ. Using primary neuronal culture and immortal cell line models, we show that expression of normal insulin or IGF-1 receptors confers cells with abilities to reduce exogenously applied Aβ oligomers (also known as ADDLs) to monomers. In contrast, transfection of malfunctioning human insulin receptor mutants, identified originally from patient with insulin resistance syndrome, or inhibition of insulin and IGF-1 receptors via pharmacological reagents increases ADDL levels by exacerbating their aggregation. In healthy cells, activation of insulin and IGF-1 receptor reduces the extracellular ADDLs applied to cells via seemingly the insulin-degrading enzyme activity. Although insulin triggers ADDL internalization, IGF-1 appears to keep ADDLs on the cell surface. Nevertheless, both insulin and IGF-1 reduce ADDL binding, protect synapses from ADDL synaptotoxic effects, and prevent the ADDL-induced surface insulin receptor loss. Our results suggest that dysfunctions of brain insulin and IGF-1 receptors contribute to Aβ aggregation and subsequent synaptic loss.Abnormal protein misfolding and aggregation are common features in neurodegenerative diseases such as Alzheimer (AD),2 Parkinson, Huntington, and prion diseases (13). In the AD brain, intracellular accumulation of hyperphosphorylated Tau aggregates and extracellular amyloid deposits comprise the two major pathological hallmarks of the disease (1, 4). Aβ aggregation has been shown to initiate from Aβ1–42, a peptide normally cleaved from the amyloid precursor protein (APP) via activities of α- and γ-secretases (5, 6). A large body of evidence in the past decade has indicated that accumulated soluble oligomers of Aβ1–42, likely the earliest or intermediate forms of Aβ deposition, are potently toxic to neurons. The toxic effects of Aβ oligomers include synaptic structural deterioration (7, 8) and functional deficits such as inhibition of synaptic transmission (9) and synaptic plasticity (1013), as well as memory loss (11, 14, 15). Accumulation of high levels of these oligomers may also trigger inflammatory processes and oxidative stress in the brain probably due to activation of astrocytes and microglia (16, 17). Thus, to understand how a physiologically produced peptide becomes a misfolded toxin has been one of the key issues in uncovering the molecular pathogenesis of the disease.Aβ accumulation and aggregation could derive from overproduction or impaired clearance. Mutations of APP or presenilins 1 and 2, for example, are shown to cause overproduction of Aβ1–42 and amyloid deposits in the brain of early onset AD (18, 19). Because early onset AD accounts for less than 5% of entire AD population, APP and presenilin mutations cannot represent a universal mechanism for accumulation/aggregation of Aβ in the majority of AD cases. 
With respect to clearance, Aβ is normally removed by both global and local mechanisms, with the former requiring vascular transport across the blood-brain barrier (20, 21) and the latter via local enzymatic digestions by several metalloproteases, including neprilysin, insulin-degrading enzyme (IDE), and endothelin converting enzymes 1 and 2 (2224).The fact that insulin is a common substrate for most of the identified Aβ-degrading enzymes has drawn attention of investigators to roles of insulin signaling in Aβ clearance. Increases in insulin levels frequently seen in insulin resistance may compete for these enzymes and thus contribute to Aβ accumulation. Indeed, insulin signaling has been shown to regulate expression of metalloproteases such as IDE (25, 26), and influence aspects of Aβ metabolism and catabolism (27). In the endothelium of the brain-blood barrier and glial cells, insulin signaling is reported to regulate protein-protein interactions in an uptake cascade involving low density lipoprotein receptor-related protein and its ligands ApoE and α2-macroglobulin, a system known to bind and clear Aβ via endocytosis and/or vascular transport (28, 29). Similarly, circulating IGF-1 has been reported to play a role in Aβ clearance probably via facilitating brain-blood barrier transportation (30, 31).In the brain, insulin signaling plays a role in learning and memory (3234), potentially linking insulin resistance to AD dementia. Recently we and others have shown that Aβ oligomers interact with neuronal insulin receptors to cause impairments of the receptor expression and function (3537). These impairments mimic the Aβ oligomer-induced synaptic long term potentiation inhibition and can be overcome by insulin treatment (35, 38). Consistently, impairments of both IR and IGF-1R have been reported in the AD brain (3941).Based on these results, we ask whether impairment of insulin and IGF-1 signaling contribute to Aβ oligomer build-up in brain cells. To address this question, we set out to test roles of IR and IGF-1R in cellular clearance and transport of Aβ oligomers (ADDLs) applied to primary neuronal cultures and cell lines overexpressing IR and IGF-1R. Our results show that insulin and IGF-1 receptors function to reduce Aβ oligomers to monomers, and prevent Aβ oligomer-induced synaptic toxicity both at the level of synapse composition and structure. By contrast, receptor impairments resulting from “kinase-dead” insulin receptor mutations, a tyrosine kinase inhibitor of the insulin and IGF-1 receptor, or an inhibitory IGF-1 receptor antibody increase ADDL aggregation in the extracellular medium. Our results provide cellular evidence linking insulin and IGF-1 signaling to amyloidogenesis.  相似文献   
