Similar articles (20 results)
1.
The public view of life-extension technologies is more nuanced than expected, and researchers must engage in discussions if they hope to promote awareness and acceptance.

There is increasing research and commercial interest in the development of novel interventions that might be able to extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for their portfolio of drugs targeting 'diseases of ageing'. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be "the cure for all that ails" (Hasty, 2009), or that it is an "anti-aging drug [that] could be used today" (Blagosklonny, 2007).

Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether they would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions have given some rather surprising results, contrary to the expectations of many researchers in the field. They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.

Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge et al, 2009a, b, 2010; Underwood et al, 2009).

In a community survey of public attitudes towards HLE, we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill and around one in ten were undecided.

Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an "irrational public predisposition" to think that increased lifespans will only lead to elongation of infirmity. He has called this "gerontologiphobia"—a shared feeling among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a "public menace" (Miller, 2002).

We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual's healthy lifespan. Among the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illnesses and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).

It would be unwise to label such concerns as irrational when it might be that advocates of HLE have simply failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public see the claims that have been made about HLE as 'too good to be true'.

Results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as "far-fetched", or feel no responsibility "to respond" (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public are sceptical of their claims.

Scientists are not always clear about the outcomes of their work, biogerontologists included. Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is "sparse" and urges more research on the topic. In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.

The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused when they discovered that members of the Caloric Restriction Society experienced a loss of libido and loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many people, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010). If researchers do not discuss these possible effects, then a curious public might draw their own conclusions.

Some HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects. For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is "a non-toxic, well tolerated drug that is suitable for everyday oral administration", with its major "side-effects" being anti-tumour, bone-protecting and caloric-restriction-mimicking effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.

Aubrey de Grey has called on several occasions for scientists to provide more optimistic timescales for HLE. He claims that public opposition to interventions in ageing is based on "extraordinarily transparently flawed opinions" that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and concerns about extending infirmity, injustice or social harms are simply excuses to justify people's belief that ageing is 'not so bad' (de Grey, 2007). He argues that this "pro-ageing trance" can only be broken by persuading the public that HLE technologies are just around the corner.

Contrary to de Grey's expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010).

Juengst et al (2003) have rightly pointed out that any interventions that slow ageing and substantially increase human longevity might generate more social, economic, political, legal, ethical and public health issues than any other technological advance in biomedicine. Our survey supports this idea; the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007).

When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges that arise from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause "profound changes" and a "firestorm of controversy". Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause "mayhem" and "absolute pandemonium". If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as "irrational", "inane" or "breathtakingly stupid" (de Grey, 2004).

The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and the management of its outcomes (Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work. If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to explain their research to the public better and discuss how these concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies. The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that "listening to public concerns on research and responding appropriately" is a more effective way of fostering support than arrogant dismissal of public concerns (Anon, 2009).

Brad Partridge, Jayne Lucke and Wayne Hall

2.
Martinson BC. EMBO reports 2011, 12(8): 758–762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic 'reproduction rates' that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced, or concerns about unintended consequences that may result from such overproduction. Yet recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level competition becomes counter-productive.

Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.

The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA as it is currently configured supports a feedback system of institutional incentives that generate excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training. However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.

The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to 'play it safe' and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are likely able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding. In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.

My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that "missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures" (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.

In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: "there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics." Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.

Early last year, Francis Collins, Director of the NIH, published a PolicyForum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular. He notes the increasing stress in the biomedical research community after the end of the NIH "budget doubling" between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that "…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity."

Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows. He observes that "only a select few will go on to become independent research scientists in academia", and argues that "assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor" (Alberts, 2009).

His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research "forces them to avoid risk-taking and innovation". The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values "research projects that are almost certain to 'work'". He continues, "the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health."

Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce.

Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes. They would also ignore the fact that universities are generally not confronted with the externalities resulting from overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH 'extramural' funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on the interconnected assumptions that, as one of the primary 'outputs' or 'products' of the university, more doctorally trained individuals are always better than fewer, and because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research.

However, this model has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007). Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).

In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). He noted the reliance of AMCs on the relatively low-cost, but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves: "These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants." Pointing to "more than 10 years of severe supply/demand imbalance in NIH funds", Korn concluded that, "considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society's investment in biomedical research and clinical care was continuously and sharply expanding." From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J. de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963). The short calculation at the end of this article makes the arithmetic of these two growth rates concrete.

In May 2009, echoing some of Korn's observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a "de-coupling" of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that "although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional 'birth control'" (Mervis, 2009).

Although the issue of what I will call the 'academic birth rate' is the central concern of this analysis, the 'academic end-of-life' also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: "if I and other old birds continue to land the grants, the [young scientists] are not going to get them." He worries that the budget will not be able to support "the 100 people I've trained […] to replace me" (Kaiser, 2008). While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective. The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.

Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d'être.

The production of knowledge in science, particularly of the 'revolutionary' variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that "revolutionary science is a high risk and long-term endeavour which usually fails" (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task. In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).

Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the "New Innovator Award", the "Pioneer Award" and the "Transformational R01 Awards". The proportion of NIH funding dedicated to these awards, however, amounts to "only 0.27% of the NIH budget" (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science. Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new 'innovators' reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms.

Our studies on research integrity have been mostly oriented towards understanding how the influences within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010). My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness, in terms of scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology with steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of 'normal' science? Would this not reduce the effectiveness of the institution of biomedical research? I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research is one way to insure against it.
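A coda on the arithmetic behind Korn's "inflationary environment" and de Solla Price's plateau: the two inflation-adjusted funding growth rates quoted above (Dorsey et al, 2010) imply very different doubling times. The sketch below is plain compound-interest arithmetic; the function names are ours, for illustration only.

```python
import math

def cumulative_growth(rate, years):
    """Multiplier after compounding an annual growth rate for `years` years."""
    return (1 + rate) ** years

def doubling_time(rate):
    """Years needed for a quantity growing at `rate` per year to double."""
    return math.log(2) / math.log(1 + rate)

# Inflation-adjusted growth of total US biomedical research funding,
# using the rates quoted above from Dorsey et al (2010).
print(f"1994-2003: x{cumulative_growth(0.078, 9):.2f} over 9 years, "
      f"doubling time {doubling_time(0.078):.1f} years")
print(f"2003-2007: x{cumulative_growth(0.034, 4):.2f} over 4 years, "
      f"doubling time {doubling_time(0.034):.1f} years")
```

At 7.8% real annual growth the funding base doubles roughly every nine years, which is about what the 1994–2003 window delivered; at 3.4% a doubling takes about 21 years, so a training pipeline calibrated to the first regime cannot be sustained by the second.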

3.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.
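The richness and diversity comparisons in these results are standard ecological summaries computed from sequence-derived taxon counts. A minimal sketch of the two quantities follows; the genus-level counts are invented for illustration and are not CHILD data, and production analyses typically rarefy read counts and use dedicated tools such as QIIME or phyloseq.

```python
import math

# Hypothetical genus-level read counts for one infant fecal sample
# (illustrative values only, not data from the CHILD cohort).
sample = {
    "Bifidobacterium": 5200,
    "Bacteroides": 310,
    "Escherichia-Shigella": 120,
    "Clostridium": 85,
    "Veillonella": 40,
}

def richness(counts):
    """Observed richness: number of taxa with at least one read."""
    return sum(1 for n in counts.values() if n > 0)

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over taxon proportions."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total)
                for n in counts.values() if n > 0)

print(f"Richness: {richness(sample)}")
print(f"Shannon diversity: {shannon_diversity(sample):.3f}")
```

A profile dominated by a single genus, as in this example, yields a low Shannon value even though several genera are present, which is one reason richness and diversity can point in different directions.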

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.

The human body harbours trillions of microbes, known collectively as the "human microbiome." By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3

Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a "healthy" gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed at maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11

The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14

Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the "normal" adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies. Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19

Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.

4.
Bornmann L. EMBO reports 2012, 13(8): 673–676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the Endless Frontier, and it has become the underlying rationale for public support and funding of science.

However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned both the development of measures to assess the quality of scientific research itself, and of measures to determine the societal impact of research. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research. The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so, harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex's Science and Technology Policy Research centre have pointed out: "research that is highly cited or published in top journals may be good for the academic discipline but not for society" [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had "no measurable impact on health" [3]. He contrasts this with, for example, research into "the cost effectiveness of different incontinence pads", which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.

The problem actually begins with defining the 'societal impact of research'. A series of different concepts has been introduced: 'third-stream activities' [4], 'societal benefits' or 'societal quality' [5], 'usefulness' [6], 'public values' [7], 'knowledge transfer' [8] and 'societal relevance' [9,10]. Yet each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.

In this context, 'societal benefits' refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. 'Cultural benefits' are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. 'Environmental benefits' add to the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, 'economic benefits' increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].

Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that "it is not clear how to evaluate societal quality, especially for basic and strategic research" [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14]. For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, "systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature" [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.

A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse or complex and contingent, and it is not clear what should be attributed to research or to other inputs. The third is the internationality problem that arises as a result of the international nature of R&D and innovation, which makes attribution virtually impossible. Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impact.

In addition, there are four other problems. First, it is hard to find experts to assess societal impact that is based on peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, "[s]cientists generally dislike impacts considerations" and evaluating research in terms of its societal impact "takes scientists beyond the bounds of their disciplinary expertise" [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4,17]. Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution's specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that "environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature" [18].

Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 "to support the desire of modern research policy for promoting problem-solving research" [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20]. The recommendation from this consultation is that impact should be measured in a quantifiable way, and expert panels should review narrative evidence in case studies supported by appropriate indicators [19,21].

Many of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact. The HEFCE stipulates that "case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe" [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be "the 'state of the art' [for providing] the necessary evidence-base for increased financial support of university research across all fields" [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is "a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers" [19]. 'Mode 1' describes research governed by the academic interests of a specific community, whereas 'Mode 2' is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is being conducted in the context of application [19].

The new REF will also entail changes in budget allocations: in the evaluation of a research unit for the purpose of allocations, the societal-impact dimension will determine 20% of the overall assessment [19]. The final REF guidance contains lists of examples for different types of societal impact [24].

Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and "[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse" [18].

Yet the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science whereby scientists receive credits—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact. "The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously 'under the radar', that is, researchers have been involved in activities they realised now can be characterized as productive interactions" [26] between them and societal stakeholders. It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, "the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society" [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19,28,29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance. Rymer has noted that "just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made" [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support for basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.

5.
6.

Background:

There have been postmarketing reports of adverse cardiovascular events associated with the use of varenicline, a widely used smoking cessation drug. We conducted a systematic review and meta-analysis of randomized controlled trials to ascertain the serious adverse cardiovascular effects of varenicline compared with placebo among tobacco users.

Methods:

We searched MEDLINE, EMBASE, the Cochrane Database of Systematic Reviews, websites of regulatory authorities and registries of clinical trials, with no date or language restrictions, through September 2010 (updated March 2011) for published and unpublished studies. We selected double-blind randomized controlled trials of at least one week's duration involving smokers or people who used smokeless tobacco that reported on cardiovascular events (ischemia, arrhythmia, congestive heart failure, sudden death or cardiovascular-related death) as serious adverse events associated with the use of varenicline.

Results:

We analyzed data from 14 double-blind randomized controlled trials involving 8216 participants. The trials ranged in duration from 7 to 52 weeks. Varenicline was associated with a significantly increased risk of serious adverse cardiovascular events compared with placebo (1.06% [52/4908] in varenicline group v. 0.82% [27/3308] in placebo group; Peto odds ratio [OR] 1.72, 95% confidence interval [CI] 1.09–2.71; I² = 0%). The results of various sensitivity analyses were consistent with those of the main analysis, and a funnel plot showed no publication bias. There were too few deaths to allow meaningful comparisons of mortality.
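For readers unfamiliar with the Peto method used here, the sketch below shows the one-step odds-ratio formula for a single 2×2 table (the function name is ours). One caveat: the published 1.72 is obtained by summing the (O − E) and V statistics across the 14 trials before exponentiating, so feeding the pooled totals into a single table, as the usage line does, illustrates the arithmetic but intentionally does not reproduce the trial-level pooled estimate.

```python
import math

def peto_or(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    """One-step Peto odds ratio for a single 2x2 table.

    In a meta-analysis, (O - E) and V are computed per trial and summed
    before exponentiating; pooling raw counts first, as in the usage
    example below, is for illustration only.
    """
    n_total = n_trt + n_ctl
    events = events_trt + events_ctl
    expected = events * n_trt / n_total              # E under the null
    variance = (n_trt * n_ctl * events * (n_total - events)
                / (n_total ** 2 * (n_total - 1)))    # hypergeometric V
    log_or = (events_trt - expected) / variance
    half_width = z / math.sqrt(variance)
    return (math.exp(log_or),
            math.exp(log_or - half_width),
            math.exp(log_or + half_width))

# Pooled counts reported above: 52/4908 varenicline vs. 27/3308 placebo.
or_, lo, hi = peto_or(52, 4908, 27, 3308)
print(f"Peto OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Applied trial by trial, with the (O − E) and V terms summed before exponentiating, the same routine yields the fixed-effect pooled estimate reported in the results.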

Interpretation:

Our meta-analysis raises safety concerns about the potential for an increased risk of serious adverse cardiovascular events associated with the use of varenicline among tobacco users.

Varenicline is one of the most widely used drugs for smoking cessation. It is a partial agonist at the α4–β2 nicotinic acetylcholine receptors and a full agonist at the α7 nicotinic acetylcholine receptor.1,2 The drug modulates parasympathetic output from the brainstem to the heart because of activities of the α7 receptor.3 Acute nicotine administration can induce thrombosis.4 Possible mechanisms by which varenicline may be associated with cardiovascular disease include the action of varenicline at the α7 receptor in the brainstem or, similar to nicotine, a prothrombotic effect.2–4

At the time of its priority safety review of varenicline in 2006, the US Food and Drug Administration (FDA) noted that "[t]he serious adverse event data suggest that varenicline may possibly increase the risk of cardiac events, both ischemic and arrhythmic, particularly over longer treatment period."5 Subsequently, the product label was updated: "Post marketing reports of myocardial infarction and cerebrovascular accidents including ischemic and hemorrhagic events have been reported in patients taking Chantix."6 There are published reports of cardiac arrest associated with varenicline.7

Cardiovascular disease is an important cause of morbidity and mortality among tobacco users. The long-term cardiovascular benefits of smoking cessation are well established.8 Although one statistically underpowered trial reported a trend toward excess cardiovascular events associated with the use of varenicline,9 a systematic review of information on the cardiovascular effects of varenicline is unavailable to clinicians.

We conducted a systematic review and meta-analysis of randomized controlled trials (RCTs) to ascertain the serious adverse cardiovascular effects of varenicline compared with placebo among tobacco users.

7.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis and several dozen died after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

…many patients cannot trust their drugs to be effective or even safe

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9].

Developing countries are particularly susceptible to substandard and fake medicine. Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). “People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system,” explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected combined with relatively low penalties has turned falsifying medicine into the “perfect crime” [2].

Figure 1. Women sell smuggled, counterfeit medicine on the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the past two years in Ivory Coast. Issouf Sanogo/AFP Photo/Getty Images.

There are two main categories of illegitimate drugs. ‘Substandard’ medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified’ medicine is made with clear criminal intent.
It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses it into pills [10]. Whilst falsified medicines do not typically contain any active ingredients, substandard medicine might contain subtherapeutic amounts. This is particularly problematic when it comes to anti-infective drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13], and increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].

Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insurance

Even if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere, where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are interested in good-quality medicines, the different parties seem to have difficulties coming to terms with how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting quality of medicine [14,16].

The concern that intellectual property (IP) interests threaten public health dates back to the ‘Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement’ of the World Trade Organization (WTO), adopted in 1994 to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine. Although it includes flexibilities, such as the possibility for governments to grant compulsory licenses to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].

In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO’s Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multi-national pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. “We’re left with decisions being taken based on patents and trademarks that should be taken based on health,” commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA.
“The health community is shooting themselves in the foot.”

The conflation of health-care and IP issues is reflected in the unclear use of the term ‘counterfeit’ [2,14]. “Since the 1990s the World Health Organization (WHO) has used the term ‘counterfeit’ in the sense we now use ‘falsified’,” explained Hogerzeil. “The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit’ got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit’—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.

The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn’t make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to react is tragically ironic, as this stance hampers the growth of its own generic companies like Ranbaxy, Cipla or Piramal. “I certainly don’t believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because their products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company’s reputation and have a negative impact on its revenues when customers stop buying the product.

The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine. It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multi-national drug companies and the possibility that issues of medicine quality were being conflated with attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010. For example, it no longer hosts IMPACT’s secretariat at its headquarters in Geneva [2].

‘Substandard’ medicines might result from poor quality ingredients, production errors and incorrect storage. ‘Falsified’ medicine is made with clear criminal intent

In 2010, the WHO’s member states established a working group to further investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper’s authors demand more action and propose a binding legal framework: a treaty.
“Until we have stronger public health law, I don’t think that we are going to resolve this problem,” Bate, who is one of the authors of the paper, said.

Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on understanding the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice such as a “voluntary soft law” that countries can sign to express their will to do better. “At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic,” Hogerzeil, who is also on the IOM committee, commented. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: “The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don’t start negotiating one.”

Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempts to safeguard medicines need to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act. To counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies do better and by strengthening the quality control exercised by drug regulatory authorities.

Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multi-national companies benefit from economies of scale to cope with these problems. But smaller companies often struggle and compromise on quality [2,21]. “India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small,” Hogerzeil explained. “And only by being strict, can you force them to combine and to become bigger industries that can afford good-quality assurance systems.” Clamping down on poor drug quality will therefore lead to a consolidation of the industry, which is an essential step. “If you look at Europe and the US, there were hundreds of drug companies—now there are dozens. And if you look at the situation in India and China today, there are thousands and that will have to come down to dozens as well,” Bate explained.

…innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting […] medicine

In addition to consolidating the market by applying stricter rules, the IOM has also suggested measures for supporting companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards.
Another suggestion is to harmonize market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.

Once the medicine leaves the manufacturer, controlling distribution systems becomes another major challenge in combating falsified and substandard medicine. Global drug supply chains have grown increasingly complicated; drugs cross borders, are sold back and forth between wholesalers and distributors, and are often repackaged. Still, there is a main difference between developing and developed countries. In the latter, relatively few companies dominate the market, whereas in poorer nations the distribution system is often fragmented and uncontrolled, with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improve drug quality. “And we can start in the US,” Hogerzeil commented.

…India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products

Distribution could be improved at different levels, starting with the import of medicine. “There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in,” Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. “Everybody dealing with medicine has to be licensed,” Hogerzeil said. “And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent falsified medicines from entering the legal supply chain.” The last level would be a track-and-trace system to identify authentic drugs [2]. Every single package of medicine should be identifiable through an individual marker, such as a 3D bar code. Once it is sold, it is ticked off in a central database, so the marker cannot be reused (a toy sketch of this single-use scheme appears at the end of this article).

According to Hogerzeil, equivalent measures at these different levels should be established in every country. “I don’t believe in double standards”, he said. “Don’t say to Uganda: ‘you can’t do that’. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation.”

Nigeria, for instance, has demonstrated that with enough political will, it is possible to reduce the proliferation of falsified and substandard medicine. Nigeria had been a major source for falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili’s successor, is committed to continuing her work [10].
Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized event, the former head of China’s State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China’s fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see quality of medicine as a priority. But they should, and affluent countries should help. Not only because health is a human right, but also for economic reasons. A great deal of time and money is invested in testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients. Falsified and substandard medicines are a financial burden to health systems, and the emergence of drug-resistant pathogens might make invaluable medications useless. Investing in the safety of medicine is therefore a humane and an economic imperative.  相似文献
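To make the track-and-trace idea raised earlier concrete, here is a toy sketch of a single-use marker registry; the serial numbers, statuses and messages are invented for illustration and do not correspond to any deployed system.

```python
# Toy registry of pack serials: dispensing "ticks off" a serial so that the
# same marker cannot be verified twice. Serials and statuses are invented.
registry = {"PK-0001": "issued", "PK-0002": "issued"}

def dispense(serial: str) -> str:
    status = registry.get(serial)
    if status is None:
        return "unknown serial: possible falsified product"
    if status == "dispensed":
        return "serial already used: possible cloned pack"
    registry[serial] = "dispensed"   # single-use: mark the pack as sold
    return "verified and marked as dispensed"

print(dispense("PK-0001"))  # verified and marked as dispensed
print(dispense("PK-0001"))  # serial already used: possible cloned pack
print(dispense("PK-9999"))  # unknown serial: possible falsified product
```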

8.
Schultz AS  Finegan B  Nykiforuk CI  Kvern MA 《CMAJ》2011,183(18):E1334-E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such policies at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.

Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.2–5 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to secondhand smoke.2,4

Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.6–11 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.6–11 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targeting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.14–16 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal,17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19

Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22

We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.  相似文献

9.

Background:

Optimization of systolic blood pressure and lipid levels is essential for secondary prevention after ischemic stroke, but there are substantial gaps in care, which could be addressed by nurse- or pharmacist-led care. We compared 2 types of case management (active prescribing by pharmacists or nurse-led screening and feedback to primary care physicians) in addition to usual care.

Methods:

We performed a prospective randomized controlled trial involving adults with recent minor ischemic stroke or transient ischemic attack whose systolic blood pressure or lipid levels were above guideline targets. Participants in both groups had a monthly visit for 6 months with either a nurse or pharmacist. Nurses measured cardiovascular risk factors, counselled patients and faxed results to primary care physicians (active control). Pharmacists did all of the above and also prescribed according to treatment algorithms (intervention).

Results:

Most of the 279 study participants (mean age 67.6 yr, mean systolic blood pressure 134 mm Hg, mean low-density lipoprotein [LDL] cholesterol 3.23 mmol/L) were already receiving treatment at baseline (antihypertensives: 78.1%; statins: 84.6%), but none met both guideline targets (systolic blood pressure ≤ 140 mm Hg, fasting LDL cholesterol ≤ 2.0 mmol/L). Substantial improvements were observed in both groups after 6 months: 43.4% of participants in the pharmacist case manager group met both systolic blood pressure and LDL guideline targets compared with 30.9% in the nurse-led group (12.5% absolute difference; number needed to treat = 8, p = 0.03).

Interpretation:

Compared with nurse-led case management (risk factor evaluation, counselling and feedback to primary care providers), active case management by pharmacists substantially improved risk factor control at 6 months among patients who had experienced a stroke. Trial registration: ClinicalTrials.gov, no. NCT00931788

The risk of cardiovascular events is high for patients who survive a stroke or transient ischemic attack.1,2 Treatment of hypertension and dyslipidemia can substantially reduce this risk.3–7 However, vascular risk factors are often suboptimally managed after stroke or transient ischemic attack, even among patients admitted to hospital or seen in specialized stroke prevention clinics.8–10

Multiple barriers are responsible for the suboptimal control of risk factors, and traditional means of educating practitioners and patients have limited effectiveness.11 Although it has been suggested that “case managers” may be able to improve the management of risk factors, evidence is sparse and inconsistent between studies.12–16 The most recent Cochrane review on this topic concluded that “nurse- or pharmacist-led care may be a promising way forward … but these interventions require further evaluation.”16 Thus, we designed this trial to evaluate whether a pharmacist case manager could improve risk factors among survivors of stroke or transient ischemic attack.17 Because we have previously shown that hypertension control can be improved by monthly evaluation by nurses (with patient counselling and faxing of blood pressure measurements with guideline recommendations to primary care physicians),18 and this is an alternate method of case management implemented in many health organizations, we used this approach as the active control group for this study. Thus, our study represents a controlled comparison of 2 modes of case management: active prescribing (pharmacist-led case management) versus screening and delegating to primary care physicians (nurse-led case management).  相似文献
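The number needed to treat quoted above follows directly from the two response rates: it is the reciprocal of the absolute difference between them. A one-line check using only the percentages in the abstract:

```python
# Reproducing the reported effect size from the abstract's percentages.
p_pharmacist = 0.434   # met both targets with pharmacist case management
p_nurse = 0.309        # met both targets with nurse-led case management

arr = p_pharmacist - p_nurse   # absolute difference between groups
nnt = 1 / arr                  # number needed to treat
print(f"ARR = {arr:.3f}, NNT = {nnt:.0f}")   # ARR = 0.125, NNT = 8
```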

10.
Gronich N  Lavi I  Rennert G 《CMAJ》2011,183(18):E1319-E1325

Background:

Combined oral contraceptives are a common method of contraception, but they carry a risk of venous and arterial thrombosis. We assessed whether use of drospirenone was associated with an increase in thrombotic risk relative to third-generation combined oral contraceptives.

Methods:

Using computerized records of the largest health care provider in Israel, we identified all women aged 12 to 50 years for whom combined oral contraceptives had been dispensed between Jan. 1, 2002, and Dec. 31, 2008. We followed the cohort until 2009. We used Poisson regression models to estimate the crude and adjusted rate ratios for risk factors for venous thrombotic events (specifically deep vein thrombosis and pulmonary embolism) and arterial thrombotic events (specifically transient ischemic attack and cerebrovascular accident). We performed multivariable analyses to compare types of contraceptives, with adjustment for the various risk factors.

Results:

We identified a total of 1017 (0.24%) venous and arterial thrombotic events among 431 223 use episodes during 819 749 woman-years of follow-up (6.33 venous events and 6.10 arterial events per 10 000 woman-years). In a multivariable model, use of drospirenone carried an increased risk of venous thrombotic events, relative to both third-generation combined oral contraceptives (rate ratio [RR] 1.43, 95% confidence interval [CI] 1.15–1.78) and second-generation combined oral contraceptives (RR 1.65, 95% CI 1.02–2.65). There was no increase in the risk of arterial thrombosis with drospirenone.

Interpretation:

Use of drospirenone-containing oral contraceptives was associated with an increased risk of deep vein thrombosis and pulmonary embolism, but not transient ischemic attack or cerebrovascular accident, relative to second- and third-generation combined oral contraceptives.

Oral hormonal therapy is the preferred method of contraception, especially among young women. In the United States in 2002, 12 million women were using “the pill.”1 In a survey of households in Great Britain conducted in 2005 and 2006, one-quarter of women aged 16 to 49 years were using this form of contraception.2 A large variety of combined oral contraceptive preparations are available, differing in terms of estrogen dose and in terms of the dose and type of the progestin component. Among preparations currently in use, the estrogen dose ranges from 15 to 35 μg, and the progestins are second-generation, third-generation or newer. The second-generation progestins (levonorgestrel and norgestrel), which are derivatives of testosterone, have differing degrees of androgenic and estrogenic activities. The structure of these agents was modified to reduce the androgenic activity, thus producing the third-generation progestins (desogestrel, gestodene and norgestimate). Newer progestins are chlormadinone acetate, a derivative of progesterone, and drospirenone, an analogue of the aldosterone antagonist spironolactone having antimineralocorticoid and antiandrogenic activities. Drospirenone is promoted as causing less weight gain and edema than other forms of oral contraceptives, but few well-designed studies have compared the minor adverse effects of these drugs.3

The use of oral contraceptives has been reported to confer an increased risk of venous and arterial thrombotic events,4–7 specifically an absolute risk of venous thrombosis of 6.29 per 10 000 woman-years, compared with 3.01 per 10 000 woman-years among nonusers.8 It has long been accepted that there is a dose–response relationship between estrogen and the risk of venous thrombotic events. Reducing the estrogen dose from 50 μg to 20–30 μg has reduced the risk.9 Studies published since the mid-1990s have suggested a greater risk of venous thrombotic events with third-generation oral contraceptives than with second-generation formulations,10–13 indicating that the risk is also progestin-dependent. The pathophysiological mechanism of the risk with different progestins is unknown. A twofold increase in the risk of arterial events (specifically ischemic stroke6,14 and myocardial infarction7) has been observed in case–control studies for users of second-generation pills and possibly also third-generation preparations.7,14

Conflicting information is available regarding the risk of venous and arterial thrombotic events associated with drospirenone. An increased risk of venous thromboembolism, relative to second-generation pills, has been reported recently,8,15,16 whereas two manufacturer-sponsored studies claimed no increase in risk.17,18 In the study reported here, we investigated the risk of venous and arterial thrombotic events among users of various oral contraceptives in a large population-based cohort.  相似文献
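The incidence figures above are person-time rates (events divided by woman-years of follow-up). In the sketch below, the venous and arterial counts are back-calculated approximately from the reported rates, since the abstract gives only the combined total of 1017 events:

```python
# Person-time incidence rates, as in the abstract. The venous/arterial split
# (519/500) is back-calculated from the quoted rates and is approximate.
woman_years = 819_749
events = {"venous": 519, "arterial": 500}

for label, k in events.items():
    rate = k / woman_years * 10_000
    print(f"{label}: {rate:.2f} events per 10,000 woman-years")
# venous: 6.33, arterial: 6.10 -- matching the reported rates
```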

11.

Background:

Acute kidney injury is a serious complication of elective major surgery. Acute dialysis is used to support life in the most severe cases. We examined whether rates and outcomes of acute dialysis after elective major surgery have changed over time.

Methods:

We used data from Ontario’s universal health care databases to study all consecutive patients who had elective major surgery at 118 hospitals between 1995 and 2009. Our primary outcomes were acute dialysis within 14 days of surgery, death within 90 days of surgery and chronic dialysis for patients who did not recover kidney function.

Results:

A total of 552 672 patients underwent elective major surgery during the study period, 2231 of whom received acute dialysis. The incidence of acute dialysis increased steadily from 0.2% in 1995 (95% confidence interval [CI] 0.15–0.2) to 0.6% in 2009 (95% CI 0.6–0.7). This increase was primarily in cardiac and vascular surgeries. Among patients who received acute dialysis, 937 died within 90 days of surgery (42.0%, 95% CI 40.0–44.1), with no change in 90-day survival over time. Among the 1294 patients who received acute dialysis and survived beyond 90 days, 352 required chronic dialysis (27.2%, 95% CI 24.8–29.7), with no change over time.

Interpretation:

The use of acute dialysis after cardiac and vascular surgery has increased substantially since 1995. Studies focusing on interventions to better prevent and treat perioperative acute kidney injury are needed.

More than 230 million elective major surgeries are done annually worldwide.1 Acute kidney injury is a serious complication of major surgery. It represents a sudden loss of kidney function that affects morbidity, mortality and health care costs.2 Dialysis is used for the most severe forms of acute kidney injury. In the nonsurgical setting, the incidence of acute dialysis has steadily increased over the last 15 years, and patients are now more likely to survive to discharge from hospital.3–5 Similarly, in the surgical setting, the incidence of acute dialysis appears to be increasing over time,6–10 with declining in-hospital mortality.8,10,11

Although previous studies have improved our understanding of the epidemiology of acute dialysis in the surgical setting, several questions remain. Many previous studies were conducted at a single centre, thereby limiting their generalizability.6,12–14 Most multicentre studies were conducted in the nonsurgical setting and used diagnostic codes for acute kidney injury not requiring dialysis; however, these codes can be inaccurate.15,16 In contrast, a procedure such as dialysis is easily determined. The incidence of acute dialysis after elective surgery is of particular interest given the need for surgical consent, the severe nature of the event and the potential for mitigation. The need for chronic dialysis among patients who do not recover renal function after surgery has been poorly studied, yet this condition has a major effect on patient survival and quality of life.17 For these reasons, we studied secular trends in acute dialysis after elective major surgery, focusing on incidence, 90-day mortality and need for chronic dialysis.  相似文献
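The 90-day mortality estimate above (937/2231; 42.0%, 95% CI 40.0–44.1) can be checked with standard binomial arithmetic. The sketch below uses a normal-approximation (Wald) interval; the published interval may have been computed with a slightly different method, hence the small discrepancy in the upper bound:

```python
import math

# 90-day mortality among patients who received acute dialysis.
deaths, n = 937, 2231
p = deaths / n
se = math.sqrt(p * (1 - p) / n)           # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se     # Wald 95% confidence interval
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
# 42.0% (95% CI 40.0-44.0); the abstract reports 40.0-44.1
```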

12.

Background:

Many people with depression experience repeated episodes. Previous research into the predictors of chronic depression has focused primarily on the clinical features of the disease; however, little is known about the broader spectrum of sociodemographic and health factors that contribute to its development. Our aim was to identify factors associated with a long-term negative prognosis of depression.

Methods:

We included 585 people aged 16 years and older who participated in the 2000/01 cycle of the National Population Health Survey and who reported experiencing a major depressive episode in 2000/01. The primary outcome was the course of depression until 2006/07. We grouped individuals into trajectories of depression using growth trajectory models. We included demographic, mental and physical health factors as predictors in the multivariable regression model to compare people with different trajectories.

Results:

Participants fell into two main depression trajectories: those whose depression resolved and did not recur (44.7%) and those who experienced repeated episodes (55.3%). In the multivariable model, daily smoking (OR 2.68, 95% CI 1.54–4.67), low mastery (i.e., feeling that life circumstances are beyond one’s control) (OR 1.10, 95% CI 1.03–1.18) and history of depression (OR 3.5, 95% CI 1.95–6.27) were significant predictors (p < 0.05) of repeated episodes of depression.

Interpretation:

People with major depression who were current smokers or had low levels of mastery were at an increased risk of repeated episodes of depression. Future studies are needed to confirm the predictive value of these variables and to evaluate their accuracy for diagnosis and as a guide to treatment.

Depression is a common and often recurrent disorder that compromises daily functioning and is associated with a decrease in quality of life.1–3 Guidelines for the treatment of depression, such as those published by the Canadian Network for Mood and Anxiety Treatments (CANMAT)5 and the National Institute for Health and Clinical Excellence (NICE) in the United Kingdom,4 often recommend antidepressant treatment in patients with severe symptoms and outline specific risk factors supporting long-term treatment maintenance.4,5 However, for patients who do not meet the criteria for treatment of depression, the damaging sequelae of depression are frequently compounded without treatment.5 In such cases, early treatment for depression may result in an improved long-term prognosis.6–8

A small but growing number of studies have begun to characterize the long-term course of depression in terms of severity,9 lifetime prevalence10 and patterns of recurrence.11 However, a recent systematic review of the risk factors of chronic depression highlighted a need for longitudinal studies to better identify prognostic factors.12 The capacity to distinguish long-term patterns of recurrence of depression in relation to the wide range of established clinical and nonclinical factors for depression could be highly beneficial. Our objective was to use a population-based cohort to identify and understand the baseline factors associated with a long-term negative prognosis of depression.  相似文献
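Adjusted odds ratios such as those reported above come from a multivariable logistic regression. The sketch below runs one on synthetic stand-in data; the variable names, coding and effect sizes are assumptions for illustration, not the survey's actual fields or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in data for n = 585 respondents (all values invented).
rng = np.random.default_rng(0)
n = 585
df = pd.DataFrame({
    "daily_smoker": rng.integers(0, 2, n),
    "mastery": rng.normal(20, 4, n),          # higher = more perceived control
    "prior_depression": rng.integers(0, 2, n),
})
# Simulate the outcome with assumed effects, then recover them by regression.
lin = 1.0 * df["daily_smoker"] - 0.1 * df["mastery"] + 1.25 * df["prior_depression"]
df["repeated_episodes"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = sm.Logit(df["repeated_episodes"],
                 sm.add_constant(df[["daily_smoker", "mastery", "prior_depression"]]))
res = model.fit(disp=False)
print(np.exp(res.params))       # adjusted odds ratios
print(np.exp(res.conf_int()))   # 95% CIs on the OR scale
```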

13.
The erythropoietin receptor (EpoR) was discovered and described in red blood cells (RBCs), where its activation stimulates their proliferation and survival. The target in humans for EpoR agonist drugs appears clear: to treat anemia. However, there is evidence of the pleiotropic actions of erythropoietin (Epo). For that reason, rhEpo therapy was suggested as a reliable approach for treating a broad range of pathologies, including heart and cardiovascular diseases, neurodegenerative disorders (Parkinson’s and Alzheimer’s disease), spinal cord injury, stroke, diabetic retinopathy and rare diseases (Friedreich ataxia). Unfortunately, the side effects of rhEpo are also evident. A new generation of nonhematopoietic EpoR agonist drugs (asialoEpo, Cepo and ARA 290) has been investigated and further developed. These EpoR agonists, which lack the erythropoietic activity of Epo while preserving its tissue-protective properties, are expected to provide better outcomes in ongoing clinical trials. Nonhematopoietic EpoR agonists represent safer and more effective surrogates for the treatment of several diseases such as brain and peripheral nerve injury, diabetic complications, renal ischemia, rare diseases, myocardial infarction, chronic heart disease and others.

The erythropoietin receptor (EpoR) was first discovered and described in red blood cell (RBC) progenitors, where its activation stimulates their proliferation and survival. Erythropoietin (Epo) is mainly synthesized in the fetal liver and adult kidneys (1–3). Therefore, it was hypothesized that Epo acts exclusively on erythroid progenitor cells. Accordingly, the target in humans for EpoR agonist drugs (such as recombinant erythropoietin [rhEpo], generally called erythropoiesis-stimulating agents) appears clear (that is, to treat anemia). However, evidence of a kaleidoscope of pleiotropic actions of Epo has been provided (4,5). Research on the Epo/EpoR axis initially journeyed from basic laboratory research to clinical therapeutics; as a consequence of clinical observations, basic research on Epo/EpoR is now returning to expand its clinical therapeutic applicability.

Although the kidney and liver have long been considered the major sources of synthesis, Epo mRNA expression has also been detected in the brain (neurons and glial cells), lung, heart, bone marrow, spleen, hair follicles, reproductive tract and osteoblasts (6–17). Accordingly, EpoR was detected in other cells, such as neurons, astrocytes, microglia, immune cells, cancer cell lines, endothelial cells, bone marrow stromal cells and cells of the heart, reproductive system, gastrointestinal tract, kidney, pancreas and skeletal muscle (18–27). Conversely, Sinclair et al. (28) reported data questioning the presence or function of EpoR on nonhematopoietic cells (endothelial, neuronal and cardiac cells), suggesting that further studies are needed to confirm the diversity of EpoR. Elliott et al. (29) also showed that EpoR is virtually undetectable in human renal cells and other tissues, with no detectable EpoR on cell surfaces. These results have raised doubts about the preclinical basis for studies exploring the pleiotropic actions of rhEpo (30).

Because of these conflicting data, a return to basic research studies has become necessary, and many studies in animal models have been initiated or have already been performed.
The effects of rhEpo administration on angiogenesis, myogenesis, shifts in muscle fiber type and oxidative enzyme activities in skeletal muscle (4,31), cardiac muscle mitochondrial biogenesis (32), cognition (31), antiapoptotic and antiinflammatory actions (33–37) and plasma glucose concentrations (38) have been extensively studied. Neuroprotective and cardioprotective properties have been the most extensively described. Accordingly, rhEpo therapy was suggested as a reliable approach for treating a broad range of pathologies, including heart and cardiovascular diseases, neurodegenerative disorders (Parkinson’s and Alzheimer’s disease), spinal cord injury, stroke, diabetic retinopathy and rare diseases (Friedreich ataxia).

Unfortunately, the side effects of rhEpo are also evident. Epo is involved in regulating tumor angiogenesis (39) and probably in the survival and growth of tumor cells (25,40,41). rhEpo administration also induces serious side effects such as hypertension, polycythemia, myocardial infarction, stroke and seizures, platelet activation and increased thromboembolic risk, and immunogenicity (42–46), with the most common being hypertension (47,48). A new generation of nonhematopoietic EpoR agonist drugs has hence been investigated and further developed in animal models. These compounds, namely asialoerythropoietin (asialoEpo) and carbamylated Epo (Cepo), were developed to preserve the tissue-protective properties of native Epo while reducing its erythropoietic activity (49,50). These drugs may provide better outcomes in ongoing clinical trials. The advantage of using nonhematopoietic Epo analogs is that they avoid stimulating hematopoiesis, thereby preventing an increased hematocrit with its attendant procoagulant state or increased blood pressure. In this regard, a new study by van Rijt et al. has shed new light on this topic (51): a nonhematopoietic EpoR agonist named ARA 290 has been developed that shows promising cytoprotective capacity to prevent renal ischemia/reperfusion injury (51). ARA 290 is a short peptide that has shown no safety concerns in preclinical and human studies. In addition, ARA 290 has proven efficacious in cardiac disorders (52,53), neuropathic pain (54) and sarcoidosis-induced chronic neuropathic pain (55). Thus, ARA 290 is a novel nonhematopoietic EpoR agonist with promising therapeutic options for treating a wide range of pathologies without increased risks of cardiovascular events.

Overall, this new generation of EpoR agonists, which lack the erythropoietic activity of Epo while preserving its tissue-protective properties, is expected to provide better outcomes in ongoing clinical trials (49,50). Nonhematopoietic EpoR agonists represent safer and more effective surrogates for the treatment of several diseases, such as brain and peripheral nerve injury, diabetic complications, renal ischemia, rare diseases, myocardial infarction, chronic heart disease and others.  相似文献

14.
15.

Background:

Falls cause more than 60% of head injuries in older adults. Lack of objective evidence on the circumstances of these events is a barrier to prevention. We analyzed video footage to determine the frequency of and risk factors for head impact during falls in older adults in 2 long-term care facilities.

Methods:

Over 39 months, we captured on video 227 falls involving 133 residents. We used a validated questionnaire to analyze the mechanisms of each fall. We then examined whether the probability for head impact was associated with upper-limb protective responses (hand impact) and fall direction.

Results:

Head impact occurred in 37% of falls, usually onto a vinyl or linoleum floor. Hand impact occurred in 74% of falls but had no significant effect on the probability of head impact (p = 0.3). An increased probability of head impact was associated with a forward initial fall direction, compared with backward falls (odds ratio [OR] 2.7, 95% confidence interval [CI] 1.3–5.9) or sideways falls (OR 2.8, 95% CI 1.2–6.3). In 36% of sideways falls, residents rotated to land backwards, which reduced the probability of head impact (OR 0.2, 95% CI 0.04–0.8).

Interpretation:

Head impact was common in observed falls in older adults living in long-term care facilities, particularly in forward falls. Backward rotation during descent appeared to be protective, but hand impact was not. Attention to upper-limb strength and teaching rotational falling techniques (as in martial arts training) may reduce fall-related head injuries in older adults.

Falls from standing height or lower are the cause of more than 60% of hospital admissions for traumatic brain injury in adults older than 65 years.1–5 Traumatic brain injury accounts for 32% of hospital admissions and more than 50% of deaths from falls in older adults.1,6–8 Furthermore, the incidence and age-adjusted rate of fall-related traumatic brain injury is increasing,1,9 especially among people older than 80 years, among whom rates have increased threefold over the past 30 years.10 One-quarter of fall-related traumatic brain injuries in older adults occur in long-term care facilities.1

The development of improved strategies to prevent fall-related traumatic brain injuries is an important but challenging task. About 60% of residents in long-term care facilities fall at least once per year,11 and falls result from complex interactions of physiologic, environmental and situational factors.12–16 Any fall from standing height has sufficient energy to cause brain injury if direct impact occurs between the head and a rigid floor surface.17–19 Improved understanding is needed of the factors that separate falls that result in head impact and injury from those that do not.1,10 Falls in young adults rarely result in head impact, owing to protective responses such as use of the upper limbs to stop the fall, trunk flexion and rotation during descent.20–23 We have limited evidence of the efficacy of protective responses to falls among older adults.

In the current study, we analyzed video footage of real-life falls among older adults to estimate the prevalence of head impact from falls, and to examine the association between head impact and biomechanical and situational factors.  相似文献
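The abstract reports odds ratios but not the underlying cell counts. The sketch below shows how an odds ratio and its Woolf (log-scale) confidence interval are computed from a 2×2 table; the counts are hypothetical and merely chosen so that the point estimate lands near the reported OR of 2.7:

```python
import math

# Hypothetical 2x2 counts: head impact (yes/no) by initial fall direction.
a, b = 30, 35   # forward falls: head impact / no head impact (invented)
c, d = 10, 32   # backward falls: head impact / no head impact (invented)

odds_ratio = (a * d) / (b * c)
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf's method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log)
print(f"OR {odds_ratio:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # OR 2.7 (1.2-6.5)
```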

16.

Background:

Previous studies have suggested that the immunochemical fecal occult blood test is specific for detecting bleeding in the lower gastrointestinal tract, even when bleeding also occurs in the upper tract. We conducted a large population-based study involving asymptomatic adults in Taiwan, a population with prevalent upper gastrointestinal lesions, to confirm this claim.

Methods:

We conducted a prospective cohort study involving asymptomatic people aged 18 years or more in Taiwan recruited to undergo an immunochemical fecal occult blood test, colonoscopy and esophagogastroduodenoscopy between August 2007 and July 2009. We compared the prevalence of lesions in the lower and upper gastrointestinal tracts between patients with positive and negative fecal test results. We also identified risk factors associated with a false-positive fecal test result.

Results:

Of the 2796 participants, 397 (14.2%) had a positive fecal test result. The sensitivity of the test for predicting lesions in the lower gastrointestinal tract was 24.3%, the specificity 89.0%, the positive predictive value 41.3%, the negative predictive value 78.7%, the positive likelihood ratio 2.22, the negative likelihood ratio 0.85 and the accuracy 73.4%. The prevalence of lesions in the lower gastrointestinal tract was higher among those with a positive fecal test result than among those with a negative result (41.3% v. 21.3%, p < 0.001). The prevalence of lesions in the upper gastrointestinal tract did not differ significantly between the two groups (20.7% v. 17.5%, p = 0.12). Almost all of the participants found to have colon cancer (27/28, 96.4%) had a positive fecal test result; in contrast, none of the three found to have esophageal or gastric cancer had a positive fecal test result (p < 0.001). Among those with a negative finding on colonoscopy, the risk factors associated with a false-positive fecal test result were use of antiplatelet drugs (adjusted odds ratio [OR] 2.46, 95% confidence interval [CI] 1.21–4.98) and a low hemoglobin concentration (adjusted OR 2.65, 95% CI 1.62–4.33).

Interpretation:

The immunochemical fecal occult blood test was specific for predicting lesions in the lower gastrointestinal tract. However, the test did not adequately predict lesions in the upper gastrointestinal tract.

The fecal occult blood test is a convenient tool to screen for asymptomatic gastrointestinal bleeding.1 When the test result is positive, colonoscopy is the strategy of choice to investigate the source of bleeding.2,3 However, 13%–42% of patients can have a positive test result but a negative colonoscopy,4 and it has not yet been determined whether asymptomatic patients should then undergo evaluation of the upper gastrointestinal tract.

Previous studies showed that the frequency of lesions in the upper gastrointestinal tract was comparable to or even higher than that of colonic lesions5–9 and that the use of esophagogastroduodenoscopy may change clinical management.10,11 Some studies showed that evaluation of the upper gastrointestinal tract helped to identify important lesions in symptomatic patients and those with iron deficiency anemia;12,13 however, others concluded that esophagogastroduodenoscopy was unjustified because important findings in the upper gastrointestinal tract were rare14–17 and sometimes irrelevant to the results of fecal occult blood testing.18–21 This controversy is related to the heterogeneity of study populations and to the limitations of the formerly used guaiac-based fecal occult blood test,5–20 which was not able to distinguish bleeding in the lower gastrointestinal tract from that originating in the upper tract.

The guaiac-based fecal occult blood test is increasingly being replaced by the immunochemical-based test. The latter is recommended for detecting bleeding in the lower gastrointestinal tract because it reacts with human globin, a protein that is digested by enzymes in the upper gastrointestinal tract.22 With this advantage, the occurrence of a positive fecal test result and a negative finding on colonoscopy is expected to decrease.

We conducted a population-based study in Taiwan to verify the performance of the immunochemical fecal occult blood test in predicting lesions in the lower gastrointestinal tract and to confirm that results are not confounded by the presence of lesions in the upper tract. In Taiwan, the incidence of colorectal cancer is rapidly increasing, and Helicobacter pylori-related lesions in the upper gastrointestinal tract remain highly prevalent.23 Same-day bidirectional endoscopies are therefore commonly used for cancer screening.24 This screening strategy provides an opportunity to evaluate the performance of the immunochemical fecal occult blood test.  相似文献
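Every test characteristic quoted in the Results follows from one 2×2 confusion matrix. The counts below are back-calculated from the reported percentages (397 positive tests among 2796 participants, PPV 41.3%, NPV 78.7%), so they are approximate reconstructions rather than published raw data:

```python
# Confusion matrix for the fecal test against lower-GI lesions (reconstructed).
tp, fp = 164, 233      # positive test: lesion present / absent
fn, tn = 511, 1888     # negative test: lesion present / absent

sens = tp / (tp + fn)                   # 24.3%
spec = tn / (tn + fp)                   # 89.0%
ppv = tp / (tp + fp)                    # 41.3%
npv = tn / (tn + fn)                    # 78.7%
lr_pos = sens / (1 - spec)              # 2.21 (abstract rounds to 2.22)
lr_neg = (1 - sens) / spec              # 0.85
acc = (tp + tn) / (tp + fp + fn + tn)   # 73.4%

print(f"sens {sens:.1%}  spec {spec:.1%}  PPV {ppv:.1%}  NPV {npv:.1%}")
print(f"LR+ {lr_pos:.2f}  LR- {lr_neg:.2f}  accuracy {acc:.1%}")
```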

17.
Master Z  Resnik DB 《EMBO reports》2011,12(10):992-995
Stem-cell tourism exploits the hope of patients desperate for therapies and cures. Scientists have both a special responsibility and a unique role to play in addressing this problem.

During the past decade, thousands of patients with a variety of diseases unresponsive to conventional treatment have gone abroad to receive stem-cell therapies. This phenomenon, commonly referred to as ‘stem-cell tourism’, raises significant ethical concerns, because patients often receive treatments that are not only unproven, but also unregulated, potentially dangerous or even fraudulent (Kiatpongsan & Sipp, 2009; Lindvall & Hyun, 2009). Stem-cell clinics have sprung up in recent years to take advantage of desperate patients who have exhausted other alternatives (Ryan et al, 2010). These clinics usually advertise their services directly to consumers through the Internet, make extravagant claims about the benefits, downplay the risks involved and charge hefty fees of US $20,000 or more for treatments (Lau et al, 2008; Regenberg et al, 2009).

Stem-cell tourism is regarded as ethically problematic because patients receive unproven therapies from untrustworthy sources

With a few exceptions—such as the use of bone-marrow haematopoietic cells to treat leukaemia—novel stem-cell therapies are often unproven in clinical trials (Lindvall & Hyun, 2009). Even well-proven therapies can lead to tumour formation, tissue rejection, autoimmunity, permanent disability and death (Gallagher & Forrest, 2007; Murphy & Blazar, 1999). The risks of unproven and unregulated therapies are potentially much worse (Barclay, 2009).

In this commentary, we argue that stem-cell scientists have a unique and important role to play in addressing the problem of stem-cell tourism. Stem-cell scientists should carefully examine all requests to provide cell lines and other materials, and share them only with responsible investigators or clinicians. They should require recipients of stem cells to sign material transfer agreements (MTAs) that describe how the cells may be used, and to provide documentation about their scientific or medical qualifications.

In discussing these ethical and regulatory issues, it is important to distinguish between stem-cell tourism and other types of travel to receive medical treatment, including stem-cell therapy. Stem-cell tourism is regarded as ethically problematic because patients receive unproven therapies from untrustworthy sources. Other forms of travel usually do not raise troubling ethical issues (Lindvall & Hyun, 2009). Many patients go to other countries to receive proven stem-cell therapies—such as haematopoietic cells to treat leukaemia—from responsible physicians. Other patients obtain unproven stem-cell treatments by participating in scientifically valid, legally sanctioned clinical trials, or by receiving ethically responsible, innovative medical care (Lindvall & Hyun, 2009). In some cases, patients need to travel because the therapy is approved in only some countries; by way of example, on 1 July 2011, Korea became the first country to approve the clinical use of adult stem cells to treat heart attack victims (Heejung & Yi, 2011).

…even when regulations are in place, unscrupulous individuals might still evade these rules

Any medical innovation is ethically responsible when it is based on animal studies or other research that provides evidence of safety and clinical efficacy.
Adequate measures must also be taken to protect patients from harm, such as clinical monitoring, follow-up, exclusion of individuals who are likely to be harmed or are unlikely to benefit, use of only clinical-grade stem cells, careful attention to dosing strategies and informed consent (Lindvall & Hyun, 2009).

Many of the articles examining the ethics of stem-cell tourism have focused on the need for more regulatory oversight and education to prevent harm (Lindvall & Hyun, 2009; Caplan & Levine, 2010; Cohen & Cohen, 2010; Zarzeczny & Caulfield, 2010). We agree that additional regulations are needed, as there is little oversight of stem-cell research or therapy at present. Although most countries have regulations for conducting research with human subjects, as well as medical malpractice and licensing laws, these provide general guidance and do not directly address stem-cell therapy.

Regulations have significant limitations, however. First, regulations apply intra-nationally, not internationally. If a country passes laws designed to oversee therapy and research, these laws would not apply in another nation. Physicians and investigators who do not want to adhere to these rules can simply move to another country that has a permissive legal environment. International agreements can help to close this regulatory gap, but there will still be countries that do not accept or abide by these agreements. Second, even when regulations are in place, unscrupulous individuals might still evade these rules (Resnik, 1999).

Educating patients about the risks of unproven therapies can also help to address the problem of stem-cell tourism. However, education too has significant limitations, since many people will remain ignorant of the dangers of unproven therapies, or they will simply ignore warnings and prudent advice. For many years, cancer patients have travelled to foreign countries to receive unconventional and unproven treatments, despite educational campaigns and media reports discussing the dangers of these therapies. Since the 1970s, thousands of patients have travelled to cancer clinics in Mexico to receive medical treatments not available in the USA (Moss, 2005).

Education for physicians on the dangers of unproven stem-cell therapies can be helpful, but this strategy also has limitations, since many will not receive this education or will choose to ignore it. Additionally, responsible physicians might still find it difficult to persuade their patients not to receive an unproven therapy, especially when conventional treatments have failed. The history of cancer treatment offers important lessons here, since many oncologists have tried, unsuccessfully, to convince their patients not to travel to foreign countries to receive questionable treatments (Moss, 2005).

Since regulation and education have significant shortcomings, it is worth considering another strategy for dealing with the problem of stem-cell tourism, one that focuses on the social responsibilities of stem-cell scientists.

Many codes of ethics adopted by scientific associations include provisions relating to social responsibilities (Shamoo & Resnik, 2009). For example, the Code of Ethics of the American Society for Biochemistry and Molecular Biology states that “investigators will promote and follow practices that enhance the public interest or well-being” (American Society of Microbiology, 2011).
Social responsibilities in science include an obligation to avoid causing harm and an obligation to benefit the public (Shamoo & Resnik, 2009).

There are two distinct rationales for the social responsibilities of stem-cell scientists. First, scientists should be accountable to the public, since the public provides scientists with funding, facilities and staff (Shamoo & Resnik, 2009). Second, stem-cell scientists are uniquely positioned to exercise their social responsibilities and take effective action against stem-cell tourism. They understand the science behind stem-cell research, including the potential for harm and the likely clinical efficacy. This knowledge can be used to evaluate the scientific validity of the different uses of stem cells, especially clinical uses. Stem-cell scientists also have control over cell lines and other materials that they may or may not choose to share with other researchers or physicians.

Many of the private clinics that offer stem-cell treatments are relatively small and often depend on acquiring resources from scientists working in the field. The materials they might require include adult, embryonic and fetal stem-cell lines; vectors that can be used to induce pluripotency in isolated adult cells; genes, DNA and RNA sequences; antibodies; purified protein products, such as growth factors; and special cocktails, media or extracellular matrices for culturing specific stem-cell types.

One way in which stem-cell scientists can help to address the problem of stem-cell tourism is to refuse to share cell lines or other materials with physicians or investigators whom they believe might be behaving irresponsibly. To decide whether someone who requests materials is a responsible individual, stem-cell scientists should ask recipients to supply documentation, such as a CV, a website, a research or clinical protocol, or a clinical trial number, as evidence of their work and expertise in stem cells. This helps to ensure that the stem cells and other materials will be used in the course of responsible biomedical research, a legally sanctioned clinical trial, or responsible medical innovation. If recipients provide insufficient documentation, scientists should refuse to honour their requests for materials.

Stem-cell scientists should also require recipients to sign MTAs that describe what will be done with the material supplied. MTAs are contracts governing the transfer of materials between organizations and typically include a variety of terms and conditions, such as the purposes for which the materials may be used—commercial or academic research, for example—modification of the materials, transfers to third parties, intellectual property rights, and compliance with legal, regulatory and other policies (Rodriguez, 2005).

To help address the problem of stem-cell tourism, MTAs should state whether the materials will be used in humans, and under what conditions.
If the stem cells are not clinical grade, the MTA should state that they will not be transplanted into humans unless the recipients have a well-developed and legally sanctioned procedure—approved by the Food and Drug Administration or another relevant agency—for verifying the quality of the cells and making the changes necessary to render them acceptable for human use. For example, recipients seeking to develop clinical-grade cell lines could test the cells for viral and bacterial infections, mutations, chemical impurities or other factors that would compromise their clinical utility.

In addition, the MTA could stipulate that scientists must follow the ethical Guidelines for Clinical Translation of Stem Cells issued by the International Society for Stem Cell Research (Hyun et al, 2008). These guidelines set forth various preclinical and clinical conditions for stem-cell interventions. Describing such conditions might help to deter unscrupulous individuals from using stem cells for scientifically and ethically questionable practices. By evaluating a recipient's qualifications and intended uses of stem-cell lines and other reagents, scientists demonstrate social responsibility and uphold public trust when sharing materials.

Since an MTA is a type of contract between institutions, there is legal recourse if it is broken. A plaintiff could sue a defendant that violates an MTA for breach of contract. Also, if the aggrieved party is a funding agency, it could withhold research funding from the offending party. The onus is on the plaintiff—the scientist and the scientific organization providing the materials—to file a lawsuit against the defendants for breach of contract, which requires the scientist or others in the organization to follow up and ensure that the transferred materials are being used in compliance with the conditions set forth in the MTA.

Some might object to our proposal because it violates the principle of scientific openness, which is an integral part of the ethos of science (Shamoo & Resnik, 2009). Scientists have an obligation to share data, reagents, cell lines, methods and other research tools, because sharing is vital to the progress of science. Many granting agencies and journals also have policies that require scientists to make data and materials available to other scientists on request (Shamoo & Resnik, 2009). Although openness is vital to the ethical practice of science, it can be superseded by other important considerations, such as protecting the privacy and confidentiality of human research subjects, safeguarding proprietary or classified research, securing intellectual property or scientific priority, or preventing bioterrorism (Shamoo & Resnik, 2009).
We consider tackling the problem of stem-cell tourism to be a sufficiently important reason for refusing to share research materials in some situations.

Some might also object to our proposal on the grounds that it places unnecessary burdens on already overworked scientists, or that unscrupulous scientists and physicians will find alternative ways to obtain stem cells even if investigators refuse to share them. We recognize the need to avoid burdening researchers unnecessarily with administrative work, but we think that verifying the qualifications of a recipient and reviewing a protocol is a reasonable burden. If principal investigators do not wish to shoulder this responsibility, they can ask a postdoctoral fellow or another senior member of the laboratory or faculty to help them. Far from being a waste of time and effort, taking some simple steps to determine whether requests for stem cells come from responsible physicians or investigators can be an important part of the scientific community's response to stem-cell tourism.

A month before his death in 1963, former US President John F. Kennedy (1917–1963) delivered an address at the Centennial Convocation of the National Academy of Sciences in which he said: “If scientific discovery has not been an unalloyed blessing, if it has conferred on mankind the power not only to create but also to annihilate, it has at the same time provided humanity with a supreme challenge and a supreme testing.” Stem-cell scientists can rise to this challenge and address the problem of stem-cell tourism by ensuring that the products of their research are controlled responsibly and shared wisely with genuine investigators or clinicians through the use of MTAs. Doing so should help to deter fraudulent scientists or physicians from exploiting patients who travel to foreign countries in their desperate search for cures.

Zubin Master, David B Resnik

18.

Background:

Clinical trials are commonly conducted without blinded outcome assessors, despite the risk of bias. We aimed to evaluate the effect of nonblinded outcome assessment on estimated treatment effects in randomized clinical trials with outcomes measured on subjective scales.

Methods:

We conducted a systematic review of randomized clinical trials with both blinded and nonblinded assessment of the same measurement scale outcome. We searched PubMed, EMBASE, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press and Google Scholar for relevant studies. Two investigators agreed on the inclusion of trials and the outcome scale. For each trial, we calculated the difference in effect size (i.e., standardized mean difference between nonblinded and blinded assessments). A difference in effect size of less than 0 suggested that nonblinded assessors generated more optimistic estimates of effect. We pooled the differences in effect size using inverse variance random-effects meta-analysis and used metaregression to identify potential reasons for variation.
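To make the pooling step concrete, here is a minimal sketch of an inverse-variance random-effects meta-analysis of per-trial differences in effect size (DES), using the standard DerSimonian–Laird estimator. The input values are illustrative placeholders, not the trial data from this review.

```python
import numpy as np

# Per-trial differences in effect size (DES = nonblinded SMD minus blinded
# SMD) and their standard errors. These numbers are illustrative only.
des = np.array([-0.35, -0.10, -0.42, 0.05, -0.28])
se = np.array([0.15, 0.12, 0.20, 0.18, 0.14])

# Fixed-effect (inverse-variance) weights and Cochran's Q.
w = 1.0 / se**2
fixed_mean = np.sum(w * des) / np.sum(w)
q = np.sum(w * (des - fixed_mean) ** 2)
df = len(des) - 1

# DerSimonian-Laird estimate of the between-trial variance tau^2.
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects weights, pooled DES and its 95% confidence interval.
w_re = 1.0 / (se**2 + tau2)
pooled = np.sum(w_re * des) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# I^2: percentage of total variability attributable to heterogeneity.
i2 = max(0.0, (q - df) / q) * 100.0

print(f"pooled DES = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f}), I^2 = {i2:.0f}%")
```

A pooled DES below 0 indicates that nonblinded assessors produced more optimistic effect estimates, mirroring the convention stated above.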

Results:

We included 24 trials in our review. The main meta-analysis included 16 trials (involving 2854 patients) with subjective outcomes. The estimated treatment effect was more beneficial when based on nonblinded assessors (pooled difference in effect size −0.23 [95% confidence interval (CI) −0.40 to −0.06]). In relative terms, nonblinded assessors exaggerated the pooled effect size by 68% (95% CI 14% to 230%). Heterogeneity was moderate (I² = 46%, p = 0.02) and unexplained by metaregression.
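The abstract does not spell out how the 68% relative figure was derived. One plausible reading, sketched below under that assumption, is that the pooled DES is expressed as a fraction of the pooled effect size obtained from blinded assessments; the −0.34 used here is a made-up placeholder, not a figure from the review.

```python
# Hypothetical conversion of an absolute DES into a relative exaggeration.
des_pooled = -0.23      # pooled DES reported in the abstract
blinded_pooled = -0.34  # assumed pooled effect size under blinded assessment

relative_exaggeration = des_pooled / blinded_pooled * 100.0
print(f"relative exaggeration = {relative_exaggeration:.0f}%")  # prints 68%
```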

Interpretation:

We provide empirical evidence for observer bias in randomized clinical trials with subjective measurement scale outcomes. A failure to blind assessors of outcomes in such trials results in a high risk of substantial bias.

A failure to blind assessors of outcomes in randomized clinical trials may result in bias. Observer bias, sometimes called “detection bias” or “ascertainment bias,” occurs when outcome assessments are systematically influenced by the assessors' conscious or unconscious predispositions — for example, because of hope or expectations, often favouring the experimental intervention.1

Blinded outcome assessors are used in many trials to avoid such bias. However, the use of nonblinded assessors remains common,2–4 especially in nonpharmacological trials; for example, nonblinded outcome assessment was used in 90% of trials involving orthopedic traumatology3 and 74% of trials involving strength training for muscles.4

Unfortunately, the empirical evidence on observer bias in randomized clinical trials has been incomplete. Meta-epidemiological studies have compared double-blind trials with similar trials that were not double-blind.5,6 However, such studies address blinding crudely, because “double-blind” is an ambiguous term.3,7 Furthermore, the risk of confounding is considerable in indirect between-trial analyses, as “double-blind” trials may have better overall methods and larger sample sizes than trials not reported as “double-blind.”

A more reliable approach involves analysing trials that use both blinded and nonblinded outcome assessors, because such a within-trial design provides a direct comparison between blinded and nonblinded assessments of the same outcome in the same patients. Our previous analysis of such trials with binary outcomes found substantial observer bias.8

Subjective measurement scales, such as illness severity scores, are widely used but may be susceptible to observer bias. They are frequently used as outcomes in clinical scenarios with no naturally distinct categories, and adjacent subcategories on a scale typically involve minor and vaguely defined differences.

We decided to systematically review trials with both blinded and nonblinded assessment of outcomes using the same measurement scales. Our primary objective was to evaluate the impact of nonblinded outcome assessment on estimated treatment effects in randomized clinical trials. Our secondary objective was to examine reasons for variation in observer bias.

19.

Background:

Uncircumcised boys are at higher risk for urinary tract infections than circumcised boys. Whether this risk varies with the visibility of the urethral meatus is not known. Our aim was to determine whether there is a hierarchy of risk among uncircumcised boys whose urethral meatuses are visible to differing degrees.

Methods:

We conducted a prospective cross-sectional study in one pediatric emergency department. We screened 440 circumcised and uncircumcised boys. Of these, 393 boys who were not toilet trained and for whom the treating physician had requested a catheter urine culture were included in our analysis. At the time of catheter insertion, a nurse characterized the visibility of the urethral meatus (phimosis) using a 3-point scale (completely visible, partially visible or nonvisible). Our primary outcome was urinary tract infection, and our primary exposure variable was the degree of phimosis: completely visible versus partially or nonvisible urethral meatus.

Results:

Urine cultures were positive for 30.0% of uncircumcised boys with a completely visible meatus, and for 23.8% of those with a partially or nonvisible meatus (p = 0.4). The unadjusted odds ratio (OR) for a positive culture was 0.73 (95% confidence interval [CI] 0.35–1.52), and the adjusted OR was 0.41 (95% CI 0.17–0.95). Of the boys who were circumcised, 4.8% had urinary tract infections, which was significantly lower than the rate among uncircumcised boys with a completely visible urethral meatus (unadjusted OR 0.12 [95% CI 0.04–0.39], adjusted OR 0.07 [95% CI 0.02–0.26]).
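To make the arithmetic behind the unadjusted OR concrete, here is a small sketch using a 2×2 table; the counts are hypothetical, chosen only to roughly mirror the reported proportions. The adjusted OR would additionally require a multivariable logistic regression, which is not shown.

```python
import math

# Hypothetical 2x2 table (counts are illustrative, not the study's data).
# Rows: uncircumcised boys with a partially/nonvisible vs completely
# visible meatus; columns: positive vs negative urine culture.
a, b = 24, 77  # partially/nonvisible meatus: positive, negative
c, d = 30, 70  # completely visible meatus: positive, negative

odds_ratio = (a * d) / (b * c)

# Wald 95% CI, computed on the log-odds-ratio scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"unadjusted OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```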

Interpretation:

We did not see variation in the risk of urinary tract infection with the visibility of the urethral meatus among uncircumcised boys. Compared with circumcised boys, we saw a higher risk of urinary tract infection in uncircumcised boys, irrespective of urethral visibility.

Urinary tract infections are among the most common serious bacterial infections in young children.1–6 Prompt diagnosis is important, because children with urinary tract infection are at risk for bacteremia6 and renal scarring.1,7 Uncircumcised boys have a much higher risk of urinary tract infection than circumcised boys,1,3,4,6,8–12 likely as a result of heavier colonization under the foreskin with pathogenic bacteria, which leads to ascending infections.13,14 The American Academy of Pediatrics recently suggested that circumcision status be used to select which boys should be evaluated for urinary tract infection.1 However, whether all uncircumcised boys are at equal risk for infection, or whether the risk varies with the visibility of the urethral opening, is not known. It has been suggested that a subset of uncircumcised boys with a poorly visible urethral opening are at increased risk of urinary tract infection,15–17 leading some experts to consider giving children with tight foreskins topical cortisone or circumcision to prevent urinary tract infections.13,18–21

We designed a study to challenge the opinion that all uncircumcised boys are at increased risk for urinary tract infections. We hypothesized a hierarchy of risk among uncircumcised boys depending on the visibility of the urethral meatus, with those with a partially or nonvisible meatus at highest risk, and those with a completely visible meatus having a level of risk similar to that of circumcised boys. Our primary aim was to compare the proportion of urinary tract infections among uncircumcised boys with a completely visible meatus with the proportion among those with a partially or nonvisible meatus.

20.
Understanding the structural and assembly dynamics of the amyloid β-protein (Aβ) has direct relevance to the development of therapeutic agents for Alzheimer disease. To elucidate these dynamics, we combined scanning amino acid substitution with a method for quantitative determination of the Aβ oligomer frequency distribution, photo-induced cross-linking of unmodified proteins (PICUP), to perform “scanning PICUP.” Tyr, a reactive group in PICUP, was substituted at position 1, 10, 20, 30, or 40 (for Aβ40) or 42 (for Aβ42). The effects of these substitutions were probed using circular dichroism spectroscopy, thioflavin T binding, electron microscopy, PICUP, and mass spectrometry. All peptides displayed a random coil → α/β → β transition, but substitution-dependent alterations in assembly kinetics and conformer complexity were observed. Tyr1-substituted homologues of Aβ40 and Aβ42 assembled the slowest and yielded unusual patterns of oligomer bands in gel electrophoresis experiments, suggesting oligomer compaction had occurred. Consistent with this suggestion was the observation of relatively narrow [Tyr1]Aβ40 fibrils. Substitution of Aβ40 at the C terminus decreased the population conformational complexity and substantially extended the highest order of oligomers observed. This latter effect was observed in both Aβ40 and Aβ42 as the Tyr substitution position number increased. The ability of a single substitution (Tyr1) to alter Aβ assembly kinetics and the oligomer frequency distribution suggests that the N terminus is not a benign peptide segment, but rather that Aβ conformational dynamics and assembly are affected significantly by the competition between the N and C termini to form a stable complex with the central hydrophobic cluster.

Alzheimer disease (AD) is the most common cause of late-life dementia (1) and is estimated to afflict more than 27 million people worldwide (2). An important etiologic hypothesis is that amyloid β-protein (Aβ) oligomers are the proximate neurotoxins in AD. Substantial in vivo and in vitro evidence supports this hypothesis (3–12). Neurotoxicity studies have shown that Aβ assemblies are potent neurotoxins (5, 13–20), and the toxicity of some oligomers can be greater than that of the corresponding fibrils (21). Soluble Aβ oligomers inhibit hippocampal long term potentiation (4, 5, 13, 15, 17, 18, 22) and disrupt cognitive function (23). Compounds that bind and disrupt the formation of oligomers have been shown to block the neurotoxicity of Aβ (24, 25). Importantly, recent studies in higher vertebrates (dogs) have shown that substantial reduction in amyloid deposits in the absence of decreases in oligomer concentration has little effect on recovery of neurological function (26).

Recent studies of Aβ oligomers have sought to correlate oligomer size and biological activity. Oligomers in the supernatants of fibril preparations centrifuged at 100,000 × g caused sustained calcium influx in rat hippocampal neurons, leading to calpain activation and dynamin 1 degradation (27). Aβ-derived diffusible ligand-like Aβ42 oligomers induced inflammatory responses in cultured rat astrocytes (28). A 90-kDa Aβ42 oligomer (29) has been shown to activate ERK1/2 in rat hippocampal slices (30) and bind avidly to human cortical neurons (31), in both cases causing apoptotic cell death.
A comparison of the time dependence of the toxic effects of the 90-kDa assembly with that of Aβ-derived diffusible ligands revealed a 5-fold difference, Aβ-derived diffusible ligands requiring more time for equivalent effects (31). A 56-kDa oligomer, “Aβ*56,” was reported to cause memory impairment in middle-aged transgenic mice expressing human amyloid precursor protein (32). A nonamer also had adverse effects. Impaired long term potentiation in rat brain slices has been attributed to Aβ trimers identified in media from cultured cells expressing human amyloid precursor protein (33). Dimers and trimers from this medium also have been found to cause progressive loss of synapses in organotypic rat hippocampal slices (10). In mice deficient in neprilysin, an enzyme that has been shown to degrade Aβ in vivo (34), impairment in neuronal plasticity and cognitive function correlated with significant increases in Aβ dimer levels and synapse-associated Aβ oligomers (35).

The potent pathologic effects of Aβ oligomers provide a compelling reason for elucidating the mechanism(s) of their formation. This has been a difficult task because of the metastability and polydispersity of Aβ assemblies (36). To obviate these problems, we introduced the use of photo-induced cross-linking of unmodified proteins (PICUP) to rapidly (<1 s) and covalently stabilize oligomer mixtures (for reviews, see Refs. 37 and 38). Oligomers thus stabilized no longer exist in equilibrium with monomers or each other, allowing determination of oligomer frequency distributions by simple techniques such as SDS-PAGE (37). Recently, to obtain population-average information on the contributions to fibril formation of amino acid residues at specific sites in Aβ, we employed a scanning intrinsic fluorescence approach (39). Tyr was used because it is a relatively small fluorophore, exists natively in Aβ, and possesses the side chain most reactive in the PICUP chemistry (40). Using this approach, we found that the central hydrophobic cluster region (Leu17–Ala21) was particularly important in controlling fibril formation of Aβ40, whereas the C terminus was the predominant structural element controlling Aβ42 assembly (39). Here we present the results of studies in which key strategic features of the two methods were combined to enable “scanning PICUP” and the consequent revelation of site-specific effects on Aβ oligomerization.
