1.
How does the womb determine the future? Scientists have begun to uncover how environmental and maternal factors influence our long-term health prospects.

About two decades ago, David Barker, Professor of Clinical Epidemiology at the University of Southampton, UK, proposed the hypothesis that malnutrition during pregnancy and the resultant low birth weight increase the risk of developing cardiovascular disease in adulthood. “The womb may be more important than the home,” remarked Barker in a note about his theory (Barker, 1990). “The old model of adult degenerative disease was based on the interaction between genes and an adverse environment in adult life. The new model that is developing will include programming by the environment in fetal and infant life.”

The ‘Barker theory’ has been increasingly accepted and has been expanded to other diseases, prominently diabetes and obesity, but also osteoporosis and allergies. “In the last few years, the evidence [of an extended] range of potential disease phenotypes with a prenatal developmental component to risk […] has become much stronger,” said Peter Gluckman at the University of Auckland, New Zealand.
“We also need to give greater attention to the growing evidence of prenatal and early postnatal effects on cognitive and non-cognitive functional development and to variation in life history patterns.” Similarly, Michael Symonds and colleagues from the University Hospital at Nottingham, UK, wrote: “These critical periods occur at times when fetal development is plastic; in other words, when the fetus is experiencing rapid cell proliferation making it sensitive to environmental challenges” (Symonds et al, 2009).

This new idea about the influence of the environment during prenatal development on adult disease risk comes with a better understanding of epigenetic processes—the biological mechanisms that explain how in utero experiences could translate into phenotypic variation and disease susceptibility within, or over several, generations (Gluckman et al, 2009; Fig 1). “I think it has been the combination of good empirical data (experimental and clinical), the appearance of epigenetic data to provide molecular mechanisms and a sound theoretical framework (based on evolutionary biology) that has allowed this field to mature,” said Gluckman. “Having said that, I think it is only as more human molecular data (epigenetic) emerges that this will happen.”

Figure 1 | Environmental sensitivity of the epigenome throughout life. Adapted from Gluckman et al (2009), with permission.

Epidemiological data in support of the Barker theory have come from investigations of the effects of the ‘Dutch famine’. Between November 1944 and May 1945, the western part of The Netherlands suffered a severe food shortage owing to the ravages of the Second World War. In large cities such as Utrecht, Amsterdam, Rotterdam and The Hague, the average individual daily rations were as low as 400–800 kcal.
In 1994, a large study involving hundreds of people born between November 1943 and February 1947 in a major hospital in Amsterdam was initiated to assess whether, and to what extent, the famine had prenatally affected the health of the subjects in later life. The Dutch Famine Birth Cohort Study (www.hongerwinter.nl) found a strong link between malnutrition and under-nutrition in utero and cardiovascular disease and diabetes in later life, as well as increased susceptibility to pulmonary diseases, altered coagulation and a higher incidence of breast cancer and other diseases, although some of these links were only found in a few cases.

More recently, a group led by Bastiaan Heijmans at the Leiden University Medical Centre in The Netherlands and Columbia University (New York, USA) conducted epigenetic studies of individuals who had been exposed to the Dutch famine during gestation. They analysed the level of DNA methylation at several candidate loci in the cohort and found decreased methylation of the imprinted insulin-like growth factor 2 (IGF2) gene—a key factor in human growth and development—compared with the unexposed, same-sex siblings of cohort members (Heijmans et al, 2008). Further studies have identified another six genes implicated in growth, metabolic and cardiovascular phenotypes that show altered methylation status associated with prenatal exposure to famine (Heijmans et al, 2009).

The overall conclusion from this work is that exposure to certain conditions in the womb can leave epigenetic marks that persist throughout life. “It is remarkable to realize that history can leave an imprint on our DNA that is visible up to six decades later. The current challenge is to scale up such studies to the genome,” said Heijmans. His team is now using high-throughput sequencing to see whether some genomic regions are more susceptible to prenatal environmental influences than others.
“Genome-scale data may also allow us to observe the hypothesized accumulation of epigenetic changes in specific biological processes, perhaps as a sign of adaptive responses,” he said.

Epigenetic modification of genes involved in key regulatory pathways is central to the mechanisms of nutritional programming of disease, but other factors also seem to have a role, including altered cell number or cell type, precocious activation of the hypothalamic–pituitary–adrenal axis, increased local glucocorticoid and endocrine sensitivity, impaired mitochondrial function and reduced oxidative capacity. “The particular type of mechanism invoked seems to vary between tissues according to the duration and timing of the nutritional intervention through pregnancy and/or lactation,” commented Symonds et al (2009).

“If we just focus on metabolic, cardiovascular and body compositional outcome, I think the data is now overwhelming that there is an important life-long early developmental contribution. The emergent data would suggest that the underpinning epigenetic processes are likely to be at least as important as genetic variation in contributing to disease risk,” commented Gluckman. His research in animal models has shown that epigenetic changes are potentially reversible in mammals through intervention during development, when the growing organism still has sufficient plasticity (Gluckman et al, 2007). For instance, the neonatal administration of leptin has a bidirectional effect on gene expression and the epigenetic status of key genes involved in metabolic regulation in adult rats; an effect that is dependent on prenatal nutrition and unaffected by post-weaning nutrition (normal compared with high-fat diet). In rats that were manipulated in utero by maternal under-nutrition and fed a hypercaloric diet after weaning, leptin treatment normalized adiposity and the hepatic expression of genes encoding proteins central to lipid metabolism and glucose homeostasis.
“The experimental data showing that programming is reversible is a critical proof of concept. I think there is still confusion as to the role of catch-up growth—its effect may be dependent on its timing and this may have implications for infant nutrition,” Gluckman said.

Central to this view of the link between the developing fetus and its later risk of metabolic disease is the idea of ‘developmental mismatch’. The fetus is programmed, largely through epigenetic changes, to match its environment. However, if the environment in childhood and adult life differs sharply from that during prenatal and early postnatal development, maladaptation can occur and bring disease in its wake (Gluckman & Hanson, 2006). Poor nutrition during fetal development, for example, would lead the organism to expect a hostile future environment, adversely affecting its ability to cope with a richer one. “Developmental factors do not cause disease in this context; rather they create a situation where the individual becomes more (or less) sensitive in an obesogenic postnatal environment,” said Gluckman. “The experimental and early clinical data point to both central and peripheral effects and this may explain why lifestyle intervention is so hard in some individuals.”

Yet there is another nutrition-related pathway that goes beyond mismatch. According to a recent, large population-based study published in The Lancet, maternal weight gain during pregnancy increases birth weight independently of genetic factors, which in turn increases the long-term risk of obesity-related disease in the offspring (Ludwig & Currie, 2010).
To reduce or eliminate potential confounds such as genetics, sociodemographic factors or other individual characteristics, the researchers examined the association between maternal weight gain—as a measure of over-nutrition during pregnancy—and birth weight using state-based birth registry data from Michigan and New Jersey, allowing them to compare outcomes from several pregnancies in the same mother. “During pregnancy, insulin resistance develops in the mother to shunt vital nutrients to the growing fetus. Excessive weight or weight gain during pregnancy exaggerates this normal process by further increasing insulin resistance and possibly also by affecting other maternal hormones that regulate placental nutrient transporters. The resulting high rate of nutrient transfer stimulates fetal insulin secretion, overgrowth, and increased adiposity,” the authors speculated (Ludwig & Currie, 2010).

It could be that epigenetic malprogramming is also involved in these cases. The group of Andreas Plagemann at the Charité–University Medicine in Berlin, Germany, analysed acquired alterations of the DNA methylation pattern of the hypothalamic insulin receptor promoter (IRP) in neonatally overfed rats. They found that altered nutrition during the critical developmental period of perinatal life induced IRP hypermethylation in a seemingly glucose-dependent manner. This revealed an epigenetic mechanism that could affect the function of a promoter controlling a receptor involved in the life-long regulation of food intake, body weight and metabolism (Plagemann et al, 2010). “In parallel with the general ‘diabesity’ epidemics, diabetes during pregnancy and overweight in pregnant women meanwhile reach dramatic prevalences. Consequently, mean birth weight and frequencies of ‘fat babies’ rise,” said Plagemann.
“Taking together epidemiological, clinical and experimental observations, it seems obvious that fetal hyperinsulinism induced by maternal hyperglycaemia/overweight has ‘functionally teratogenic’ significance for a permanently increased disposition to obesity, diabetes, the metabolic syndrome, and subsequent cardiovascular diseases in the offspring” (Fig 2).

Figure 2 | Pathogenetic framework, mechanisms and consequences of perinatal malprogramming, showing the etiological significance of perinatal overfeeding and hyperinsulinism for excess weight gain, obesity, diabetes mellitus and cardiovascular diseases in later life. Credit: Andreas Plagemann.

Added to the mix is the ‘endocrine-disruptor hypothesis’, one nuance of which proposes that prenatal—as well as postnatal—exposure to environmental chemicals contributes to adipogenesis and the development of obesity by interfering with the homeostatic mechanisms that control weight. Several environmental pollutants, nutritional components and pharmaceuticals have been suggested to have ‘obesogenic’ properties—the best known are tributyltin, bisphenol and phthalates (Grün & Blumberg, 2009). “While one cannot presently estimate the degree to which obesogen exposure contributes to the observed increases in obesity, the main conclusion to be drawn from research in our laboratory is that obesogens exist and that prenatal obesogen exposure can predispose an exposed individual to become fatter, later in life,” said Bruce Blumberg at the University of California at Irvine, USA, who is credited with coining the term ‘obesogen’. “The existence of such chemicals was not even suspected as recently as seven years ago when we began this research.”

It is clear that diet and exercise are important contributors to the body weight of an individual.
However, weight maintenance is not as simple as balancing a ‘caloric checkbook’, or fewer people would be obese, Blumberg commented. Early nutrition and chemical exposure could alter the metabolic set-point of an individual, making their subsequent fight against weight gain more difficult. “We do not currently know how many chemicals are obesogenic or the entire spectrum of mechanisms through which obesogens act,” Blumberg said. “Our data suggest that prenatal obesogen exposure alters the fate of a type of stem cells in the body to favour the development of fat cells at the expense of other cell types (such as bone). In turn, this is likely to increase one’s weight with time.”

Obesogen exposure in utero and/or during the first stages of postnatal growth could therefore predispose a child to obesity by influencing all aspects of adipose tissue growth, starting from multipotent stem cells and ending with mature adipocytes (Janesick & Blumberg, 2011). “Epigenetics may also allow us to have a clearer view of the role of xenobiotics, such as bisphenol A, where traditional teratogenetic approaches to analysis seem inappropriate,” Gluckman said. “I expect the potential for either direct or indirect epigenetic inheritance will get much focus in human studies over the next few years.”

The impact of the mother’s emotional state during pregnancy on the child’s behavioural and cognitive development is also fertile ground for research. “It has been known from over 50 years of research in animals that stress during pregnancy can have long-term effects on the behavioural and cognitive outcome for the offspring. Over the last ten years many studies, including our own, have shown that the same is true in humans,” said Vivette Glover, a leading expert in the field from Imperial College (London, UK).
“If the mother is stressed or anxious while she is pregnant, her child is more likely to have a range of problems such as symptoms of anxiety or depression, attention deficit hyperactivity disorder (ADHD) or conduct disorder, and to be slower at learning, even after allowing for postnatal influences.” Most children are not affected, but if the anxiety level of the mother is in the top 15% of the general population, the risk of her child having these problems increases from about 5% to 10%, Glover explained.

Focusing on the mechanisms that underlie this, Glover’s team has shown that the cognitive development of the child is slower if the fetus is exposed to higher levels of the stress hormone cortisol in the womb (Bergman et al, 2010). Cortisol in the fetal circulation is a combination of that produced endogenously by the fetus and that derived from the mother through the placenta. Glover’s hypothesis is that the placenta might have a key role as a programming vector: if the mother is stressed and more anxious, the placenta becomes a less effective barrier and allows more cortisol to pass from the mother to the fetus (O’Donnell et al, 2009). “Our most recent research has studied how these prenatal effects can be altered by the later quality of the mothering, and we have found that the effects can be exacerbated if the child is insecure and buffered if the child is securely attached to the mother. So the effects are not all over at birth. There are both prenatal and postnatal effects,” Glover said. “There are large public health implications of all this.
If we, as a society, cared better for the emotional wellbeing of our pregnant women, we would also improve the behavioural, emotional and cognitive outcome for the next generation,” she concluded (Sidebar A).

Sidebar A | Focus on fetal life to help the next generation

“The global burden of death, disability, and loss of human capital as a result of impaired fetal development is huge and affects both developed and developing countries,” concludes a recent World Health Organization technical consultation (WHO, 2006). It advocates moving away from a focus on birth weight to embrace more factors to ensure an optimal environment for the fetus, to maximize its potential for a healthy life. As our knowledge of developmental biology expands, there is progressively greater awareness that events early in human development can have effects in later stages of life, and even inter-generational consequences in terms of non-communicable diseases, such as cardiovascular disease and diabetes.

Calling for a radical change in medical attitudes—which they say are responsible for not giving enough credit to “the concept that environmental factors acting early in life (usually in fetal life) have profound effects on vulnerability to disease later in life”—Peter Gluckman, Mark Hanson and Murray Mitchell have recently proposed several prevention and intervention initiatives that could reduce the burden of chronic disease in the next generation (Gluckman et al, 2010). These include limiting adolescent pregnancy, possibly by delaying the age of first pregnancy until four years after menarche; promoting a healthy diet and lifestyle among women becoming pregnant, to avoid the long-term effects of both excessive and deficient maternal nutrition, smoking, or drug and alcohol abuse; and encouraging breastfeeding for optimal growth, resistance to infection, cardiovascular health and neurocognitive development. Clearly, such actions would face a mix of educational, political and social issues, depending on the geographical or cultural area.

“None of these solutions seems sophisticated, although it may have taken the recent insights into underlying developmental epigenetic mechanisms to emphasize them.
But, when viewed in terms of their potential impact, especially in developing societies and in lower socioeconomic groups in developed countries, it is clear that their importance has been underestimated” (Gluckman et al, 2010).

A more integrated view of the developmental ontogeny of a human from embryo to adult is needed, grounded in an appreciation of the fact that the developmental trajectory of the fetus is influenced by factors such as maternal nutrition, body composition and maternal age (Fig 3). This must not be limited to the offspring of gestational diabetics and obese mothers. “While these are more extreme influences on the fetus and will lead to immediate consequences (blurring the boundary between what is physiological and pathophysiological), I think the most important observations and conceptual advances will emerge from understanding the long-term implications and underpinning mechanisms of relatively normal early development still having plastic consequences,” Gluckman said. “Thus, what seem to be unremarkable pregnancies still have important influences on the destiny of the offspring.” Though this might be easy to say, the regulatory mechanisms that underlie the complex journey of development await further clarification.

Figure 3 | Leonardo da Vinci: Studies of the fetus in the womb, circa 1510–1513. In da Vinci’s words, referring to his treatise on anatomy, for which these drawings were made: “This work must begin with the conception of man, and describe the nature of the womb and how the fetus lives in it, up to what stage it resides there, and in what way it quickens into life and feeds. Also its growth and what interval there is between one stage of growth and another. What it is that forces it out from the body of the mother, and for what reasons it sometimes comes out of the mother’s womb before the due time” (Dunn, 1997).

2.
The French government has ambitious goals to make France a leading nation for synthetic biology research, but it still needs to put its money where its mouth is and provide the field with dedicated funding and other support.

Synthetic biology is one of the most rapidly growing fields in the biological sciences and is attracting an increasing amount of public and private funding. France has also seen a slow but steady development of the field: the establishment of a national network of synthetic biologists in 2005, the first participation of a French team in the International Genetically Engineered Machine competition in 2007, the creation of a Master’s curriculum and an institute dedicated to synthetic and systems biology at the University of Évry-Val-d’Essonne-CNRS-Genopole in 2009–2010, and an increasing number of conferences and debates. However, scientists have driven the field with little dedicated financial support from the government.

Yet the French government has a strong self-perception of its strengths and has set ambitious goals for synthetic biology. The public are told about a “new generation of products, industries and markets” that will derive from synthetic biology, and that research in the field will result in “a substantial jump for biotechnology” and an “industrial revolution”[1,2]. Indeed, France wants to compete with the USA, the UK, Germany and the rest of Europe, and aims “for a world position of second or third”[1]. However, in contrast with the activities of its competitors, the French government has no specific scheme for funding or otherwise supporting synthetic biology[3].
Although we read that “France disposes of strong competences” and has “all the assets needed”[2], one wonders how France will achieve its ambitious goals without dedicated budgets or detailed roadmaps to set up such institutions.

In fact, France has been a straggler: whereas the UK and the USA have published several reports on synthetic biology since 2007, and have set up dedicated governing networks and research institutions, the governance of synthetic biology in France has only recently become an official matter. The National Research and Innovation Strategy (SNRI) only defined synthetic biology as a “priority” challenge in 2009 and created a working group in 2010 to assess the field’s developments, potentialities and challenges; the report was published in 2011[1].

At the same time, the French Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) began a review of the field “to establish a worldwide state of the art and the position of our country in terms of training, research and technology transfer”. Its 2012 report, entitled The Challenges of Synthetic Biology[2], assessed the main ethical, legal, economic and social challenges of the field and made several recommendations for a “controlled” and “transparent” development of synthetic biology. This is not a surprise, given that the development of genetically modified organisms and nuclear power in France has been heavily criticized for a lack of transparency, and that the government prefers to avoid similar controversies in the future. Indeed, the French government seems more cautious today, making efforts to assess potential dangers and public opinion before actually supporting the science itself.

Both reports stress the necessity of a “real” and “transparent” dialogue between science and society and call for “serene […] peaceful and constructive” public discussion.
The proposed strategy has three aims: to establish an observatory, to create a permanent forum for discussion and to broaden the debate to include citizens[4]. An Observatory for Synthetic Biology was set up in January 2012 to collect information, mobilize actors, follow debates, analyse the various positions and organize a public forum. Let us hope that this observatory—unlike so many other structures—will have a tangible and durable influence on policy-making, public opinion and scientific practice.

Many structural and organizational challenges persist: neither the National Agency for Research nor the National Centre for Scientific Research has defined the field as a funding priority, and public–private partnerships are rare in France. Moreover, strict boundaries between academic disciplines impede interdisciplinary work, and synthetic biology is often included in larger research programmes rather than supported as a research field in itself. Although both the SNRI and the OPECST reports make recommendations for future developments—including setting up funding policies and platforms—it is not clear whether these will materialize, or when, where and what size of investments will be made.

France has ambitious goals for synthetic biology, but it remains to be seen whether the government is willing to put ‘meat on the bones’ in terms of financial and institutional support. If not, these goals might come to be seen as unrealistic and be downgraded, or they will be replaced with another vision that sees synthetic biology as something that only needs discussion and deliberation but no further investment. One thing is already certain: the future development of synthetic biology in France is a political issue.

5.
The public view of life-extension technologies is more nuanced than expected, and researchers must engage in discussions if they hope to promote awareness and acceptance.

There is increasing research and commercial interest in the development of novel interventions that might be able to extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for their portfolio of drugs targeting ‘diseases of ageing’. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be “the cure for all that ails” (Hasty, 2009), or that it is an “anti-aging drug [that] could be used today” (Blagosklonny, 2007).

Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether they would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions have given some rather surprising results, contrary to the expectations of many researchers in the field.
They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.

Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge et al, 2009a, b, 2010; Underwood et al, 2009).

In a community survey of public attitudes towards HLE, we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill and around one in ten were undecided.

Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an “irrational public predisposition” to think that increased lifespans will only lead to elongation of infirmity.
He has called this “gerontologiphobia”—a shared feeling among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a “public menace” (Miller, 2002).

We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual’s healthy lifespan. Among the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illness and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).

It would be unwise to label such concerns as irrational when it might be that advocates of HLE have failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public sees the claims that have been made about HLE as ‘too good to be true’.

Results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as “far-fetched”, or feel no responsibility “to respond” (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public is sceptical of their claims.

Scientists are not always clear about the outcomes of their work, biogerontologists included.
Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is “sparse” and urges more research on the topic. In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.

The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused when they discovered that members of the Caloric Restriction Society experienced a loss of libido and a loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many people, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010). If researchers do not discuss these possible effects, a curious public might draw its own conclusions.

Some HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects.
For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is “a non-toxic, well tolerated drug that is suitable for everyday oral administration”, with its major “side-effects” being anti-tumour, bone-protecting and caloric-restriction-mimicking effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.

Aubrey de Grey has called on several occasions for scientists to provide more optimistic timescales for HLE. He claims that public opposition to interventions in ageing is based on “extraordinarily transparently flawed opinions” that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and concerns about extending infirmity, injustice or social harms are simply excuses to justify people's belief that ageing is 'not so bad' (de Grey, 2007). He argues that this “pro-ageing trance” can only be broken by persuading the public that HLE technologies are just around the corner.

Contrary to de Grey's expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010). Juengst et al (2003) have rightly pointed out that any intervention that slows ageing and substantially increases human longevity might generate more social, economic, political, legal, ethical and public-health issues than any other technological advance in biomedicine.
Our survey supports this idea; the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007). When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges that arise from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause “profound changes” and a “firestorm of controversy”. Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause “mayhem” and “absolute pandemonium”. If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as “irrational”, “inane” or “breathtakingly stupid” (de Grey, 2004).

The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and the management of its outcomes (Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work. If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to explain their research to the public better and discuss how these concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies.
The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that “listening to public concerns on research and responding appropriately” is a more effective way of fostering support than arrogant dismissal of those concerns (Anon, 2009).

Brad Partridge, Jayne Lucke, Wayne Hall

6.
Greener M 《EMBO reports》2008,9(11):1067-1069
A consensus definition of life remains elusive.

In July this year, the Phoenix Lander robot—launched by NASA in 2007 as part of the Phoenix mission to Mars—provided the first irrefutable proof that water exists on the Red Planet. “We've seen evidence for this water ice before in observations by the Mars Odyssey orbiter and in disappearing chunks observed by Phoenix […], but this is the first time Martian water has been touched and tasted,” commented lead scientist William Boynton from the University of Arizona, USA (NASA, 2008). The robot's discovery of water in a scooped-up soil sample increases the probability that there is, or was, life on Mars. Meanwhile, the Darwin project, under development by the European Space Agency (ESA; Paris, France; www.esa.int/science/darwin), envisages a flotilla of four or five free-flying spacecraft to search for the chemical signatures of life in 25 to 50 planetary systems. Yet, in the vastness of space, to paraphrase the British astrophysicist Arthur Eddington (1882–1944), life might be not only stranger than we imagine, but also stranger than we can imagine. The limits of our current definitions of life raise the possibility that we would not be able to recognize an extra-terrestrial organism.

Back on Earth, molecular biologists—whether deliberately or not—are empirically tackling the question of what life is. Researchers at the J Craig Venter Institute (Rockville, MD, USA), for example, have synthesized an artificial bacterial genome (Gibson et al, 2008). Others have worked on 'minimal cells' with the aim of synthesizing a 'bioreactor' that contains the minimum of components necessary to be self-sustaining, reproduce and evolve. Some biologists regard these features as the hallmarks of life (Luisi, 2007). However, deciding who is first in the 'race to create life' requires a consensus definition of life itself.
“A definition of the precise boundary between complex chemistry and life will be critical in deciding which group has succeeded in what might be regarded by the public as the world's first theology practical,” commented Jamie Davies, Professor of Experimental Anatomy at the University of Edinburgh, UK.

For most biologists, defining life is a fascinating, fundamental, but largely academic question. It is, however, crucial for exobiologists looking for extra-terrestrial life on Mars, Jupiter's moon Europa, Saturn's moon Titan and on planets outside our solar system. In their search for life, exobiologists base their working hypothesis on the only example to hand: life on Earth. “At the moment, we can only assume that life elsewhere is based on the same principles as on Earth,” said Malcolm Fridlund, Secretary for the Exo-Planet Roadmap Advisory Team at the ESA's European Space Research and Technology Centre (Noordwijk, The Netherlands). “We should, however, always remember that the universe is a peculiar place and try to interpret unexpected results in terms of new physics and chemistry.”

The ESA's Darwin mission will therefore search for life-related gases such as carbon dioxide, water, methane and ozone in the atmospheres of other planets. On Earth, the emergence of life altered the balance of atmospheric gases: living organisms produced all of the Earth's oxygen, which now accounts for one-fifth of the atmosphere. “If all life on Earth was extinguished, the oxygen in our atmosphere would disappear in less than 4 million years, which is a very short time as planets go—the Earth is 4.5 billion years old,” Fridlund said.
He added that organisms present in the early phases of life on Earth produced methane, which altered the atmospheric composition compared with a planet devoid of life.

Although the Darwin project will use a pragmatic and specific definition of life, biologists, philosophers and science-fiction authors have devised numerous other definitions—none of which is entirely satisfactory. Some are based on basic physiological characteristics: a living organism must feed, grow, metabolize, respond to stimuli and reproduce. Others invoke metabolic definitions, under which a living organism has a distinct boundary—such as a membrane—that facilitates interaction with the environment and transfers the raw materials needed to maintain its structure (Wharton, 2002). The minimal cell project, for example, defines cellular life as “the capability to display a concert of three main properties: self-maintenance (metabolism), reproduction and evolution. When these three properties are simultaneously present, we will have a full fledged cellular life” (Luisi, 2007). These concepts regard life as an emergent phenomenon arising from the interaction of non-living chemical components.

Cryptobiosis—hidden life, also known as anabiosis—and bacterial endospores challenge the physiological and metabolic elements of these definitions (Wharton, 2002). When the environment changes, certain organisms are able to undergo cryptobiosis—a state in which their metabolic activity either ceases reversibly or is barely discernible. Cryptobiosis allows the larvae of the African fly Polypedilum vanderplanki to survive desiccation for up to 17 years and temperatures ranging from −270 °C (liquid helium) to 106 °C (Watanabe et al, 2002). It also allows the cysts of the brine shrimp Artemia to survive desiccation, ultraviolet radiation, extremes of temperature (Wharton, 2002) and even toyshops, which sell the cysts as 'sea monkeys'.
Organisms in a cryptobiotic state show characteristics that vary markedly from what we normally consider to be life, although they are certainly not dead. “[C]ryptobiosis is a unique state of biological organization”, commented James Clegg, from the Bodega Marine Laboratory at the University of California (Davis, CA, USA), in an article in 2001 (Clegg, 2001). Bacterial endospores, which are the “hardiest known form of life on Earth” (Nicholson et al, 2000), are able to withstand almost any environment—perhaps even interplanetary space. Microbiologists have isolated endospores of strict thermophiles from cold lake sediments and have revived spores from samples some 100,000 years old (Nicholson et al, 2000).

Another problem with definitions of life is that they can expand beyond biology. The minimal cell project, for example, in common with most modern definitions of life, encompasses the ability to undergo Darwinian evolution (Wharton, 2002). “To be considered alive, the organism needs to be able to undergo extensive genetic modification through natural selection,” said Professor Paul Freemont from Imperial College London, UK, whose research interests encompass synthetic biology. But the virtual 'organisms' in computer simulations such as the Game of Life (www.bitstorm.org/gameoflife) and Tierra (http://life.ou.edu/tierra) also exhibit life-like characteristics, including growth, death and evolution—as do robots and other artificial systems that attempt to mimic life (Guruprasad & Sekar, 2006).
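The rule set behind such simulations is strikingly small, which is what makes their life-like behaviour so provocative. As an illustration only (this is not code from the cited projects), a minimal sketch of one generation of Conway's Game of Life, tracking just the set of live cells:

```python
# Minimal sketch of one generation of Conway's Game of Life.
# A cell is born with exactly 3 live neighbours; a live cell survives
# with 2 or 3. These two rules alone produce the "growth" and "death"
# the article mentions.
from itertools import product

def step(live):
    """Advance a set of (x, y) live cells by one generation."""
    neighbours = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                neighbours[cell] = neighbours.get(cell, 0) + 1
    return {c for c, n in neighbours.items()
            if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

Running `step` twice returns the original pattern, the simplest example of the stable, self-perpetuating dynamics that make such simulations hard to distinguish, definitionally, from life.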
“At the moment, we have some problems differentiating these approaches from something biologists consider [to be] alive,” Fridlund commented.

Both the genetic code and all computer-programming languages are means of communicating large quantities of codified information, which adds another element to a comprehensive definition of life. Guenther Witzany, an Austrian philosopher, has developed a “theory of communicative nature” that, he claims, differentiates the living from non-living matter. “Life is distinguished from non-living matter by language and communication,” Witzany said. According to his theory, RNA and DNA use a 'molecular syntax' to make sense of the genetic code in a manner similar to language. This paragraph, for example, could contain the same words in a random order; it would be meaningless without syntactic and semantic rules. “The RNA/DNA language follows syntactic, semantic and pragmatic rules which are absent in [a] random-like mixture of nucleic acids,” Witzany explained.

Yet successful communication requires both a speaker using the rules and a listener who is aware of and can understand the syntax and semantics. Cells, tissues, organs and organisms communicate with each other to coordinate and organize their activities; in other words, they exchange signals that contain meaning. Noradrenaline binding to a β-adrenergic receptor in the bronchi, for example, communicates a signal that says 'dilate'. “If communication processes are deformed, destroyed or otherwise incorrectly mediated, both coordination and organisation of cellular life is damaged or disturbed, which can lead to disease,” Witzany added.
“Cellular life also interprets abiotic environmental circumstances—such as the availability of nutrients, temperature and so on—to generate appropriate behaviour.”

Nonetheless, even definitions of life that include all the elements mentioned so far might still be incomplete. “One can make a very complex definition that covers life on the Earth, but what if we find life elsewhere and it is different? My opinion, shared by many, is that we don't have a clue of how life arose on Earth, even if there are some hypotheses,” Fridlund said. “This underlies many of our problems defining life. Since we do not have a good minimum definition of life, it is hard or impossible to find out how life arose without observing the process. Nevertheless, I'm an optimist who believes the universe is understandable with some hard work and I think we will understand these issues one day.”

Both synthetic biology and research on organisms that live in extreme conditions allow biologists to explore biological boundaries, which might help them to reach a consensual minimum definition of life and to understand how it arose and evolved. Life is certainly able to flourish in some remarkably hostile environments. Thermus aquaticus, for example, grows optimally in the springs of Yellowstone National Park at temperatures between 75 °C and 80 °C. Another extremophile, Deinococcus radiodurans, has evolved a highly efficient biphasic system to repair radiation-induced DNA breaks (Misra et al, 2006) and, as Fridlund noted, “is remarkably resistant to gamma radiation and even lives in the cooling ponds of nuclear reactors.” In turn, synthetic biology allows for a detailed examination of the elements that define life, including the minimum set of genes required to create a living organism.
Researchers at the J Craig Venter Institute, for example, have synthesized a 582,970-base-pair Mycoplasma genitalium genome containing all the genes of the wild-type bacterium, except one that they disrupted to block pathogenicity and allow for selection. 'Watermarks' at intergenic sites that tolerate transposon insertions identify the synthetic genome, which would otherwise be indistinguishable from the wild type (Gibson et al, 2008). Yet, as Pier Luigi Luisi from the University of Rome, Italy, remarked, even M. genitalium is relatively complex. “The question is whether such complexity is necessary for cellular life, or whether, instead, cellular life could, in principle, also be possible with a much lower number of molecular components”, he said. After all, life probably did not start with cells that already contained thousands of genes (Luisi, 2007).

To investigate further the minimum number of genes required for life, researchers are using minimal cell models: synthetic genomes that can be enclosed in liposomes, which themselves show some life-like characteristics. Certain lipid vesicles are able to grow, divide and grow again, and can include polymerase enzymes to synthesize RNA from external substrates, as well as functional translation apparatuses, including ribosomes (Deamer, 2005).

However, the requirement that an organism be subject to natural selection to be considered alive could prove to be a major hurdle for current attempts to create life. As Freemont commented: “Synthetic biologists could include the components that go into a cell and create an organism [that is] indistinguishable from one that evolved naturally and that can replicate […] We are beginning to get to grips with what makes the cell work.
Including an element that undergoes natural selection is proving more intractable.”

John Dupré, Professor of Philosophy of Science and Director of the Economic and Social Research Council (ESRC) Centre for Genomics in Society at the University of Exeter, UK, commented that synthetic biologists still approach the construction of a minimal organism with certain preconceptions. “All synthetic biology research assumes certain things about life and what it is, and any claims to have 'confirmed' certain intuitions—such as life is not a vital principle—aren't really adding empirical evidence for those intuitions. Anyone with the opposite intuition may simply refuse to admit that the objects in question are living,” he said. “To the extent that synthetic biology is able to draw a clear line between life and non-life, this is only possible in relation to defining concepts brought to the research. For example, synthetic biologists may be able to determine the number of genes required for minimal function. Nevertheless, 'what counts as life' is unaffected by minimal genomics.”

Partly because of these preconceptions, Dan Nicholson, a former molecular biologist now working at the ESRC Centre, commented that synthetic biology adds little to the understanding of life already gained from molecular biology and biochemistry. Nevertheless, he said, synthetic biology might allow us to go boldly into realms of biological possibility where evolution has not gone before. An engineered synthetic organism could, for example, express novel amino acids, proteins, nucleic acids or vesicular forms. A synthetic organism could use pyranosyl-RNA, which produces a stronger and more selective pairing system than the naturally occurring furanosyl-RNA (Bolli et al, 1997).
Furthermore, the synthesis of proteins that do not exist in nature—so-called never-born proteins—could help scientists to understand why evolutionary pressures selected only certain structures. As Luisi remarked, the ratio between the number of theoretically possible proteins containing 100 amino acids and the real number present in nature is close to the ratio between the space of the universe and the space of a single hydrogen atom, or the ratio between all the sand in the Sahara Desert and a single grain. Exploring never-born proteins could therefore allow synthetic biologists to determine whether particular physical, structural, catalytic, thermodynamic and other properties maximized the evolutionary fitness of natural proteins, or whether the current protein repertoire is predominantly the result of chance (Luisi, 2007).

“Synthetic biology also could conceivably help overcome the 'n = 1 problem'—namely, that we base biological theorising on terrestrial life only,” Nicholson said. “In this way, synthetic biology could contribute to the development of a more general, broader understanding of what life is and how it might be defined.”

No matter the uncertainties, researchers will continue their attempts to create life in the test tube—it is, after all, one of the greatest scientific challenges. Whether or not they succeed will depend partly on the definition of life that they use, though in any case the research should yield numerous insights that benefit biologists generally. “The process of creating a living system from chemical components will undoubtedly offer many rich insights into biology,” Davies concluded. “However, the definition will, I fear, reflect politics more than biology. Any definition will, therefore, be subject to a lot of inter-lab political pressure.
Definitions are also important for bioethical legislation and, as a result, reflect larger politics more than biology. In the final analysis, as with all science, deep understanding is more important than labelling with words.”
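The sequence-space arithmetic behind Luisi's "never-born proteins" comparison is easy to verify. A rough sketch (the count of distinct natural proteins is an assumed order of magnitude chosen for illustration, not a figure from the article):

```python
# Back-of-the-envelope check of protein sequence space: with 20
# proteinogenic amino acids, a 100-residue chain has 20**100 possible
# sequences.
from math import log10

sequences = 20 ** 100
print(f"possible 100-residue sequences: ~10^{log10(sequences):.0f}")  # ~10^130

# Assumed, illustrative order of magnitude for distinct natural proteins.
natural_estimate = 10 ** 13
print(f"fraction ever realised: ~10^-{log10(sequences / natural_estimate):.0f}")
```

Even a generous guess at nature's repertoire leaves roughly 10^117 unexplored sequences for every one that exists, which is the scale of the universe-to-hydrogen-atom analogy in the text.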

7.
8.
L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the Endless Frontier, and it has become the underlying rationale for public support and funding of science.

However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be distributed most efficiently and effectively among researchers and research projects? This challenge—to identify promising research—spawned the development of measures to assess both the quality of scientific research itself and the societal impact of research. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research.
The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so, harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex's Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.

The problem actually begins with defining the 'societal impact of research'. A series of different concepts has been introduced: 'third-stream activities' [4], 'societal benefits' or 'societal quality' [5], 'usefulness' [6], 'public values' [7], 'knowledge transfer' [8] and 'societal relevance' [9, 10].
Yet each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas. In this context, 'societal benefits' refers to the contribution of research to the social capital of a nation: stimulating new approaches to social issues, or informing public debate and policy-making. 'Cultural benefits' are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. 'Environmental benefits' add to the natural capital of a nation by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, 'economic benefits' increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].

Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14].
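For concreteness, the h index mentioned above is one of the simplest of these bibliometric values: the largest h such that an author has h papers with at least h citations each. A minimal sketch (the citation counts are invented example data):

```python
# Illustrative h-index calculation: the largest h such that the author
# has h papers, each cited at least h times.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: a single highly cited paper cannot raise h
```

No comparably simple, widely accepted formula exists for societal impact, which is precisely the gap the article describes.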
For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.

A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem: it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse, complex and contingent, and it is not clear how much should be attributed to research and how much to other inputs. The third is the internationality problem, which arises from the international nature of R&D and innovation and makes attribution virtually impossible. Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research yielding only short-term benefits, ignoring potential long-term impact.

There are four further problems. First, it is hard to find experts to assess societal impact on the basis of peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17].
Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution's specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18].

Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in its early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20].
The recommendation from this consultation is that impact should be measured in a quantifiable way, with expert panels reviewing narrative evidence in case studies supported by appropriate indicators [19, 21].

Many of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact. The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the 'state of the art' [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19].
'Mode 1' describes research governed by the academic interests of a specific community, whereas 'Mode 2' is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is conducted in the context of application [19]. The new REF will also entail changes in budget allocations: the societal impact dimension will account for 20% of the evaluation of a research unit for the purpose of allocations [19]. The final REF guidance contains lists of examples for different types of societal impact [24].

Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].

Yet the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science, whereby scientists receive credit—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact.
"The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously 'under the radar', that is, researchers have been involved in activities they realised now can be characterized as productive interactions" [26] between them and societal stakeholders. It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

…research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments

The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, "the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society" [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19,28,29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance.
Rymer has noted that, "just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made" [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support for basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.

9.
Martinson BC. EMBO reports (2011) 12(8): 758–762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic 'reproduction rates' that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced, or concerns about unintended consequences that may result from such overproduction.
Yet, recognizing that resources are finite, it seems reasonable to ask at what level competition for resources is productive, and at what level it becomes counter-productive.

Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.

The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA, as it is currently configured, supports a feedback system of institutional incentives that generates excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training.
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.

The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to 'play it safe' and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are probably able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding.
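The precision argument can be sketched with a toy simulation (purely illustrative; the model, parameters and numbers below are assumptions, not taken from the article): if each reviewer score is an application's true merit plus random noise, then which applications clear a tight pay line within the top tier is largely a matter of chance, even though review still separates strong applications from weak ones.

```python
import random

def funded_fraction_of_best(n_apps=1000, noise_sd=1.0, pay_line=0.10, seed=42):
    """Simulate noisy peer review: each application has a true merit score,
    and reviewers observe merit plus Gaussian noise. Return the fraction of
    the truly best applications that actually end up above the pay line."""
    rng = random.Random(seed)
    true_merit = [rng.gauss(0.0, 1.0) for _ in range(n_apps)]
    observed = [m + rng.gauss(0.0, noise_sd) for m in true_merit]
    k = int(n_apps * pay_line)
    # Indices of the truly best k applications vs. the k that get funded.
    truly_best = set(sorted(range(n_apps), key=lambda i: true_merit[i])[-k:])
    funded = set(sorted(range(n_apps), key=lambda i: observed[i])[-k:])
    return len(truly_best & funded) / k

# With no reviewer noise, funding tracks merit exactly.
print(funded_fraction_of_best(noise_sd=0.0))  # 1.0
# With noise comparable to the spread in true merit, a 10% pay line
# funds only roughly half of the truly best applications.
print(funded_fraction_of_best(noise_sd=1.0))
```

Under these assumptions, widening the pay line improves the match between funding and merit, while tightening it towards 5% pushes the outcome closer to the random selection the campaigners describe.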
In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.

My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, "missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures" (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.

In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: "there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics." Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.

Early last year, Francis Collins, Director of the NIH, published a Policy Forum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular.
He notes the increasing stress in the biomedical research community after the end of the NIH "budget doubling" between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that "…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity."

Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows.
He observes that "only a select few will go on to become independent research scientists in academia", and argues that "assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor" (Alberts, 2009).

His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research "forces them to avoid risk-taking and innovation". The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values "research projects that are almost certain to 'work'". He continues, "the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health."

Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce.

Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes.
They would also ignore the fact that universities are generally not confronted with the externalities resulting from the overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH 'extramural' funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on two interconnected assumptions: that, as one of the primary 'outputs' or 'products' of the university, more doctorally trained individuals are always better than fewer; and that, because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research.

However, this model has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007).
Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).

In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). He noted the reliance of AMCs on the relatively low-cost but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. "These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants." Pointing to "more than 10 years of severe supply/demand imbalance in NIH funds", Korn concluded that, "considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society's investment in biomedical research and clinical care was continuously and sharply expanding." From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J.
de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963).

In May 2009, echoing some of Korn's observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a "de-coupling" of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that "although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional 'birth control'" (Mervis, 2009).

Although the issue of what I will call the 'academic birth rate' is the central concern of this analysis, the 'academic end-of-life' also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: "if I and other old birds continue to land the grants, the [young scientists] are not going to get them." He worries that the budget will not be able to support "the 100 people I've trained […] to replace me" (Kaiser, 2008).
While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective. The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.

Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d'être.

The production of knowledge in science, particularly of the 'revolutionary' variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that "revolutionary science is a high risk and long-term endeavour which usually fails" (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task.
In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).

Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the "New Innovator Award", the "Pioneer Award" and the "Transformational R01 Awards". The proportion of NIH funding dedicated to these awards, however, amounts to "only 0.27% of the NIH budget" (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science. Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new 'innovators' reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms.

Our studies on research integrity have been mostly oriented towards understanding how the environments within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010).
My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness in terms of scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology of steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of 'normal' science? Would this not reduce the effectiveness of the institution of biomedical research?
I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions, in which their careers and livelihoods are put at risk by pursuing truly revolutionary research, is one way to insure against it.

12.
Samuel Caddick. EMBO reports (2008) 9(12): 1174–1176

13.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem, and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis, and several dozen died, after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9]. Developing countries are particularly susceptible to substandard and fake medicine.
Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). "People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system," explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected, combined with relatively low penalties, has turned falsifying medicine into the "perfect crime" [2].

Figure 1: Women sell smuggled, counterfeit medicine at the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the past two years in Ivory Coast. Issouf Sanogo/AFP Photo/Getty Images.

There are two main categories of illegitimate drugs. 'Substandard' medicines might result from poor-quality ingredients, production errors and incorrect storage. 'Falsified' medicine is made with clear criminal intent. It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses it into pills [10]. Whilst falsified medicines do not typically contain any active ingredients, substandard medicine might contain subtherapeutic amounts.
This is particularly problematic when it comes to anti-infective drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13]; increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].

Even if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere, where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are interested in good-quality medicines, the different parties seem to have difficulties coming to terms with how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting the quality of medicine [14,16].

The concern that intellectual property (IP) interests threaten public health dates back to the 'Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement' of the World Trade Organization (WTO), adopted in 1994 to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine.
Although the agreement includes flexibilities, such as the possibility for governments to grant compulsory licenses to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].

In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO's Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multinational pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. "We're left with decisions being taken based on patents and trademarks that should be taken based on health," commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA. "The health community is shooting themselves in the foot."

The conflation of health-care and IP issues is also reflected in the unclear use of the term 'counterfeit' [2,14]. "Since the 1990s the World Health Organization (WHO) has used the term 'counterfeit' in the sense we now use 'falsified'," explained Hogerzeil.
“The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit' got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic drug, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit'—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.

The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn't make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to act is tragically ironic, as this stance hampers the growth of its own generic companies, such as Ranbaxy, Cipla and Piramal. “I certainly don't believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generics industry is a common target for fakers, because its products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company's reputation and have a negative impact on its revenues when customers stop buying the product.

The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine.
It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multi-national drug companies and the possibility that issues of quality of medicines were being conflated with attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010; for example, it no longer hosts IMPACT's secretariat at its headquarters in Geneva [2].

‘Substandard' medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified' medicine is made with clear criminal intent

In 2010, the WHO's member states established a working group to further investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper's authors demand more action and propose a binding legal framework: a treaty. “Until we have stronger public health law, I don't think that we are going to resolve this problem,” said Bate, who is one of the authors of the paper.

Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on understanding the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice, such as a “voluntary soft law”, that countries can sign to express their will to do better.
“At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic,” commented Hogerzeil, who is also on the IOM committee. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: “The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don't start negotiating one.”

Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempt to safeguard medicines needs to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act. To counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with the criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies to do better and by improving the quality control carried out by drug regulatory authorities.

Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multi-national companies benefit from economies of scale to cope with these demands, but smaller companies often struggle and compromise on quality [2,21]. “India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small,” Hogerzeil explained. “And only by being strict can you force them to combine and to become bigger industries that can afford good-quality assurance systems.” Clamping down on drug quality would therefore lead to a consolidation of the industry, which is an essential step. “If you look at Europe and the US, there were hundreds of drug companies—now there are dozens.
And if you look at the situation in India and China today, there are thousands and that will have to come down to dozens as well,” Bate explained.

In addition to consolidating the market by applying stricter rules, the IOM has also suggested measures to support companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards. Another suggestion is to harmonize the market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.

Once the medicine leaves the manufacturer, controlling distribution systems becomes another major challenge in combating falsified and substandard medicine. Global drug supply chains have grown increasingly complicated: drugs cross borders, are sold back and forth between wholesalers and distributors, and are often repackaged. Still, there is a major difference between developed and developing countries. In the former, relatively few companies dominate the market, whereas in poorer nations the distribution system is often fragmented and uncontrolled, with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improving drug quality.
“And we can start in the US,” Hogerzeil commented. Distribution could be improved at different levels, starting with the import of medicine. “There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in,” Hogerzeil explained. The next level would be better control of the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. “Everybody dealing with medicine has to be licensed,” Hogerzeil said. “And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent falsified medicines from entering the legal supply chain.” The last level would be a track-and-trace system to identify authentic drugs [2]. Every single package of medicine should be identifiable through an individual marker, such as a 3D bar code. Once it is sold, it is ticked off in a central database, so that the marker cannot be reused.

According to Hogerzeil, equivalent measures at these different levels should be established in every country. “I don't believe in double standards,” he said. “Don't say to Uganda: ‘you can't do that'. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation.”

Nigeria, for instance, has demonstrated that with enough political will, it is possible to reduce the proliferation of falsified and substandard medicine.
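The track-and-trace idea described above is, at its core, a simple bookkeeping rule: each package carries a unique marker, and a central registry retires the marker at the point of sale so it cannot be reused on a falsified package. The following is a minimal illustrative sketch of that rule; the `PackageRegistry` class and its method names are hypothetical and do not correspond to any real serialization system or API:

```python
# Illustrative sketch of a track-and-trace registry (hypothetical names,
# not any real system): a unique identifier per package, retired on sale.

class PackageRegistry:
    def __init__(self):
        # identifier -> "registered" or "dispensed"
        self._status = {}

    def register(self, identifier):
        """Manufacturer records a newly issued package identifier."""
        if identifier in self._status:
            raise ValueError("identifier already issued")
        self._status[identifier] = "registered"

    def dispense(self, identifier):
        """Pharmacy verifies and retires an identifier at the point of sale.

        Returns True only if the marker is known and has never been used;
        an unknown or reused marker flags the package as suspect.
        """
        if self._status.get(identifier) != "registered":
            return False
        self._status[identifier] = "dispensed"
        return True

registry = PackageRegistry()
registry.register("PKG-0001")
print(registry.dispense("PKG-0001"))  # first sale verifies: True
print(registry.dispense("PKG-0001"))  # reused marker rejected: False
print(registry.dispense("PKG-9999"))  # unknown marker rejected: False
```

In practice such a system would need a tamper-resistant central database and standardized machine-readable markers, but the one-time-use check shown here is the essential safeguard against marker reuse.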
Nigeria had been a major source of falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili's successor, is committed to continuing her work [10]. Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized event, the former head of China's State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China's fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see the quality of medicine as a priority. But they should, and affluent countries should help: not only because health is a human right, but also for economic reasons. A great deal of time and money is invested in testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients.
Falsified and substandard medicines are a financial burden on health systems, and the emergence of drug-resistant pathogens might render invaluable medications useless. Investing in the safety of medicine is therefore both a humane and an economic imperative.

Zhang JY (2011) EMBO reports 12(4): 302–306
How can grass-roots movements evolve into a national research strategy? The bottom-up emergence of synthetic biology in China could give some pointers.

Given its potential to aid developments in renewable energy, biosensors, sustainable chemical industries, microbial drug factories and biomedical devices, synthetic biology has enormous implications for economic development. Many countries are therefore implementing strategies to promote progress in this field. Most notably, the USA is considered to be the leader in exploring the industrial potential of synthetic biology (Rodemeyer, 2009). Synthetic biology in Europe has benefited from several cross-border studies, such as the ‘New and Emerging Science and Technology' programme (NEST, 2005) and the ‘Towards a European Strategy for Synthetic Biology' project (TESSY; Gaisser et al, 2008). Yet little is known in the West about Asia's role in this ‘new industrial revolution' (Kitney, 2009). In particular, China is investing heavily in scientific research for future developments, and is therefore likely to have an important role in the development of synthetic biology.

In 2010, as part of a study of the international governance of synthetic biology, the author visited four leading research teams in three Chinese cities (Beijing, Tianjin and Hefei). The main aims of the visits were to understand perspectives in China on synthetic biology, to identify core themes among its scientific community, and to address questions such as ‘how did synthetic biology emerge in China?', ‘what are the current funding conditions?', ‘how is synthetic biology generally perceived?' and ‘how is it regulated?'.
Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework; one that is more dynamic and comprises more options than existing national or international research and development (R&D) strategies. Such findings might contribute to Western knowledge of Chinese R&D, but could also expose European and US policy-makers to alternative forms and patterns of research governance that have emerged at a grass-roots level.

A dominant narrative among the scientists interviewed is the prospect of a ‘big-question' strategy to promote synthetic-biology research in China. This framework is at a consultation stage and key questions are still being discussed. Yet fieldwork indicates that the process of developing a framework is at least as important to research governance as the big question it might eventually address. According to several interviewees, this approach aims to organize dispersed national R&D resources into one grand project that is essential to the technical development of the field, preferably focusing on an industry-related theme that is economically appealing to the Chinese public.

Chinese scientists have a pragmatic vision for research; thinking of science in terms of its ‘instrumentality' has long been regarded as characteristic of modern China (Schneider, 2003). However, for a country in which the scientific community is sometimes described as an “uncoordinated ‘bunch of loose ends'” (Cyranoski, 2001) “with limited synergies between them” (OECD, 2007), the envisaged big-question approach implies profound structural and organizational changes. Structurally, the approach proposes that the foundational (industry-related) research questions branch out into various streams of supporting research and more specific short-term research topics.
Within such a framework, a variety of Chinese universities and research institutions can be recruited and coordinated at different levels towards solving the big question.

It is important to note that although this big-question strategy is at a consultation stage and supervised by the Ministry of Science and Technology (MOST), the idea itself emerged in a bottom-up manner. One academic who is involved in the ongoing ministerial consultation recounted that, “It [the big-question approach] was initially conversations among us scientists over the past couple of years. We saw this as an alternative way to keep up with international development and possibly lead to some scientific breakthrough. But we are happy to see that the Ministry is excited and wants to support such an idea as well.” As many technicalities remain to be addressed, there is no clear time-frame yet for when the project will be launched. Yet this nationwide cooperation among scientists, with an emerging commitment from MOST, seems to be largely welcomed by researchers. Some interviewees described the excitement it generated among the Chinese scientific community as comparable with the establishment of “a new ‘moon-landing' project”.

Of greater significance than the time-frame is the development process that led to this proposition. On the one hand, the emergence of synthetic biology in China has a cosmopolitan feel: cross-border initiatives such as international student competitions, transnational funding opportunities and social debates in Western countries—for instance, about biosafety—all have an important role. On the other hand, the development of synthetic biology in China has some national particularities. Factors including geographical proximity, language, collegial familiarity and shared interests in economic development have all attracted Chinese scientists to the national strategy, to keep up with their international peers.
Thus, to some extent, the development of synthetic biology in China is an advance not only in the material synthesis of the ‘cosmos'—the physical world—but also in the social synthesis of aligning national R&D resources and actors with the global scientific community.

To comprehend how Chinese scientists have used national particularities and global research trends as mutually constructive influences, and to identify the implications of this for governance, this essay examines the emergence of synthetic biology in China from three perspectives: its initial activities, the evolution of funding opportunities, and the ongoing debates about research governance.

China's involvement in synthetic biology was largely promoted by the participation of students in the International Genetically Engineered Machine (iGEM) competition, an international contest for undergraduates initiated by the Massachusetts Institute of Technology (MIT) in the USA. Before the iGEM training workshop hosted by Tianjin University in the spring of 2007, there were no research records and only two literature reviews on synthetic biology in Chinese scientific databases (Zhao & Wang, 2007). According to Chunting Zhang of Tianjin University—a leading figure in the promotion of synthetic biology in China—it was during these workshops that Chinese research institutions joined their efforts for the first time (Zhang, 2008). From the outset, the organization of the workshop had a national focus while engaging with international networks. Synthetic biologists, including Drew Endy from MIT and Christina Smolke from Stanford University, USA, were invited.
Later that year, another training camp, designed for iGEM tutors, was organized in Tianjin and included delegates from Australia and Japan (Zhang, 2008).

Through years of organizing iGEM-related conferences and workshops, Chinese universities have strengthened their presence at this international competition. In 2007, four teams from China participated; during the 2010 competition, 11 teams from nine universities in six provinces and municipalities took part. Meanwhile, recruiting, training and supervising iGEM teams has become an important institutional programme at an increasing number of universities.

It might be easy to interpret the enthusiasm for iGEM as a passion for winning gold medals, as is conventionally the case with other international scientific competitions, and this could be one motive for participating. Yet training for iGEM has grown beyond winning student awards and has become a key component of exchanges between Chinese researchers and the international community (Ding, 2010). Many of the Chinese scientists interviewed recounted how their initial involvement in synthetic biology overlapped with their tutoring of iGEM teams. One associate professor at Tianjin University, who wrote the first undergraduate textbook on synthetic biology in China, half-jokingly said, “I mainly learnt [synthetic biology] through tutoring new iGEM teams every year.”

Participation in such contests has not only helped to popularize synthetic biology in China, but has also influenced local research culture. One example of this is that the iGEM competition uses standard biological parts (BioBricks), and new BioBricks are submitted to an open registry for future sharing. A corresponding celebration of open source can also be traced within the Chinese synthetic-biology community.
In contrast to the conventional perception that the Chinese scientific sector consists of a “very large number of ‘innovative islands'” (OECD, 2007; Zhang, 2010), communication between domestic teams is quite active. In addition to the formally organized national training camps and conferences, students themselves organize a nationwide, student-only workshop at which to informally test their ideas.

More interestingly, when the author asked one team whether there were any plans to set up a ‘national bank' to host designs from Chinese iGEM teams, for the benefit of domestic teams, both the tutor and the team members thought the proposal a bit “strange”. The team leader responded, “But why? There is no need. With BioBricks, we can get any parts we want quite easily. Plus, it directly connects us with all the data produced by iGEM teams around the world, let alone in China. A national bank would just be a small-scale duplicate.”

From the beginning, interest in the development of synthetic biology in China has been focused on collective efforts within and across national borders. In contrast to conventional critiques of the Chinese scientific community's “inclination toward competition and secrecy, rather than openness” (Solo & Pressberg, 2007; OECD, 2007; Zhang, 2010), a new outlook seems to be emerging from the participation of Chinese universities in the iGEM contest. Of course, that is not to say that the BioBricks model is without problems (Rai & Boyle, 2007), or to exclude inputs from other institutional channels. Yet continuous grass-roots exchanges, such as the undergraduate-level competition, might be as instrumental as formal protocols in shaping research culture.
The indifference of Chinese scientists to a ‘national bank' seems to suggest that the distinction between the ‘national' and ‘international' scientific communities has become blurred, if not insignificant. Moreover, frequent cross-institutional exchanges and the domestic organization of iGEM workshops seem to have nurtured the development of a national synthetic-biology community in China, in which grass-roots scientists are comfortable relying on institutions with a cosmopolitan character—such as the BioBricks Foundation—to facilitate local research. To some extent, one could argue that in the eyes of Chinese scientists, national and international resources form one accessible global pool. This grass-roots interest in combining local and global advantages is not limited to student training and education; it is also exhibited in evolving funding and regulatory debates.

In the development of research funding for synthetic biology, a similar bottom-up consolidation of national and global resources can be observed. As noted earlier, synthetic-biology research in China is in its infancy. A popular view is that China has the potential to lead this field, as it has strong support from related disciplines. In terms of genome sequencing, DNA synthesis, genetic engineering, systems biology and bioinformatics, China is “almost at the same level as developed countries” (Pan, 2008), but synthetic-biology research has only been carried out “sporadically” (Pan, 2008; Huang, 2009). There are few nationally funded projects and there is no discernible industrial involvement (Yang, 2010). Most existing synthetic-biology research is led by universities or institutions affiliated with the Chinese Academy of Sciences (CAS). As one CAS academic commented, “there are many Chinese scientists who are keen on conducting synthetic-biology research.
But no substantial research has been launched, nor has long-term investment been committed.”

The initial undertaking of academic research on synthetic biology in China has therefore benefited from transnational initiatives. The first synthetic-biology project in China, launched in October 2006, was part of the ‘Programmable Bacteria Catalyzing Research' (PROBACTYS) project, funded by the Sixth Framework Programme of the European Union (Yang, 2010). A year later, another cross-border collaborative effort led to the establishment of the first synthetic-biology centre in China: the Edinburgh University–Tianjin University Joint Research Centre for Systems Biology and Synthetic Biology (Zhang, 2008).

There is also a comparable commitment to national research coordination. A year after China's first participation in iGEM, the 2008 Xiangshan conference focused on domestic progress. From 2007 to 2009, only five projects in China received national funding, all of which came from the National Natural Science Foundation of China (NSFC). This funding totalled ¥1,330,000 (approximately £133,000; www.nsfc.org), which is low in comparison with the £891,000 awarded in the UK for seven Networks in Synthetic Biology in 2007 alone (www.bbsrc.ac.uk).

One of the primary challenges in obtaining funding identified by the interviewees is that, as an emerging science, synthetic biology is not yet appreciated by Chinese funding agencies. After the Xiangshan conference, the CAS invited scientists to a series of conferences in late 2009. According to the interviewees, one of the main outcomes was the founding of a ‘China Synthetic Biology Coordination Group', an informal association of around 30 conference delegates from various research institutions. This group formulated a ‘regulatory suggestion' that it submitted to MOST, stating the necessity and implications of supporting synthetic-biology research.
In addition, leading scientists such as Chunting Zhang and Huanming Yang—President of the Beijing Genomics Institute (BGI), who co-chaired the Beijing Institutes of Life Science (BILS) conferences—have been active in communicating with government institutions. The initial results of this can be seen in the MOST 2010 Application Guidelines for the National Basic Research Program, in which synthetic biology was included for the first time among the ‘key supporting areas' (MOST, 2010). Meanwhile, in 2010, the NSFC allocated ¥1,500,000 (approximately £150,000) to synthetic-biology research, which is more than the total funding the area had received in the previous three years.

The search for funding further demonstrates the dynamics between national and transnational resources. Chinese R&D initiatives have to deal with the fact that scientific venture capital and non-governmental research charities are underdeveloped in China. In contrast to the EU or the USA, government institutions in China, such as the NSFC and MOST, are the main and sometimes only domestic sources of funding. Yet transnational funding opportunities facilitate the development of synthetic biology by alleviating local structural and financial constraints, and further integrate the Chinese scientific community into international research. This is not a linear ‘going-global' process; it remains important for Chinese scientists to secure and promote national and regional support. In addition, this alignment of national funding schemes with global research progress resembles the iGEM experience, in that it is being initiated through informal bottom-up associations between scientists, rather than through top-down institutional channels.

As more institutions have joined iGEM training camps and participated in related conferences, a shared interest among the Chinese scientific community in developing synthetic biology has become visible.
In late 2009, at the conference that founded the informal ‘coordination group', the proposition of integrating national expertise through a big-question approach emerged. According to one professor in Beijing—who was a key participant in the discussion at the time—this proposition of a nationwide synergy was not so much about ‘national pride' or an aim to develop a ‘Chinese' synthetic biology as about research practicality. She explained, “synthetic biology is at the convergence of many disciplines: computer modelling, nano-technology, bioengineering, genomic research, etc. Individual researchers like me can only operate on part of the production chain. But I myself would like to see where my findings would fit in a bigger picture as well. It just makes sense for a country the size of China to set up some collective and coordinated framework so as to seek scientific breakthrough.”

From the first participation in the iGEM contest to the later exploration of funding opportunities and collective research plans, scientists have been keen to invite and incorporate domestic and international resources to keep up with global research. Yet there are still regulatory challenges to be met.

The reputation of “the ‘wild East' of biology” (Dennis, 2002) is associated with China's previous inattention to ethical concerns about the life sciences, especially in embryonic-stem-cell research. Similarly, synthetic biology creates few social concerns in China. Public debate is minimal and most media coverage has been positive. Synthetic biology is depicted as “a core in the fourth wave of scientific development” (Pan, 2008) or “another scientific revolution” (Huang, 2009).
Whilst recognizing its possible risks, mainstream media believe that “more people would be attracted to doing good while making a profit than doing evil” (Fang & He, 2010). In addition, biosecurity and biosafety training in China is at an early stage, with few mandatory courses for students (Barr & Zhang, 2010). The four leading synthetic-biology teams I visited regarded the general biosafety regulations that apply to microbiology laboratories as sufficient for synthetic biology. In short, with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual' manner.

Yet fieldwork suggests that, in contrast to this previous insensitivity to global ethical concerns, the synthetic-biology community in China has taken a more proactive approach to engaging with international debates. It is important to note that there are still no synthetic-biology-specific administrative guidelines or professional codes of conduct in China. However, Chinese stakeholders are participating in building a ‘mutual inclusiveness' between global and domestic discussions.

One of the most recent examples of this is a national conference on the ethical and biosafety implications of synthetic biology, jointly hosted by the China Association for Science and Technology, the Chinese Society of Biotechnology and the Beijing Institutes of Life Science, CAS, in Suzhou in June 2010. The discussion was open to the mainstream media. The debate was not simply a recapitulation of Western worries, such as playing god, potential dual use or ecological containment; it also focused on the particular concerns of developing countries about how to avoid further widening the developmental gap with advanced countries (Liu, 2010).

In addition to general discussions, there are also sustained transnational communications.
For example, one of the first three projects funded by the NSFC was a three-year collaboration on biosafety and risk-assessment frameworks between the Institute of Botany at CAS and the Austrian Organization for International Dialogue and Conflict Management (IDC).

Chinese scientists are also keen to increase their involvement in the formulation of international regulations. The CAS and the Chinese Academy of Engineering are engaged with their peer institutions in the UK and the USA to “design more robust frameworks for oversight, intellectual property and international cooperation” (Royal Society, 2009). It is too early to tell what influence China will achieve in this field. Yet, the changing image of the country from an unconcerned wild East to a partner in lively discussions signals a new dynamic in the global development of synthetic biology.

Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China

From self-organized participation in iGEM to bottom-up funding and governance initiatives, two features are repeatedly exhibited in the emergence of synthetic biology in China: global resources and international perspectives complement national interests; and the national and cosmopolitan research strengths are mostly instigated at the grass-roots level. During the process of introducing, developing and reflecting on synthetic biology, many formal or informal, provisional or long-term alliances have been established from the bottom up. Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China.

However, the inputs of different social actors have not led to disintegration of the field into an array of individualized pursuits, but have transformed it into collective synergies, or the big-question approach. 
Underlying the diverse efforts of Chinese scientists is a sense of ‘inclusiveness’, or the idea of bringing together previously detached research expertise. Thus, the big-question strategy cannot be interpreted as just another nationally organized agenda in response to global scientific advancements. Instead, it represents a more intricate development path corresponding to how contemporary research evolves on the ground.

In comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage

In comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage. Government input—such as the potential stewardship of the MOST in directing a big-question approach or long-term funding—remains important; the scientists who were interviewed expend a great deal of effort to attract governmental participation. Yet, China’s experience highlights that the key to comprehending regional scientific capacity lies not so much in what the government can do, but rather in what is taking place in laboratories. It is important to remember that Chinese iGEM victories, collaborative synthetic-biology projects and ethical discussions all took place before the government became involved. Thus, to appreciate fully the dynamics of an emerging science, it might be necessary to focus on what is formulated from the bottom up.

The experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary research

The experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary research. More specifically, it is a result of the commitment of Chinese scientists to incorporating national and international resources, actors and social concerns. 
For practical reasons, the national organization of research, such as through the big-question approach, might still have an important role. However, synthetic biology might not only be a mosaic of national agendas, but might also be shaped by transnational activities and scientific resources. What Chinese scientists will collectively achieve remains to be seen. Yet, the emergence of synthetic biology in China might be indicative of a new paradigm for how research practices can be introduced, normalized and regulated.

16.
Wolinsky H 《EMBO reports》2010,11(11):830-833
Sympatric speciation—the rise of new species in the absence of geographical barriers—remains a puzzle for evolutionary biologists. Though the evidence for sympatric speciation itself is mounting, an underlying genetic explanation remains elusive.

For centuries, the greatest puzzle in biology was how to account for the sheer variety of life. In his 1859 landmark book, On the Origin of Species, Charles Darwin (1809–1882) finally supplied an answer: his grand theory of evolution explained how the process of natural selection, acting on the substrate of genetic mutations, could gradually produce new organisms that are better adapted to their environment. It is easy to see how adaptation to a given environment can differentiate organisms that are geographically separated; different environmental conditions exert different selective pressures on organisms and, over time, the selection of mutations creates different species—a process that is known as allopatric speciation.

It is more difficult to explain how new and different species can arise within the same environment. Although Darwin never used the term sympatric speciation for this process, he did describe the formation of new species in the absence of geographical separation. “I can bring a considerable catalogue of facts,” he argued, “showing that within the same area, varieties of the same animal can long remain distinct, from haunting different stations, from breeding at slightly different seasons, or from varieties of the same kind preferring to pair together” (Darwin, 1859).

It is more difficult to explain how new and different species can arise within the same environment

In the 1920s and 1930s, however, allopatric speciation and the role of geographical isolation became the focus of speciation research. Among those leading the charge was Ernst Mayr (1904–2005), a young evolutionary biologist, who would go on to influence generations of biologists with his later work in the field. 
William Baker, head of palm research at the Royal Botanic Gardens, Kew in Richmond, UK, described Mayr as “one of the key figures to crush sympatric speciation.” Frank Sulloway, a Darwin Scholar at the Institute of Personality and Social Research at the University of California, Berkeley, USA, similarly asserted that Mayr’s scepticism about sympatry was central to his career.

The debate about sympatric and allopatric speciation has livened up since Mayr’s death…

Since Mayr’s death in 2005, however, several publications have challenged the notion that sympatric speciation is a rare exception to the rule of allopatry. These papers describe examples of both plants and animals that have undergone speciation in the same location, with no apparent geographical barriers to explain their separation. In these instances, a single ancestral population has diverged to the extent that the two new species cannot produce viable offspring, despite the fact that their ranges overlap. The debate about sympatric and allopatric speciation has livened up since Mayr’s death, as Mayr’s influence over the field has waned and as new tools and technologies in molecular biology have become available.

Sulloway, who studied with Mayr at Harvard University in the late 1960s and early 1970s, notes that Mayr’s background in natural history and years of fieldwork in New Guinea and the Solomon Islands contributed to his perception that the bulk of the data supported allopatry. “Ernst’s early career was in many ways built around that argument. It wasn’t the only important idea he had, but he was one of the strong proponents of it. 
When an intellectual stance exists where most people seem to have gotten it wrong, there is a tendency to sort of lay down the law,” Sulloway said.

Sulloway also explained that Mayr “felt that botanists had basically led Darwin astray because there is so much evidence of polyploidy in plants and Darwin turned in large part to the study of botany and geographical distribution in drawing evidence in The Origin.” Indeed, polyploidization is common in plants and can lead to ‘instantaneous’ speciation without geographical barriers.

In February 2006, the journal Nature simultaneously published two papers that described sympatric speciation in animals and plants, reopening the debate. Axel Meyer, a zoologist and evolutionary biologist at the University of Konstanz, Germany, demonstrated with his colleagues that sympatric speciation has occurred in cichlid fish in Lake Apoyo, Nicaragua (Barluenga et al, 2006). The researchers claimed that the ancestral fish only seeded the crater lake once; from this, new species have evolved that are distinct and reproductively isolated. Meyer’s paper was broadly supported, even by critics of sympatric speciation, perhaps because Mayr himself endorsed sympatric speciation among the cichlids in his 2001 book What Evolution Is. “[Mayr] told me that in the case of our crater lake cichlids, the onus of showing that it’s not sympatric speciation lies with the people who strongly believe in only allopatric speciation,” Meyer said.

…several scientists involved in the debate think that molecular biology could help to eventually resolve the issue

The other paper in Nature—by Vincent Savolainen, a molecular systematist at Imperial College, London, UK, and colleagues—described the sympatric speciation of Howea palms on Lord Howe Island (Fig 1), a minute Pacific island paradise (Savolainen et al, 2006a). 
Savolainen’s research had originally focused on plant diversity in the gesneriad family—the best known example of which is the African violet—while he was in Brazil for the Geneva Botanical Garden, Switzerland. However, he realized that he would never be able to prove the occurrence of sympatry within a continent. “It might happen on a continent,” he explained, “but people will always argue that maybe they were separated and got together after. […] I had to go to an isolated piece of the world and that’s why I started to look at islands.”

Figure 1 | Lord Howe Island. Photo: Ian Hutton.

He eventually heard about Lord Howe Island, which is situated just off the east coast of Australia, has an area of 56 km² and is known for its abundance of endemic palms (Sidebar A). The palms, Savolainen said, were an ideal focus for sympatric research: “Palms are not the most diverse group of plants in the world, so we could make a phylogeny of all the related species of palms in the Indian Ocean, southeast Asia and so on.”

…the next challenges will be to determine which genes are responsible for speciation, and whether sympatric speciation is common

Sidebar A | Research in paradise

Alexander Papadopulos is no Tarzan of the Apes, but he has spent a couple of months over the past two years aloft in palm trees hugging rugged mountainsides on Lord Howe Island, a Pacific island paradise and UNESCO World Heritage site.

Papadopulos—who is finishing his doctorate at Imperial College London, UK—said the views are breathtaking, but the work is hard and a bit treacherous as he moves from branch to branch. “At times, it can be quite hairy. Often you’re looking over a 600-, 700-metre drop without a huge amount to hold onto,” he said. “There’s such dense vegetation on most of the steep parts of the island. You’re actually climbing between trees. There are times when you’re completely unsupported.”

Papadopulos typically spends around 10 hours a day in the field, carrying a backpack and utility belt with a digital camera, a trowel to collect soil samples, a first-aid kit, a field notebook, food and water, specimen bags, tags to label specimens, a GPS device and more. After several days in the field, he spends a day working in a well-equipped field lab and sleeping in the quarters that were built by the Lord Howe governing board to accommodate the scientists who visit the island on various projects. Papadopulos is studying Lord Howe’s flora, which includes more than 200 plant species, about half of which are indigenous.

Vincent Savolainen said it takes a lot of planning to get materials to Lord Howe: the two-hour flight from Sydney is on a small plane, with only about a dozen passengers on board and limited space for equipment. Extra gear—from gardening equipment to silica gel and wood for boxes in which to dry wet specimens—arrives via other flights or by boat, to serve the needs of the various scientists on the team, including botanists, evolutionary biologists and ecologists.

Savolainen praised the well-stocked researcher station for visiting scientists. It is run by the island board and situated near the palm nursery. 
It includes one room for the lab and another with bunks. “There is electricity and even email,” he said. Papadopulos said only in the past year has the internet service been adequate to accommodate video calls back home.

Ian Hutton, a Lord Howe-based naturalist and author, who has lived on the island since 1980, said the island authorities set limits on not only the number of residents—350—but also the number of visitors at one time—400—as well as banning cats, to protect birds such as the flightless wood hen. He praised the Imperial/Kew group: “They’re world leaders in their field. And they’re what I call ‘Gentlemen Botanists’. They’re very nice people, they engage the locals here. Sometimes researchers might come here, and they’re just interested in what they’re doing and they don’t want to share what they’re doing. Not so with these people.”

Savolainen said his research helps the locals: “The genetics that we do on the island are not only useful to understand big questions about evolution, but we also always provide feedback to help in its conservation efforts.”

Yet, in Savolainen’s opinion, Mayr’s influential views made it difficult to obtain research funding. “Mayr was a powerful figure and he dismissed sympatric speciation in textbooks. People were not too keen to put money on this,” Savolainen explained. Eventually, the Leverhulme Trust (London, UK) gave Savolainen and Baker £70,000 between 2003 and 2005 to get the research moving. “It was enough to do the basic genetics and to send a research assistant for six months to the island to do a lot of natural history work,” Savolainen said. 
Once the initial results had been processed, the project received a further £337,000 from the British Natural Environment Research Council in 2008, and €2.5 million from the European Research Council in 2009.

From the data collected on Lord Howe Island, Savolainen and his team constructed a dated phylogenetic tree showing that the two endemic species of the palm Howea (Arecaceae; Fig 2) are sister taxa. From their tree, the researchers were able to establish that the two species—one with a thatch of leaves and one with curly leaves—diverged long after the island was formed 6.9 million years ago. Even where they are found in close proximity, the two species cannot interbreed as they flower at different times.

Figure 2 | The two species of Howea palm. (A) Howea fosteriana (Kentia palm). (B) Howea belmoreana. Photos: William Baker, Royal Botanical Gardens, Kew, Richmond, UK.

According to the researchers, the palm speciation probably occurred owing to the different soil types in which the plants grow. Baker explained that there are two soil types on Lord Howe—the older volcanic soil and the younger calcareous soils. The Kentia palm grows in both, whereas the curly variety is restricted to the volcanic soil. These soil types are closely intercalated—fingers and lenses of calcareous soils intrude into the volcanic soils in lowland Lord Howe Island. “You can step over a geological boundary and the palms in the forest can change completely, but they remain extremely close to each other,” Baker said. 
“What’s more, the palms are wind-pollinated, producing vast amounts of pollen that blows all over the place during the flowering season—people even get pollen allergies there because there is so much of the stuff.” According to Savolainen, that the two species have different flowering times is a “way of having isolation so that they don’t reproduce with each other […] this is a mechanism that evolved to allow other species to diverge in situ on a few square kilometres.”

According to Baker, a causative link between the different soils and the altered flowering times has not been demonstrated, “but we have suggested that at the time of speciation, perhaps when calcareous soils first appeared, an environmental effect may have altered the flowering time of palms colonising the new soil, potentially causing non-random mating and kicking off speciation. This is just a hypothesis—we need to do a lot more fieldwork to get to the bottom of this,” he said. What is clear is that this is not allopatric speciation, as “the micro-scale differentiation in geology and soil type cannot create geographical isolation”, said Baker.

…although molecular data will add to the debate, it will not settle it alone

The results of the palm research caused something of a splash in evolutionary biology, although the study was not without its critics. Tod Stuessy, Chair of the Department of Systematic and Evolutionary Botany at the University of Vienna, Austria, has dealt with similar issues of divergence on Chile’s Juan Fernández Islands—also known as the Robinson Crusoe Islands—in the South Pacific. From his research, he points out that on old islands, large ecological areas that once separated species—and caused allopatric speciation—could have since disappeared, diluting the argument for sympatry. “There are a lot of cases [in the Juan Fernández Islands] where you have closely related species occurring in the same place on an island, even in the same valley. 
We never considered that they had sympatric origins because we were always impressed by how much the island had been modified through time,” Stuessy said. “What [the Lord Howe researchers] really didn’t consider was that Lord Howe Island could have changed a lot over time since the origins of the species in question.” It has also been argued that one of the palm species on Lord Howe Island might have evolved allopatrically on a now-sunken island in the same oceanic region.

In their response to a letter from Stuessy, Savolainen and colleagues argued that erosion on the island has been mainly coastal and equal from all sides. “Consequently, Quaternary calcarenite deposits, which created divergent ecological selection pressures conducive to Howea species divergence, have formed evenly around the island; these are so closely intercalated with volcanic rocks that allopatric speciation due to ecogeographic isolation, as Stuessy proposes, is unrealistic” (Savolainen et al, 2006b). Their rebuttal has found support in the field. Evolutionary biologist Loren Rieseberg at the University of British Columbia in Vancouver, Canada, said: “Basically, you have two sister species found on a very small island in the middle of the ocean. It’s hard to see how one could argue anything other than they evolved there. To me, it would be hard to come up with a better case.”

Whatever the reality, several scientists involved in the debate think that molecular biology could help to eventually resolve the issue. Savolainen said that the next challenges will be to determine which genes are responsible for speciation, and whether sympatric speciation is common. New sequencing techniques should enable the team to obtain a complete genomic sequence for the palms. 
Savolainen said that next-generation sequencing is “a total revolution.” By using sequencing, he explained that the team “want to basically dissect exactly what genes are involved and what has happened […] Is it very special on Lord Howe and for this palm, or is [sympatric speciation] a more general phenomenon? This is a big question now. I think now we’ve found places like Lord Howe and [have] tools like the next-gen sequencing, we can actually get the answer.”

Determining whether sympatric speciation occurs in animal species will prove equally challenging, according to Meyer. His own lab, among others, is already looking for ‘speciation genes’, but this remains a tricky challenge. “Genetic models […] argue that two traits (one for ecological specialisation and another for mate choice, based on those ecological differences) need to become tightly linked on one chromosome (so that they don’t get separated, often by segregation or crossing over). The problem is that the genetic basis for most ecologically relevant traits are not known, so it would be very hard to look for them,” Meyer explained. “But, that is about to change […] because of next-generation sequencing and genomics more generally.”

Many researchers who knew Mayr personally think he would have enjoyed the challenge to his views

Others are more cautious. “In some situations, such as on isolated oceanic islands, or in crater lakes, molecular phylogenetic information can provide strong evidence of sympatric speciation. It also is possible, in theory, to use molecular data to estimate the timing of gene flow, which could help settle the debate,” Rieseberg said. However, he cautioned that although molecular data will add to the debate, it will not settle it alone. “We will still need information from historical biogeography, natural history, phylogeny, and theory, etc. to move things forward.”

Many researchers who knew Mayr personally think he would have enjoyed the challenge to his views. 
“I can only imagine that it would’ve been great fun to engage directly with him [on sympatry on Lord Howe],” Baker said. “It’s a shame that he wasn’t alive to comment on [our paper].” In fact, Mayr was not really as opposed to sympatric speciation as some think. “If one is of the opinion that Mayr opposed all forms of sympatric speciation, well then this looks like a big swing back the other way,” Sulloway commented. “But if one reads Mayr carefully, one sees that he was actually interested in potential exceptions and, as best he could, chronicled which ones he thought were the best candidates.”

Mayr’s opinions aside, many biologists today have stronger feelings against sympatric speciation than he did himself in his later years, Meyer added. “I think that Ernst was more open to the idea of sympatric speciation later in his life. He got ‘softer’ on this during the last two of his ten decades of life that I knew him. I was close to him personally and I think that he was much less dogmatic than he is often made out to be […] So, I don’t think that he is spinning in his grave.” Mayr once told Sulloway that he liked to take strong stances, precisely so that other researchers would be motivated to try to prove him wrong. “If they eventually succeeded in doing so, Mayr felt that science was all the better for it.”

Alex Papadopulos and Ian Hutton doing fieldwork on a very precarious ridge on top of Mt. Gower. Photo: William Baker, Royal Botanical Gardens, Kew, Richmond, UK.

17.
Ramani D  Saviane C 《EMBO reports》2010,11(12):910-913
Commercial genetic testing challenges traditional medical practice and the doctor–patient relationship. Neurodegenerative diseases may serve as the practical and ethical testing ground for the application of genomics to medicine.

In the age of the Internet, a wealth of information lies at your fingertips—even your genetic ancestry and your fate in terms of health and sickness. A Google search for ‘genetic testing’ immediately comes up with a list of companies offering quick, direct-to-consumer genetic tests (DCGT) for relatively little money. “Claim your kit, spit into the tube and send it to the lab,” states the website of 23andMe—the company whose Personal Genome Service was named the ‘2008 Invention of the Year’ by Time magazine. Six to eight weeks after sending in a sample, customers can log on to the company website and learn about their genetic origins and ancestry if they opted for the ‘Fill in Your Family Tree’ option, or can explore their genetic profile under the “Health Edition” and what it says about personal disease risks and drug responses.

The availability of next-generation high-throughput DNA sequencers has enabled companies to sequence the genes of a large number of customers at a low cost and with few personnel

23andMe is one of several companies that offer predictive genetic tests covering a range of multifactorial and monogenic disorders (STOA, 2008). This is clearly a revolutionary approach to personalized medicine; it not only allows individuals to learn about genetic risk factors for a variety of diseases, but also does so outside the established medical system. Before the advent of DCGTs, genetic tests were only carried out at specialized medical institutions under controlled conditions, and only on referral from a physician. The decreasing price of DNA sequencing, new technologies for high-throughput sequencing and the growth of the Internet have all helped to reduce the technical, financial and access barriers to genetic testing. 
It is therefore not surprising that private enterprises moved into this fast-developing market.

The availability of next-generation high-throughput DNA sequencers enables companies to sequence the genes of a large number of customers at a low cost and with few personnel. They can therefore offer this service at attractive prices, in the range of a few hundred dollars. The Internet conversely enables accessibility: a few mouse clicks are enough, while completely bypassing the usual checks and balances of organized healthcare. This means that expert advice is often lacking for patients about results that predict inherited risks for diabetes, cancer, neurological disorders and drug response (STOA, 2008).

The simple, affordable and rapid service offered by these companies raises concerns about the clinical validity and utility of the tests, as well as the information and support that they offer to properly interpret the results. In June this year, the US Food and Drug Administration (FDA) contacted five companies that sell genetic tests directly to consumers and asked them to prove the validity of their products (Pollack, 2010). The FDA argues that genetic tests are diagnostic tools that must obtain regulatory approval before they can be marketed, but it did not order the companies to stop selling their tests. 23andMe—one of the five companies that were contacted—replied: “We are sensitive to the FDA’s concerns, but we believe that people have the right to know as much about their genes and their bodies as they choose” (Pollack, 2010). Last year, researchers from the J. Craig Venter Institute (San Diego, CA, USA) and the Scripps Translational Science Institute (La Jolla, CA, USA) reported inconsistencies between results obtained from two DCGT companies in an opinion article in Nature and made recommendations for improving predictions (Ng et al, 2009).

A balance must be struck between consumer choice, consumer benefit and consumer protection. 
On one side is the individual’s right to have access to information about his or her health condition and health risks, so as to be able to take preventive measures. On the other side, serious questions have emerged about the lack of proper counselling in a professional setting. Are customers able to correctly understand, interpret and manage the information gained from a genetic test? Are they prepared to deal with the health risk information such a test provides? Are the scientific community and society as a whole ready to change the focus in medicine from morphological and physiological factors to molecular and genetic information?

A balance must be struck between consumer choice, benefit and protection

These concerns become more complicated when companies offer genetic tests for neurodegenerative disorders for which there are no preventive measures or treatments, such as Alzheimer, Parkinson or Huntington diseases. Most of these diseases are severe, debilitating and can lead to stigmatization and possible discrimination for patients. Brain disorders that cause progressive mental decline affect not only the health of the individual, but also their identity, self-consciousness and role within the family and society. As Judit Sándor, Director of the Centre for Ethics and Law in Biomedicine at the Central European University (Budapest, Hungary) put it: “The stigmatization of hereditary diseases in society may lead to ethical and legal consequences that are difficult to grasp. The stigma associated with neurodegenerative diseases would be even harder to bear if the disease is proven to be hereditary by some form of genetic testing.”

In this context, 60 experts from a range of disciplines—scientists, clinicians, philosophers, sociologists, jurists, journalists and patients—from Europe, Canada and the USA met at the 2010 workshop ‘Brains In Dialogue On Genetic Testing’ in Trieste, Italy. 
The meeting was organized by the International School for Advanced Studies, as part of the European project ‘Brains in Dialogue’, which aims to foster dialogue among key stakeholders in neuroscience (www.neuromedia.eu). The use of predictive genetic testing for neurodegenerative diseases was the main focus of the meeting and represents an interesting model for discussing the risks and benefits of DCGTs.

Very few neurodegenerative disorders have a typical Mendelian inheritance. The most (in)famous is Huntington disease, which typically becomes noticeable in middle age. Symptoms include progressive choreiform movements, cognitive impairment, mood disorders and behavioural changes. Huntington disease is caused by an increase in the number of CAG repeats in the gene Huntingtin, which can be tested for easily and reliably (Myers, 2004) in order to confirm a diagnosis or predict the disease, in at-risk groups or prenatally. The results have psychological and ethical implications that affect individuals and their families. According to the STOA report on DCGTs, only one company offers a test for Huntington disease.

Most neurodegenerative disorders have a more complex set of genetic and environmental risk factors that make it difficult—if not impossible—to predict the risk of disease at a certain age. A small percentage of cases of Alzheimer and Parkinson diseases—usually early-onset—carry specific mutations with a Mendelian inheritance, but genetic factors are also involved in the most common late-onset forms of these diseases (Avramopoulos, 2009; Klein & Schlossmacher, 2006). Nicholas Wood of University College London, UK, commented that: “[t]here has been a revolution in our molecular genetic understanding of Parkinson’s disease. Twenty years ago Parkinson’s disease perhaps was considered the archetype of non-genetic disease. It is now clear that a growing list of genes is primarily responsible for Mendelian forms of Parkinson’s disease. 
It is also clear from recent studies that, due to reduced penetrance, some of these ‘Mendelian genes’ play a role in the so-called sporadic disease.” Nevertheless, a genetic test based on susceptibility genes would not enable a clear diagnosis—as in the case of Huntington disease—but only an estimate of the individual’s risk of developing the disease later in life, with varying reliability.

For Alzheimer disease, genetic testing is usually only recommended for individuals with a family history of early-onset or with immediate relatives who already have the disease. The most common form of late-onset Alzheimer disease has a complex inheritance pattern. The medical establishment does not therefore recommend genetic testing for it, although a polymorphism in the Apolipoprotein E (APOE) gene has been unequivocally associated with Alzheimer disease (Avramopoulos, 2009). The identification of such risk factors through epidemiological studies provides valuable information about the molecular basis of the disease, but the management of this information at the individual level seems difficult for clinicians and patients. Agnes Allansdottir of the University of Siena, Italy, explained these difficulties, stating that “research on decision-making processes demonstrates that we humans have severe problems dealing with probabilities.”

Sándor expressed concerns that these difficulties could lead to additional discrimination. “Most people know what to do if they have high blood pressure, for instance. However, information coming from a genetic test is much more complex—their reading and interpretation require special expertise,” she said. She pointed out that some groups might be unable to access that expertise, while others might be unable to understand the information. 
“As a consequence, they will suffer an additional form of discrimination that is the ‘discrimination in the accessibility' of sensitive and complex medical data, and that affects […] the right to privacy, as well.”

It is certainly possible that individuals who do not understand what probabilistic estimates of risk mean will be upset to find out they have a higher risk of developing a certain disorder, even though in absolute terms this risk is marginal. Avoiding this situation is what genetic counselling tries to achieve: to inform patients and help them to interpret the results of genetic tests. For the same reason, genetic testing for most common forms of late-onset Alzheimer or Parkinson disease—both of which are multifactorial—is not recommended, precisely because of the limited predictive value of these tests and the lack of proven preventive measures. However, various companies including deCODEme offer to identify your APOE variant and calculate “your risk of developing late-onset Alzheimer's Disease” as part of their service.

Research has demonstrated that genetic testing may be a useful coping strategy for some at-risk individuals (Gooding et al, 2006), a conclusion that was also reached by the Risk Evaluation and Education for Alzheimer's Disease (REVEAL) study (Green et al, 2009). Some results showed that knowledge of their APOE genotype and numerical lifetime risk influenced the health-related behaviour of asymptomatic adult children of Alzheimer disease patients. The discovery of increased risk of disease through an education-and-disclosure protocol was associated with a stronger motivation to engage in behaviours that reduce risk, such as changes in medications or vitamin intake, even if their effectiveness is still unclear (Chao et al, 2008).
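A toy calculation makes the relative-versus-absolute distinction concrete. The numbers below are invented for illustration (they are not actual APOE effect sizes), and `absolute_risk` is a hypothetical helper, not a real clinical risk model, which would condition on age, sex and family history:

```python
def absolute_risk(baseline: float, relative_risk: float) -> float:
    """Naive absolute risk from a baseline risk and a relative risk.
    Capped at 1.0; real risk models are far more elaborate."""
    return min(baseline * relative_risk, 1.0)

# Illustrative numbers only, not actual effect sizes:
baseline = 0.02   # 2% baseline lifetime risk of some disorder
rr = 1.5          # a test report saying "50% higher risk"

# "50% higher" sounds alarming, but in absolute terms:
print(f"{absolute_risk(baseline, rr):.1%}")  # 3.0%
```

A headline relative risk of 1.5 moves this hypothetical lifetime risk from 2% to 3%, which is exactly the kind of distinction that, as Allansdottir notes, people struggle to weigh.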
Genotype disclosure did not result in short-term psychological problems, despite the frightening nature of the disease and the lack of therapies for it (Green et al, 2009). These studies highlight the importance of education and counselling in understanding risk and evaluating the means of counteracting it.

Yet the ease with which DCGT companies offer tests over the Internet creates a new kind of autonomy for patients. “Genetic information serves often as a key to future decisions. Based on the information, they may rearrange the priorities in their life or change their lifestyle in order to fight against the manifestation of the disease, to decrease its symptoms or simply delay its progress,” Sándor said. “For many people, nothing else is worse than the lack of certainty and thus knowledge, in itself, can be a value.”

To know or not to know: that is the question—particularly for neurodegenerative diseases. In addition to the opinions of the experts at the meeting, the public round table, ‘Health and DNA: my life, my genes', showed that the choice whether to take a test should be a personal decision; certainly nobody should be forced in one direction or another. During the discussion, different opinions and experiences regarding the use of genetic testing were presented by members of the panel and the public. Verena Schmocker, a Swiss woman affected by Parkinson disease, explained why she refused to be tested, despite a strong family history of early-onset Parkinson disease. “I knew already that the disease was in my family, but I didn't want to take any genetic test. I chose to live my life day by day and live what is there for me to live.” Another woman in the audience explained that she wanted to know her destiny: “[w]hen 15 years ago I was diagnosed with Huntington's disease I woke up from a nightmare of doubts.
I started organizing my life, I got married and got prepared for the future.”

In many ways, Huntington disease is an unrepresentative example—not only because it is an untreatable, debilitating Mendelian disease, but also because patients typically receive mandatory and sophisticated patient counselling. Most importantly, as Marina Frontali from the National Research Council (Rome, Italy) highlighted, counselling should enable and respect autonomous decisions by the person at risk, even in light of third-party pressure to take the test, not just by employers or insurance companies, but also by family members. The counselling service for Huntington disease—through a tight collaboration between laypeople and professionals—is a valuable example of the management of genetic testing.

The Eurobarometer 2005 survey showed that EU citizens are generally supportive of the use of genetic data for diagnosis and research, and 64% of the respondents said that they would take a genetic test to detect potential diseases (EC, 2006). In reality, however, attitudes vary between countries: in most cases, people would be willing to take a test only in exceptional circumstances or only if it was highly regulated and controlled. Interestingly, those countries in which people expressed more concern and negative attitudes towards testing were those with higher levels of education and scientific literacy, where the mass media is more attentive to science and technology and where the public and political debate is more advanced.
It shows, again, that increasing scientific literacy is not enough to overcome people's fears and objections to genetic testing; the more they understand the issues, the less likely people are to be enthusiastic about new technologies.

These concerns notwithstanding, the number of tests that are available is growing, and genetic testing—whether as part of the healthcare system or through DCGT companies—is becoming a model for preventive medicine and discussions about the impact of genetics on public health (Brand et al, 2008). The advances brought about by genomics will lead to more targeted health promotion messages and disease prevention programmes specifically directed at susceptible individuals and families, or at subgroups of the population, based on their genomic risk profile.

The political discourse concerning science and health is often controversial, and the integration of genomics into public healthcare, research and policy might therefore be challenging. According to Brand et al (2008), the question is not whether the use of genomics in public health is dangerous, but whether excluding genomic information from public health interventions and withholding the potential of evidence-based prevention might do more harm. The next decade will provide a window of opportunity in which to prepare and educate clinicians, public health professionals, policy-makers and the public for the integration of genomics into healthcare. Brand et al (2008) argue that there is an ethical obligation to meet this challenge and make the best use of the opportunities provided by scientific progress.

This, inevitably, requires a legal and regulatory framework to ensure that the benefits are made widely available to the population and, in particular, to protect consumers—today, DCGT by private companies remains a largely unregulated market.
In 2008, the Committee of Ministers of the 47 Member States of the Council of Europe adopted the first international legally binding document concerning genetic testing for health purposes (Lwoff, 2009). The Additional Protocol to The Convention on Human Rights and Biomedicine about Genetic Testing for Health Purposes addresses some of the issues raised by genetic testing, from quality and clinical utility, to public information and genetic screening programmes for health purposes (Council of Europe, 2008). According to the Protocol, a health-screening programme that uses genetic tests can only be implemented if approved by the competent body, after independent evaluation of its ethical acceptability and fulfilment of specific conditions. These include the health relevance, scientific validity and effectiveness, availability of appropriate preventive or treatment measures, equitable access to the programme and availability of adequate measures to inform the population about the existence, purpose and accessibility of the screening programme, as well as the voluntary nature of participation in it.

Two particular issues were discussed during the development of the Protocol: direct access to tests by individuals; and information and genetic counselling (Lwoff, 2009). The Protocol includes some debated restrictions to DCGT (Borry, 2008), to guarantee the proper interpretation of predictive test results and appropriate counselling to understand their implications.
According to Article 7, with few exceptions “[a] genetic test for health purposes may only be performed under individualized medical supervision.” In order to assure quality of information and support for the patient, Article 8 states that “the person concerned shall be provided with prior appropriate information in particular on the purpose and the nature of the test, as well as the implications of its results.” Moreover, for tests for monogenic diseases, tests that aim to detect a genetic predisposition or genetic susceptibility to a disease, or tests to identify the subject as a healthy carrier of a gene responsible for a disease, appropriate genetic counselling should be available. It states that “the form and extent of this genetic counselling shall be defined according to the implications of the results of the test and their significance for the person or the members of his or her family, including possible implications concerning procreation choices.” According to this document, genetic counselling could thus go from being a “very heavy and long” procedure to a “lighter” one, but should be guaranteed in any case. The Protocol has already influenced legislation, but it will apply only in countries that have ratified it, which, so far, is only Slovenia.

Companies that offer DCGTs are harbingers of change for personalized medicine. Their increasing popularity—owing not least to the ease with which their services can be obtained over the Internet—shows that the public is willing to pay for this kind of personal information. Nevertheless, healthcare systems and regulators must ensure that developments in this area benefit patients. Experience from genetic testing for neurological diseases—given their particularly severe impact on patients and their families—highlights both the current lack of proper regulation and oversight, as well as the potential health benefits that can be reaped from genetic tests.
Donato Ramani, Chiara Saviane

18.
Antony M Dean 《EMBO reports》2010,11(6):409-409
Antony Dean explores the past, present and future of evolutionary theory and our continuing efforts to explain biological patterns in terms of molecular processes and mechanisms.

There are just two questions to be asked in evolution: how are things related, and what makes them differ? Lamarck was the first biologist—he invented the word—to address both. In his Philosophie Zoologique (1809) he suggested that the relationships among species are better described by branching trees than by a simple ladder, that new species arise gradually by descent with modification and that they adapt to changing environments through the inheritance of acquired characteristics. Much that Lamarck imagined has since been superseded. Following Wallace and Darwin, we now envision that species belong to a single highly branched tree and that natural selection is the mechanism of adaptation. Nonetheless, to Lamarck we owe the insight that pattern is produced by process and that both need mechanistic explanation.

Questions of pattern, process and mechanism pervade the modern discipline of molecular evolution. The field was established when Zuckerkandl & Pauling (1965) noted that haemoglobins evolve at a roughly constant rate. Their “molecular evolutionary clock” forever changed our view of evolutionary history. Not only were seemingly intractable relationships resolved—for example, whales are allies of the hippopotamus—but also the eubacterial origins of eukaryotic organelles were firmly established and a new domain of life was discovered: the Archaea.

Yet, different genes sometimes produce different trees. Golding & Gupta (1995) resolved two-dozen conflicting protein trees by suggesting that Eukarya arose following massive horizontal gene transfer between Bacteria and Archaea. Whole genome sequencing has since revealed so many conflicts that horizontal gene transfer seems characteristic of prokaryote evolution.
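At its simplest, the “molecular evolutionary clock” mentioned above is an exercise in arithmetic: correct the observed sequence difference for multiple substitutions at the same site, then divide by twice the per-lineage substitution rate. A minimal sketch, with invented numbers and under strict-clock and Jukes–Cantor assumptions:

```python
import math

def jukes_cantor(p_observed: float) -> float:
    """Correct an observed proportion of differing sites for multiple
    substitutions at the same site (Jukes-Cantor model)."""
    return -0.75 * math.log(1 - 4 * p_observed / 3)

def divergence_time(d: float, rate: float) -> float:
    """Time since the common ancestor under a strict clock.
    d: substitutions per site separating the two sequences;
    rate: substitutions/site/year along each lineage, hence the 2."""
    return d / (2 * rate)

p = 0.09        # 9% of aligned sites differ (toy data)
rate = 1e-9     # assumed clock rate, subs/site/year (illustrative)
d = jukes_cantor(p)
print(f"{divergence_time(d, rate) / 1e6:.1f} Myr")  # 47.9 Myr
```

The erratic behaviour of real clocks, noted below, is precisely why such point estimates carry wide uncertainties in practice.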
In higher animals—where horizontal transfer is sufficiently rare that the tree metaphor remains robust—rapid and inexpensive whole genome sequencing promises to provide a wealth of data for population studies. The patterns of migration, admixture and divergence of species will soon be addressed in unprecedented detail.

Sequence analyses are also used to infer processes. A constant molecular clock originally buttressed the neutral theory of molecular evolution (Kimura, 1985). The clock has since proven erratic, while the neutral theory now serves as a null hypothesis for statistical tests of ‘selection'. In truth, most tests are also sensitive to demographic changes. The promise of ultra-high throughput sequencing to provide genome-wide data should help dissect selection, which targets particular genes, from demography, which affects all the genes in a genome, although weak selection and ancient adaptations will remain undetected.

In the functional synthesis (Dean & Thornton, 2007), molecular biology provides the experimental means to test evolutionary inferences decisively. For example, site-directed mutagenesis can be used to introduce putatively selected mutations into reconstructed ancestral sequences; the gene products are then expressed and purified and their functional properties determined in vitro. In microbial species, homologous recombination is used routinely to replace wild-type with engineered genes, enabling organismal phenotypes and fitnesses to be determined in vivo. The vision of Zuckerkandl & Pauling (1965) that by “furnishing probable structures of ancestral proteins, chemical paleogenetics will in the future lead to deductions concerning molecular functions as they were presumably carried out in the distant evolutionary past” is now a reality.

If experimental tests of evolutionary inferences open windows on past mechanisms, directed evolution focuses on the mechanisms without attempting historical reconstruction.
Today's ‘fast-forward' molecular breeding experiments use mutagenic PCR to generate vast libraries of variation and high throughput screens to identify rare novel mutants (Romero & Arnold, 2009; Khersonsky & Tawfik, 2010). Among numerous topics explored are: the role of intragenic recombination in furthering adaptation, the number and location of mutations in protein structures, the necessity—or lack thereof—of broadening substrate specificity before a new function is acquired, the evolution of robustness, and the alleged trade-off between stability and catalytic efficiency. Few, however, have approached the detail found in those classic studies of evolved β-galactosidase (Hall, 2003) that revealed how the free-energy profile of an enzyme-catalysed reaction evolved. Even further removed from natural systems are catalytic RNAs that, by combining phenotype and genotype within the same molecule, allow evolution to proceed in a lifeless series of chemical reactions. Recently, two RNA enzymes that catalyse each other's synthesis were shown to undergo self-sustained exponential amplification (Lincoln & Joyce, 2009). Competition for limiting tetranucleotide resources favours mutants with higher relative fitness—faster replication—demonstrating that adaptive evolution can occur in a chemically defined abiotic genetic system.

Lamarck was the first to attempt a coherent explanation of biological patterns in terms of processes and mechanisms. That his legacy can still be discerned in the vibrant field of molecular evolution would no doubt please him as much as it does us in promising extraordinary advances in our understanding of the mechanistic basis of molecular adaptation.
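The competition in the Lincoln & Joyce experiment (replicators with different replication rates under repeated dilution into fresh reagents) can be caricatured in a few lines. All growth rates and dilution factors below are invented for illustration; this is a cartoon of selection on replication speed, not a model of the actual RNA chemistry:

```python
# Toy model of competing exponential replicators under serial transfer,
# loosely inspired by cross-replicating RNA enzyme experiments.

def serial_transfer(pops, rates, hours_per_round, dilution, rounds):
    """Return final frequencies after repeated grow-and-dilute cycles.
    rates are doublings per hour; dilution is the carried-over fraction."""
    for _ in range(rounds):
        # exponential growth within one round
        pops = [n * 2 ** (r * hours_per_round) for n, r in zip(pops, rates)]
        # transfer a small fraction into a fresh reaction mix
        pops = [n * dilution for n in pops]
    total = sum(pops)
    return [n / total for n in pops]

freqs = serial_transfer(
    pops=[1.0, 1.0],        # start at equal frequency
    rates=[0.8, 1.0],       # the second replicator copies itself faster
    hours_per_round=5, dilution=0.05, rounds=6,
)
print([f"{f:.3f}" for f in freqs])  # the faster replicator dominates
```

Dilution removes both variants indiscriminately; the faster replicator nonetheless takes over because its frequency advantage compounds each round, which is all "higher relative fitness" means here.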

19.
The authors of “The anglerfish deception” respond to the criticism of their article.

EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.70
EMBO reports (2012) 13(2), 100–105; doi: 10.1038/embor.2011.254

Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA's environmental risk assessment (ERA) procedures. In our article for EMBO reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal's false political inflation of science, which denies the normative commitments inevitable in risk assessment (RA). Unfortunately the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA's central control over RA, they entirely miss the relevant point. This is the unacknowledged policy—normative commitments being made before, and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified.

The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading—as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. They seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters, of RA policy, which should be established in a broadly deliberative manner “in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent” [1].
This was neither recognized nor permitted in the proposed EC reform—a central point that our respondents fail to recognize.

In dismissing our criticism that comparative safety assessment appears as a ‘first step' in defining ERA, according to the new EFSA ERA guidelines, which we correctly referred to in our text but incorrectly referenced in the bibliography [5], our respondents again ignore this widely accepted ‘framing' or ‘problem formulation' point for science. The choice of comparator has normative implications as it immediately commits to a definition of what is normal and, implicitly, acceptable. Therefore the specific form and purpose of the comparison(s) is part of the validity question. Their claim that we are against comparison as a scientific step is incorrect—of course comparison is necessary. This simply acts as a shield behind which to avoid our and others' [6] challenge to their self-appointed discretion to define—or worse, allow applicants to define—what counts in the comparative frame. Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as ‘science'. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5]—in both the representational figure (p.11) and the text “the outcome of the comparative safety assessment allows the determination of those ‘identified' characteristics that need to be assessed [...] and will further structure the ERA” (p.13). Second, despite their claims to the contrary, ‘comparative safety assessment', effectively a resurrection of substantial equivalence, is a concept taken from consumer health RA, controversially applied to the more open-ended processes of ERA, and one that has in fact been long-discredited if used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10].
The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similar unaccountable RA steps introduced into the ERA Guidance, such as judgement of ‘biological relevance', ‘ecological relevance', or ‘familiarity'. We cannot address these here, but our basic point is that such endless ‘methodological' elaborations of the kind that our EFSA colleagues perform, only obscure the institutional changes needed to properly address the normative questions for policy-engaged science.

Our respondents deny our claim concerning the singular form of science the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the ‘one door, one key' policy framework for science, deriving from the Single Market logic, which forces such singularity. While this might be legitimate policy, it is not scientific. It is political economy.

Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point that the EC-proposed legislative reform would only exacerbate their problem. Ignoring the normative dimensions of regulatory science and siphoning-off scientific debate and its normative issues to a select expert panel—which despite claiming independence faces an EU Ombudsman challenge [12] and European Parliament refusal to discharge their 2010 budget, because of continuing questions over conflicts of interests [13,14]—will not achieve quality science.
What is required are effective institutional mechanisms and cultural norms that identify, and deliberatively address, otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA's sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.

20.
Rinaldi A 《EMBO reports》2012,13(1):24-27
Does the spin of an electron allow birds to see the Earth's magnetic field? Andrea Rinaldi investigates the influence of quantum events in the biological world.

The subatomic world is nothing like the world that biologists study. Physicists have struggled for almost a century to understand the wave–particle duality of matter and energy, but many questions remain unanswered. That biological systems ultimately obey the rules of quantum mechanics might be self-evident, but the idea that those rules are the very basis of certain biological functions has needed 80 years of thought, research and development for evidence to begin to emerge (Sidebar A).

Sidebar A | Putting things in their place

Although Erwin Schrödinger (1887–1961) is often credited as the ‘father' of quantum biology, owing to the publication of his famous 1944 book, What is Life?, the full picture is more complex. While other researchers were already moving towards these concepts in the 1920s, the German theoretical physicist Pascual Jordan (1902–1980) was actually one of the first to attempt to reconcile biological phenomena with the quantum revolution that Jordan himself, working with Max Born and Werner Heisenberg, largely ignited. “Pascual Jordan was one of many scientists at the time who were exploring biophysics in innovative ways. In some cases, his ideas have proven to be speculative or even fantastical. In others, however, his ideas have proven to be really ahead of their time,” explained Richard Beyler, a science historian at Portland State University, USA, who analysed Jordan's contribution to the rise of quantum biology (Beyler, 1996). “I think this applies to Jordan's work in quantum biology as well.”

Beyler also remarked that some of the well-known figures of molecular biology's past—Max Delbrück is a notable example—entered into their studies at least in part as a response or rejoinder to Jordan's work. “Schrödinger's book can also be read, on some level, as an indirect response to Jordan,” Beyler said.

Jordan was certainly a complex personality and his case is rendered more complicated by the fact that he explicitly hitched his already speculative scientific theories to various right-wing political philosophies. “During the Nazi regime, for example, he promoted the notion that quantum biology served as evidence for the naturalness of dictatorship and the prospective death of liberal democracy,” Beyler commented. “After 1945, Jordan became a staunch Cold Warrior and saw in quantum biology a challenge to philosophical and political materialism.
Needless to say, not all of his scientific colleagues appreciated these propagandistic endeavors.”

Pascual Jordan [pictured above] and the dawn of quantum biology. From 1932, Jordan started to outline the new field's background in a series of essays that were published in journals such as Naturwissenschaften. An exposition of quantum biology is also encountered in his book Die Physik und das Geheimnis des organischen Lebens, published in 1941. Photo courtesy of Luca Turin.

Until very recently, it was not even possible to investigate whether quantum phenomena such as coherence and entanglement could play a significant role in the function of living organisms. As such, researchers were largely limited to computer simulations and theoretical experiments to explain their observations (see A quantum leap in biology, www.emboreports.org). Recently, however, quantum biologists have been making inroads into developing methodology to measure the degree of quantum entanglement in light-harvesting systems. Their breakthrough has turned once ephemeral theories into solid evidence, and has sparked the beginning of an entirely new discipline.

How widespread is the direct relevance of quantum effects in nature is hard to say and many scientists suspect that there are only a few cases in which quantum mechanics have a crucial role. However, interest in the field is growing and researchers are looking for more examples of quantum-dependent biological systems. In a way, quantum biology can be viewed as a natural evolution of biophysics, moving from the classical to the quantum, from the atomic to the subatomic.
Yet the discipline might prove to be an even more intimate and further-reaching marriage that could provide a deeper understanding of things such as protein energetics and dynamics, and all biological processes where electrons flow.

Among the biological systems in which quantum effects are believed to have a crucial role is magnetoreception, although the nature of the receptors and the underlying biophysical mechanisms remain unknown. The possibility that organisms use a ferromagnetic material (magnetite) in some cases has received some confirmation, but support is growing for the explanation lying in a chemical detection mechanism with quantum mechanical properties. This explanation posits a chemical compass based on the light-triggered production of a radical pair—a pair of molecules each with an unpaired electron—the spins of which are entangled. If the products of the radical pair system are spin-dependent, then a magnetic field—like the geomagnetic one—that affects the direction of spin will alter the reaction products. The idea is that these reaction products affect the sensitivity of light sensors in the eye, thus allowing organisms to ‘see' magnetic fields.

The research comes from a team led by Thorsten Ritz at the University of California Irvine, USA, and other groups, who have suggested that the radical pair reaction takes place in the molecule cryptochrome. Cryptochromes are flavoprotein photoreceptors first identified in the model plant Arabidopsis thaliana, in which they play key roles in growth and development. More recently, cryptochromes have been found to have a role in the circadian clock of fruit flies (Ritz et al, 2010) and are known to be present in migratory birds.
Intriguingly, magnetic fields have been shown to have an effect on both Arabidopsis seedlings, which respond as though they have been exposed to higher levels of blue light, and Drosophila, in which the period length of the clock is lengthened, mimicking the effect of increased blue light signal intensity on cryptochromes (Ahmad et al, 2007; Yoshii et al, 2009).

Direct evidence that cryptochrome is the avian magnetic compass is currently lacking, but the molecule does have some features that make its candidacy possible. In a recent review (Ritz et al, 2010), Ritz and colleagues discussed the mechanism by which cryptochrome might form radical pairs. They argued that “Cryptochromes are bound to a light-absorbing flavin cofactor (FAD) which can exist in three interconvertable [sic] redox forms: (FAD, FADH•, FADH−),” and that the redox state of FAD is light-dependent. As such, both the oxidation and reduction of the flavin have radical species as intermediates. “Therefore both forward and reverse reactions may involve the formation of radical pairs” (Ritz et al, 2010). Although speculative, the idea is that a magnetic field could alter the spin of the free electrons in the radical pairs resulting in altered photoreceptor responses that could be perceived by the organism. “Given the relatively short time from the first suggestion of cryptochrome as a magnetoreceptor in 2000, the amount of studies from different fields supporting the photo-magnetoreceptor and cryptochrome hypotheses […] is promising,” the authors concluded.
“It suggests that we may be only one step away from a true smoking gun revealing the long-sought after molecular nature of receptors underlying the 6th sense and thus the solution of a great outstanding riddle of sensory biology.”

Research into quantum effects in biology took off in 2007 with groundbreaking experiments from Graham Fleming's group at the University of California, Berkeley, USA. Fleming's team were able to develop tools that allowed them to excite the photosynthetic apparatus of the green sulphur bacterium Chlorobium tepidum with short laser pulses to demonstrate that wave-like energy transfer takes place through quantum coherence (Engel et al, 2007). Shortly after, Martin Plenio's group at Ulm University in Germany and Alán Aspuru-Guzik's team at Harvard University in the USA simultaneously provided evidence that it is a subtle interplay between quantum coherence and environmental noise that optimizes the performance of biological systems such as the photosynthetic machinery, adding further interest to the field (Plenio & Huelga, 2008; Rebentrost et al, 2009). “The recent Quantum Effects in Biological Systems (QuEBS) 2011 meeting in Ulm saw an increasing number of biological systems added to the group of biological processes in which quantum effects are suspected to play a crucial role,” commented Plenio, one of the workshop organizers; he mentioned the examples of avian magnetoreception and the role of phonon-assisted tunnelling to explain the function of the sense of smell (see below). “The study of quantum effects in biological systems is a rapidly broadening field of research in which intriguing phenomena are yet to be uncovered and understood,” he concluded.

“The area of quantum effects in biology is very exciting because it is pushing the limits of quantum physics to a new scale,” Yasser Omar from the Technical University of Lisbon, Portugal, commented.
"[W]e are finding that quantum coherence plays a significant role in the function of systems that we previously thought would be too large, too hot—working at physiological temperatures—and too complex to depend on quantum effects."

Another growing focus of quantum biologists is the sense of smell and odorant recognition. Mainstream researchers have always favoured a 'lock-and-key' mechanism to explain how organisms detect and distinguish different smells. In this view, the identification of odorant molecules relies on their specific shape to activate receptors on the surface of sensory neurons in the nasal epithelium. However, a small group of 'heretics' think that the smell of a molecule is actually determined by its intramolecular vibrations, rather than by its shape. This, they say, accounts for what the shape theory has so far failed to explain: why different molecules can have similar odours, while similar molecules can have dissimilar odours. It also goes some way towards explaining how humans manage with fewer than 400 smell receptors.

…determining whether quantum effects have a role in odorant recognition has involved assessing the physical violations of such a mechanism […] and finding that, given certain biological parameters, there are none

A recent study in Proceedings of the National Academy of Sciences USA has now handed new grist to the 'vibrationists'. Researchers from the Biomedical Sciences Research Center "Alexander Fleming", Vari, Greece—where the experiments were performed—and the Massachusetts Institute of Technology (MIT), USA, collaborated to replace hydrogen with deuterium in odorants such as acetophenone and 1-octanol, and asked whether Drosophila flies could distinguish the two isotopes, which are identically shaped but vibrate differently (Franco et al, 2011).
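Why should isotopes be identically shaped yet vibrate differently? In the harmonic-oscillator approximation, a bond's stretch frequency scales as the inverse square root of its reduced mass, while the electronic potential that defines molecular shape is unchanged by isotopic substitution. A quick sketch of the size of the effect (the ~2900 cm⁻¹ C–H stretch used here is a standard textbook value, not a figure from the study):

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a two-body oscillator (same units as the inputs)."""
    return m1 * m2 / (m1 + m2)

# Harmonic oscillator: stretch frequency ~ 1 / sqrt(reduced mass).
# Deuteration changes the nuclear mass but not the electronic potential,
# so the molecule's shape is untouched while its vibrations shift.
mu_CH = reduced_mass(12.0, 1.0)   # carbon-hydrogen, atomic mass units
mu_CD = reduced_mass(12.0, 2.0)   # carbon-deuterium

ratio = math.sqrt(mu_CH / mu_CD)  # C-D frequency relative to C-H

cm_CH = 2900.0                    # typical C-H stretch in wavenumbers (cm^-1)
print(f"C-D/C-H frequency ratio: {ratio:.3f}")           # about 0.73
print(f"C-D stretch shifts to roughly {cm_CH * ratio:.0f} cm^-1")
```

A vibration-sensing receptor would therefore see the same molecular shape but a stretch band displaced by several hundred wavenumbers, which is the difference the deuterated odorants present to the flies.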
Not only were the flies able to discriminate between the isotopic odorants, but when trained to discriminate against the normal or the deuterated isotope of a compound, they could also selectively avoid the corresponding isotope of a different odorant. The findings are inconsistent with a shape-only model of smell, the authors concluded, and suggest that flies can 'smell molecular vibrations'.

"The ability to detect heavy isotopes in a molecule by smell is a good test of shape and vibration theories: shape says it should be impossible, vibration says it should be doable," explained Luca Turin from MIT, one of the study's authors. Turin is a major proponent of the vibration theory and suggests that the transduction of molecular vibrations into receptor activation could be mediated by inelastic electron tunnelling (Fig 1; see also The scent of life, www.emboreports.org). "The results so far had been inconclusive and complicated by possible contamination of the test odorants with impurities," Turin said. "Our work deals with impurities in a novel way, by asking flies whether the presence of deuterium isotope confers a common smell character to odorants, much in the way that the presence of -SH in a molecule makes it smell 'sulphuraceous', regardless of impurities. The flies' answer seems to be 'yes'."

Figure 1 Diagram of a vibration-sensing receptor using an inelastic electron tunnelling mechanism. An odorant—here benzaldehyde—is depicted bound to a protein receptor that includes an electron donor site at the top left, to which an electron—blue sphere—is bound. The electron can tunnel to an acceptor site at the bottom right while losing energy (vertical arrow) by exciting one or more vibrational modes of the benzaldehyde. When the electron reaches the acceptor, the signal is transduced via a G-protein mechanism and the olfactory stimulus is triggered.
Credit: Luca Turin.

One of the study's Greek co-authors, Efthimios Skoulakis, suggested that flies are better suited than humans to this experiment, for a couple of reasons. "[The flies] seem to have better acuity than humans and they cannot anticipate the task they will be required to complete (as humans would), thus reducing bias in the outcome," he said. "Drosophila does not need to detect deuterium per se to survive and be reproductively successful, so it is likely that detection of the vibrational difference between such a compound and its normal counterpart reflects a general property of olfactory systems."

The question of whether quantum mechanics really plays a non-trivial role in biology is still hotly debated by physicists and biologists alike

Jennifer Brookes, a physicist at University College London, UK, explained that recent advances in determining whether quantum effects have a role in odorant recognition have involved, in the first instance, assessing the physical violations of such a mechanism, and finding that, given certain biological parameters, there are none. "The point being that if nature uses something like the quantized vibrations of molecules to 'measure' a smell then the idea is not—mathematically, physically and biologically—as eccentric as it at first seems," she said. Moreover, quantum mechanics could play a much broader role in biology than simply underpinning the sense of smell. "Odorants are not the only small molecules that interact unpredictably with large proteins; steroid hormones, anaesthetics and neurotransmitters, to name a few, are examples of ligands that interact specifically with special receptors to produce important biological processes," Brookes wrote in a recent essay (Brookes, 2010).

The question of whether quantum mechanics really plays a non-trivial role in biology is still hotly debated by physicists and biologists alike.
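One reason the debate stays concrete is that the claimed effects can be captured in very small models. The subtle interplay between quantum coherence and environmental noise invoked for photosynthetic energy transfer (Plenio & Huelga, 2008; Rebentrost et al, 2009), for instance, already shows up in a two-site toy system: a donor, an energy-mismatched acceptor that drains into an irreversible sink, and pure dephasing as the noise. The sketch below is an illustration only; the model and every parameter value in it are arbitrary choices for this example, not taken from the cited papers.

```python
# Toy model of noise-assisted quantum transport: two sites with an energy
# mismatch ("delta"), coherent hopping ("J"), an irreversible sink draining
# the acceptor ("kappa"), and pure dephasing noise ("gamma").
# All parameter values are arbitrary illustration choices.
import math

def sink_yield(gamma, J=1.0, delta=10.0, kappa=1.0, T=10.0, dt=0.001):
    """Population delivered to the sink by time T, for dephasing rate gamma."""
    # 2x2 density matrix: a = donor population, c = acceptor population,
    # b = the (complex) coherence between the two sites.
    a, b, c = 1.0, 0j, 0.0
    damp = math.exp(-gamma * dt)  # exact per-step decay of the coherence
    for _ in range(int(T / dt)):
        # Euler step for the coherent part, H = [[0, J], [J, delta]],
        # plus the sink acting on the acceptor population and the coherence.
        na = a + dt * (-2.0 * J * b.imag)
        nb = b + dt * (-1j * J * (c - a) + 1j * delta * b - 0.5 * kappa * b)
        nc = c + dt * (2.0 * J * b.imag - kappa * c)
        a, b, c = na, nb * damp, nc
    return 1.0 - (a + c)  # whatever has left the two sites went into the sink

for gamma in (0.0, 5.0, 500.0):
    print(f"dephasing rate {gamma:6.1f}: sink yield {sink_yield(gamma):.2f}")
```

With no noise, the energy mismatch keeps the excitation stuck oscillating on the donor; with extreme noise, a quantum-Zeno-like freezing suppresses transfer again; an intermediate dephasing rate delivers the most population to the sink. That non-monotonic dependence is the signature of noise-assisted transport.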
"[A] non-trivial quantum effect in biology is one that would convince a biologist that they needed to take an advanced quantum mechanics course and learn about Hilbert space and operators etc., so that they could understand the effect," argued theoretical quantum physicists Howard Wiseman and Jens Eisert in their contribution to the book Quantum Aspects of Life (Wiseman & Eisert, 2008). In their rational challenge to the general enthusiasm for a quantum revolution in biology, Wiseman and Eisert pointed out that a number of "exotic" and "implausible" quantum effects—including a quantum life principle, quantum computing in the brain, quantum computing in genetics and quantum consciousness—have been suggested, and warned researchers to be wary of "ideas that are more appealing at first sight than they are realistic" (Wiseman & Eisert, 2008).

"One could easily expect many more new exciting ideas and discoveries to emerge from the intersection of two major areas such as quantum physics and biology"

Keeping this warning in mind, the view of life from a quantum perspective can still provide deeper insight into the mechanisms that allow living organisms to thrive without succumbing to the increasing entropy of their environment. But does quantum biology have practical applications? "The investigation of the role of quantum physics in biology is fascinating because it could help explain why evolution has favoured some biological designs, as well as inspire us to develop more efficient artificial devices," Omar said. The most often quoted examples of such devices are solar collectors that would use efficient energy-transport mechanisms inspired by the quantum proficiency of natural light-harvesting systems, and quantum computers. But there is much more ahead. In 2010, the Pentagon's cutting-edge research branch, DARPA (Defense Advanced Research Projects Agency, USA), launched a solicitation for innovative proposals in the area of quantum effects in a biological environment.
"Proposed research should establish beyond any doubt that manifestly quantum effects occur in biology, and demonstrate through simulation proof-of-concept experiments that devices that exploit these effects could be developed into biomimetic sensors," states the synopsis (DARPA, 2010). The programme will thus look explicitly at photosynthesis, magnetic-field sensing and odour detection to lay the foundations for novel sensor technologies for military applications.

Clearly, a number of civil needs could also be met by quantum-based biosensors. Take, for example, the much sought-after 'electronic nose' that could replace the use of dogs to find drugs or explosives, or could assess food quality and safety. Such a device could even be used to detect cancer, as suggested by a recent publication from a Swedish team of researchers who reported that ovarian carcinomas emit a different array of volatile signals to normal tissue (Horvath et al, 2010). "Our goal is to be able to screen blood samples from apparently healthy women and so detect ovarian cancer at an early stage when it can still be cured," said the study's lead author György Horvath in a press release (University of Gothenburg, 2010).

Despite its already long incubation time, quantum biology is still in its infancy, but with an intriguing adolescence ahead. "A new wave of scientists are finding that quantum physics has the appropriate language and methods to solve many problems in biology, observing phenomena from a different point of view and developing new concepts. The next important steps are experimental verification/falsification," Brookes said. "One could easily expect many more new exciting ideas and discoveries to emerge from the intersection of two major areas such as quantum physics and biology," Omar concluded.
