Similar documents
20 similar documents found (search time: 62 ms)
1.
2.
3.
4.
The authors of “The anglerfish deception” respond to the criticism of their article.

EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.70
EMBO reports (2012) 13(2): 100–105; doi: 10.1038/embor.2011.254

Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA's environmental risk assessment (ERA) procedures. In our article for EMBO reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal's false political inflation of science, which denies the normative commitments inevitable in risk assessment (RA). Unfortunately, the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA's central control over RA, they entirely miss the relevant point. This is the unacknowledged policy—normative commitments being made before and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified.

The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading—as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. They seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters of RA policy, which should be established in a broadly deliberative manner “in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent” [1].
This was neither recognized nor permitted in the proposed EC reform—a central point that our respondents fail to recognize.

In dismissing our criticism that comparative safety assessment appears as a ‘first step' in defining ERA according to the new EFSA ERA guidelines—which we correctly referred to in our text but incorrectly referenced in the bibliography [5]—our respondents again ignore this widely accepted ‘framing' or ‘problem formulation' point for science. The choice of comparator has normative implications, as it immediately commits to a definition of what is normal and, implicitly, acceptable. Therefore, the specific form and purpose of the comparison(s) is part of the validity question. Their claim that we are against comparison as a scientific step is incorrect—of course comparison is necessary. This simply acts as a shield behind which to avoid our and others' [6] challenge to their self-appointed discretion to define—or worse, allow applicants to define—what counts in the comparative frame. Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as ‘science'. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5]—in both the representational figure (p. 11) and the text: “the outcome of the comparative safety assessment allows the determination of those ‘identified' characteristics that need to be assessed [...] and will further structure the ERA” (p. 13). Second, despite their claims to the contrary, ‘comparative safety assessment', effectively a resurrection of substantial equivalence, is a concept taken from consumer health RA, controversially applied to the more open-ended processes of ERA, and one that has in fact been long discredited if used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10].
The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similar unaccountable RA steps introduced into the ERA Guidance, such as judgement of ‘biological relevance', ‘ecological relevance' or ‘familiarity'. We cannot address these here, but our basic point is that such endless ‘methodological' elaborations of the kind that our EFSA colleagues perform only obscure the institutional changes needed to properly address the normative questions for policy-engaged science.

Our respondents deny our claim concerning the singular form of science the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by the EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the ‘one door, one key' policy framework for science, deriving from the Single Market logic, that forces such singularity. While this might be legitimate policy, it is not scientific. It is political economy.

Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point: that the EC-proposed legislative reform would only exacerbate the problem. Ignoring the normative dimensions of regulatory science, and siphoning off scientific debate and its normative issues to a select expert panel—which, despite claiming independence, faces an EU Ombudsman challenge [12] and a European Parliament refusal to discharge its 2010 budget because of continuing questions over conflicts of interest [13,14]—will not achieve quality science.
What is needed are effective institutional mechanisms and cultural norms that identify, and deliberatively address, otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA's sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.

5.
Zhang JY (2011) EMBO reports 12(4): 302–306
How can grass-roots movements evolve into a national research strategy? The bottom-up emergence of synthetic biology in China could give some pointers.

Given its potential to aid developments in renewable energy, biosensors, sustainable chemical industries, microbial drug factories and biomedical devices, synthetic biology has enormous implications for economic development. Many countries are therefore implementing strategies to promote progress in this field. Most notably, the USA is considered to be the leader in exploring the industrial potential of synthetic biology (Rodemeyer, 2009). Synthetic biology in Europe has benefited from several cross-border studies, such as the ‘New and Emerging Science and Technology' programme (NEST, 2005) and the ‘Towards a European Strategy for Synthetic Biology' project (TESSY; Gaisser et al, 2008). Yet, little is known in the West about Asia's role in this ‘new industrial revolution' (Kitney, 2009). In particular, China is investing heavily in scientific research for future developments, and is therefore likely to have an important role in the development of synthetic biology.

In 2010, as part of a study of the international governance of synthetic biology, the author visited four leading research teams in three Chinese cities (Beijing, Tianjin and Hefei). The main aims of the visits were to understand perspectives in China on synthetic biology, to identify core themes among its scientific community, and to address questions such as ‘how did synthetic biology emerge in China?', ‘what are the current funding conditions?', ‘how is synthetic biology generally perceived?' and ‘how is it regulated?'.
Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework; one that is more dynamic and comprises more options than existing national or international research and development (R&D) strategies. Such findings might contribute to Western knowledge of Chinese R&D, but could also expose European and US policy-makers to alternative forms and patterns of research governance that have emerged from a grass-roots level.

A dominant narrative among the scientists interviewed is the prospect of a ‘big-question' strategy to promote synthetic-biology research in China. This framework is at a consultation stage and key questions are still being discussed. Yet, fieldwork indicates that the process of developing a framework is at least as important to research governance as the big question it might eventually address. According to several interviewees, this approach aims to organize dispersed national R&D resources into one grand project that is essential to the technical development of the field, preferably focusing on an industry-related theme that is economically appealing to the Chinese public.

Chinese scientists have a pragmatic vision for research; thinking of science in terms of its ‘instrumentality' has long been regarded as characteristic of modern China (Schneider, 2003). However, for a country in which the scientific community is sometimes described as an “uncoordinated ‘bunch of loose ends'” (Cyranoski, 2001) “with limited synergies between them” (OECD, 2007), the envisaged big-question approach implies profound structural and organizational changes. Structurally, the approach proposes that the foundational (industry-related) research questions branch out into various streams of supporting research and more specific short-term research topics.
Within such a framework, a variety of Chinese universities and research institutions can be recruited and coordinated at different levels towards solving the big question.

It is important to note that although this big-question strategy is at a consultation stage and supervised by the Ministry of Science and Technology (MOST), the idea itself has emerged in a bottom-up manner. One academic who is involved in the ongoing ministerial consultation recounted that, “It [the big-question approach] was initially conversations among we scientists over the past couple of years. We saw this as an alternative way to keep up with international development and possibly lead to some scientific breakthrough. But we are happy to see that the Ministry is excited and wants to support such an idea as well.” As many technicalities remain to be addressed, there is no clear time-frame yet for when the project will be launched. Yet, this nationwide cooperation among scientists, with an emerging commitment from MOST, seems to be largely welcomed by researchers. Some interviewees described the excitement it generated among the Chinese scientific community as comparable with the establishment of “a new ‘moon-landing' project”.

Of greater significance than the time-frame is the development process that led to this proposition. On the one hand, the emergence of synthetic biology in China has a cosmopolitan feel: cross-border initiatives such as international student competitions, transnational funding opportunities and social debates in Western countries—for instance, about biosafety—all have an important role. On the other hand, the development of synthetic biology in China has some national particularities. Factors including geographical proximity, language, collegial familiarity and shared interests in economic development have all attracted Chinese scientists to the national strategy, to keep up with their international peers.
Thus, to some extent, the development of synthetic biology in China is an advance not only in the material synthesis of the ‘cosmos'—the physical world—but also in the social synthesis of aligning national R&D resources and actors with the global scientific community.

To comprehend how Chinese scientists have used national particularities and global research trends as mutually constructive influences, and to identify the implications of this for governance, this essay examines the emergence of synthetic biology in China from three perspectives: its initial activities, the evolution of funding opportunities, and the ongoing debates about research governance.

China's involvement in synthetic biology was largely promoted by the participation of students in the International Genetically Engineered Machine (iGEM) competition, an international contest for undergraduates initiated by the Massachusetts Institute of Technology (MIT) in the USA. Before the iGEM training workshop that was hosted by Tianjin University in the spring of 2007, there were no research records and only two literature reviews on synthetic biology in Chinese scientific databases (Zhao & Wang, 2007). According to Chunting Zhang of Tianjin University—a leading figure in the promotion of synthetic biology in China—it was during these workshops that Chinese research institutions joined their efforts for the first time (Zhang, 2008). From the outset, the organization of the workshop had a national focus, while it engaged with international networks. Synthetic biologists, including Drew Endy from MIT and Christina Smolke from Stanford University, USA, were invited.
Later that year, another training camp designed for iGEM tutors was organized in Tianjin and included delegates from Australia and Japan (Zhang, 2008).

Through years of organizing iGEM-related conferences and workshops, Chinese universities have strengthened their presence at this international competition; in 2007, four teams from China participated. During the 2010 competition, 11 teams from nine universities in six provinces/municipalities took part. Meanwhile, recruiting, training and supervising iGEM teams has become an important institutional programme at an increasing number of universities.

It might be easy to interpret the enthusiasm for the iGEM as a passion for winning gold medals, as is conventionally the case with other international scientific competitions. This could be one motive for participating. Yet, training for iGEM has grown beyond winning the student awards and has become a key component of exchanges between Chinese researchers and the international community (Ding, 2010). Many of the Chinese scientists interviewed recounted the way in which their initial involvement in synthetic biology overlapped with their tutoring of iGEM teams. One associate professor at Tianjin University, who wrote the first undergraduate textbook on synthetic biology in China, half-jokingly said, “I mainly learnt [synthetic biology] through tutoring new iGEM teams every year.”

Participation in such contests has not only helped to popularize synthetic biology in China, but has also influenced local research culture. One example of this is that the iGEM competition uses standard biological parts (BioBricks), and new BioBricks are submitted to an open registry for future sharing. A corresponding celebration of open-source can also be traced to within the Chinese synthetic-biology community.
In contrast to the conventional perception that the Chinese scientific sector consists of a “very large number of ‘innovative islands'” (OECD, 2007; Zhang, 2010), communication between domestic teams is quite active. In addition to the formally organized national training camps and conferences, students themselves organize a nationwide, student-only workshop at which to informally test their ideas.

More interestingly, when the author asked one team whether there are any plans to set up a ‘national bank' for hosting designs from Chinese iGEM teams, in order to benefit domestic teams, both the tutor and team members thought this proposal a bit “strange”. The team leader responded, “But why? There is no need. With BioBricks, we can get any parts we want quite easily. Plus, it directly connects us with all the data produced by iGEM teams around the world, let alone in China. A national bank would just be a small-scale duplicate.”

From the beginning, interest in the development of synthetic biology in China has been focused on collective efforts within and across national borders. In contrast to conventional critiques of the Chinese scientific community's “inclination toward competition and secrecy, rather than openness” (Solo & Pressberg, 2007; OECD, 2007; Zhang, 2010), there seems to be a new outlook emerging from the participation of Chinese universities in the iGEM contest. Of course, that is not to say that the BioBricks model is without problems (Rai & Boyle, 2007), or to exclude inputs from other institutional channels. Yet, continuous grass-roots exchanges, such as the undergraduate-level competition, might be as instrumental as formal protocols in shaping research culture.
The indifference of Chinese scientists to a ‘national bank' seems to suggest that the distinction between the ‘national' and ‘international' scientific communities has become blurred, if not insignificant. However, frequent cross-institutional exchanges and the domestic organization of iGEM workshops seem to have nurtured the development of a national synthetic-biology community in China, in which grass-roots scientists are comfortable relying on institutions with a cosmopolitan character—such as the BioBricks Foundation—to facilitate local research. To some extent, one could argue that in the eyes of Chinese scientists, national and international resources are one accessible global pool. This grass-roots interest in incorporating local and global advantages is not limited to student training and education, but is also exhibited in evolving funding and regulatory debates.

In the development of research funding for synthetic biology, a similar bottom-up consolidation of national and global resources can also be observed. As noted earlier, synthetic-biology research in China is in its infancy. A popular view is that China has the potential to lead this field, as it has strong support from related disciplines. In terms of genome sequencing, DNA synthesis, genetic engineering, systems biology and bioinformatics, China is “almost at the same level as developed countries” (Pan, 2008), but synthetic-biology research has only been carried out “sporadically” (Pan, 2008; Huang, 2009). There are few nationally funded projects and there is no discernible industrial involvement (Yang, 2010). Most existing synthetic-biology research is led by universities or institutions that are affiliated with the Chinese Academy of Sciences (CAS). As one CAS academic commented, “there are many Chinese scientists who are keen on conducting synthetic-biology research.
But no substantial research has been launched nor has long-term investment been committed.”

The initial undertaking of academic research on synthetic biology in China has therefore benefited from transnational initiatives. The first synthetic-biology project in China, launched in October 2006, was part of the ‘Programmable Bacteria Catalyzing Research' (PROBACTYS) project, funded by the Sixth Framework Programme of the European Union (Yang, 2010). A year later, another cross-border collaborative effort led to the establishment of the first synthetic-biology centre in China: the Edinburgh University–Tianjin University Joint Research Centre for Systems Biology and Synthetic Biology (Zhang, 2008).

There is also a comparable commitment to national research coordination. A year after China's first participation in iGEM, the 2008 Xiangshan conference focused on domestic progress. From 2007 to 2009, only five projects in China received national funding, all of which came from the National Natural Science Foundation of China (NSFC). This funding totalled ¥1,330,000 (approximately £133,000; www.nsfc.org), which is low in comparison to the £891,000 that was given in the UK for seven Networks in Synthetic Biology in 2007 alone (www.bbsrc.ac.uk).

One of the primary challenges in obtaining funding identified by the interviewees is that, as an emerging science, synthetic biology is not yet appreciated by Chinese funding agencies. After the Xiangshan conference, the CAS invited scientists to a series of conferences in late 2009. According to the interviewees, one of the main outcomes was the founding of a ‘China Synthetic Biology Coordination Group'; an informal association of around 30 conference delegates from various research institutions. This group formulated a ‘regulatory suggestion' that they submitted to MOST, which stated the necessity and implications of supporting synthetic-biology research.
In addition, leading scientists such as Chunting Zhang and Huanming Yang—President of the Beijing Genomics Institute (BGI), who co-chaired the Beijing Institutes of Life Science (BILS) conferences—have been active in communicating with government institutions. The initial results of this can be seen in the MOST 2010 Application Guidelines for the National Basic Research Program, in which synthetic biology was included for the first time among the ‘key supporting areas' (MOST, 2010). Meanwhile, in 2010, the NSFC allocated ¥1,500,000 (approximately £150,000) to synthetic-biology research, which is more than the total funding the area had received in the previous three years.

The search for funding further demonstrates the dynamics between national and transnational resources. Chinese R&D initiatives have to deal with the fact that scientific venture capital and non-governmental research charities are underdeveloped in China. In contrast to the EU or the USA, government institutions in China, such as the NSFC and MOST, are the main and sometimes only domestic sources of funding. Yet, transnational funding opportunities facilitate the development of synthetic biology by alleviating local structural and financial constraints, and further integrate the Chinese scientific community into international research.

This is not a linear ‘going-global' process; it is important for Chinese scientists to secure and promote national and regional support. In addition, this alignment of national funding schemes with global research progress is similar to the iGEM experience, as it is being initiated through informal bottom-up associations between scientists, rather than by top-down institutional channels.

As more institutions have joined iGEM training camps and participated in related conferences, a shared interest among the Chinese scientific community in developing synthetic biology has become visible.
In late 2009, at the conference that founded the informal ‘coordination group', the proposition of integrating national expertise through a big-question approach emerged. According to one professor in Beijing—who was a key participant in the discussion at the time—this proposition of a nationwide synergy was not so much about ‘national pride' or an aim to develop a ‘Chinese' synthetic biology; it was about research practicality. She explained, “synthetic biology is at the convergence of many disciplines, computer modelling, nano-technology, bioengineering, genomic research etc. Individual researchers like me can only operate on part of the production chain. But I myself would like to see where my findings would fit in a bigger picture as well. It just makes sense for a country the size of China to set up some collective and coordinated framework so as to seek scientific breakthrough.”

From the first participation in the iGEM contest to the later exploration of funding opportunities and collective research plans, scientists have been keen to invite and incorporate domestic and international resources, to keep up with global research. Yet, there are still regulatory challenges to be met.

The reputation of “the ‘wild East' of biology” (Dennis, 2002) is associated with China's previous inattention to ethical concerns about the life sciences, especially in embryonic-stem-cell research. Similarly, synthetic biology creates few social concerns in China. Public debate is minimal and most media coverage has been positive. Synthetic biology is depicted as “a core in the fourth wave of scientific development” (Pan, 2008) or “another scientific revolution” (Huang, 2009).
Whilst recognizing its possible risks, mainstream media believe that “more people would be attracted to doing good while making a profit than doing evil” (Fang & He, 2010). In addition, biosecurity and biosafety training in China are at an early stage, with few mandatory courses for students (Barr & Zhang, 2010). The four leading synthetic-biology teams I visited regarded the general biosafety regulations that apply to microbiology laboratories as sufficient for synthetic biology. In short, with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual' manner.

Yet, fieldwork suggests that, in contrast to this previous insensitivity to global ethical concerns, the synthetic-biology community in China has taken a more proactive approach to engaging with international debates. It is important to note that there are still no synthetic-biology-specific administrative guidelines or professional codes of conduct in China. However, Chinese stakeholders participate in building a ‘mutual inclusiveness' between global and domestic discussions.

One of the most recent examples of this is a national conference on the ethical and biosafety implications of synthetic biology, which was jointly hosted by the China Association for Science and Technology, the Chinese Society of Biotechnology and the Beijing Institutes of Life Science, CAS, in Suzhou in June 2010. The discussion was open to the mainstream media. The debate was not simply a recapitulation of Western worries, such as playing god, potential dual-use or ecological containment. It also focused on the particular concerns of developing countries about how to avoid further widening the developmental gap with advanced countries (Liu, 2010).

In addition to general discussions, there are also sustained transnational communications.
For example, one of the first three projects funded by the NSFC was a three-year collaboration on biosafety and risk-assessment frameworks between the Institute of Botany at CAS and the Austrian Organization for International Dialogue and Conflict Management (IDC).

Chinese scientists are also keen to increase their involvement in the formulation of international regulations. The CAS and the Chinese Academy of Engineering are engaged with their peer institutions in the UK and the USA to “design more robust frameworks for oversight, intellectual property and international cooperation” (Royal Society, 2009). It is too early to tell what influence China will achieve in this field. Yet, the changing image of the country from an unconcerned wild East to a partner in lively discussions signals a new dynamic in the global development of synthetic biology.

From self-organized participation in iGEM to bottom-up funding and governance initiatives, two features are repeatedly exhibited in the emergence of synthetic biology in China: global resources and international perspectives complement national interests; and the national and cosmopolitan research strengths are mostly instigated at the grass-roots level. During the process of introducing, developing and reflecting on synthetic biology, many formal or informal, provisional or long-term alliances have been established from the bottom up. Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China.

However, the inputs of different social actors have not led to disintegration of the field into an array of individualized pursuits, but have transformed it into collective synergies, or the big-question approach.
Underlying the diverse efforts of Chinese scientists is a sense of ‘inclusiveness', or the idea of bringing together previously detached research expertise. Thus, the big-question strategy cannot be interpreted as just another nationally organized agenda in response to global scientific advancements. Instead, it represents a more intricate development path corresponding to how contemporary research evolves on the ground.

In comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage. Government input—such as the potential stewardship of MOST in directing a big-question approach or long-term funding—remains important; the scientists who were interviewed expend a great deal of effort to attract governmental participation. Yet, China's experience highlights that the key to comprehending regional scientific capacity lies not so much in what the government can do, but rather in what is taking place in laboratories. It is important to remember that Chinese iGEM victories, collaborative synthetic-biology projects and ethical discussions all took place before the government became involved. Thus, to appreciate fully the dynamics of an emerging science, it might be necessary to focus on what is formulated from the bottom up.

The experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary research. More specifically, it is a result of the commitment of Chinese scientists to incorporating national and international resources, actors and social concerns.
For practical reasons, the national organization of research, such as through the big-question approach, might still have an important role. However, synthetic biology might not only be a mosaic of national agendas, but also be shaped by transnational activities and scientific resources. What Chinese scientists will collectively achieve remains to be seen. Yet, the emergence of synthetic biology in China might be indicative of a new paradigm for how research practices can be introduced, normalized and regulated.

6.
7.
8.
Greener M (2008) EMBO reports 9(11): 1067–1069
A consensus definition of life remains elusive.

In July this year, the Phoenix Lander robot—launched by NASA in 2007 as part of the Phoenix mission to Mars—provided the first irrefutable proof that water exists on the Red Planet. “We've seen evidence for this water ice before in observations by the Mars Odyssey orbiter and in disappearing chunks observed by Phoenix […], but this is the first time Martian water has been touched and tasted,” commented lead scientist William Boynton from the University of Arizona, USA (NASA, 2008). The robot's discovery of water in a scooped-up soil sample increases the probability that there is, or was, life on Mars.

Meanwhile, the Darwin project, under development by the European Space Agency (ESA; Paris, France; www.esa.int/science/darwin), envisages a flotilla of four or five free-flying spacecraft to search for the chemical signatures of life in 25 to 50 planetary systems. Yet, in the vastness of space, to paraphrase the British astrophysicist Arthur Eddington (1882–1944), life might be not only stranger than we imagine, but also stranger than we can imagine. The limits of our current definitions of life raise the possibility that we would not be able to recognize an extra-terrestrial organism.

Back on Earth, molecular biologists—whether deliberately or not—are empirically tackling the question of what is life. Researchers at the J Craig Venter Institute (Rockville, MD, USA), for example, have synthesized an artificial bacterial genome (Gibson et al, 2008). Others have worked on ‘minimal cells' with the aim of synthesizing a ‘bioreactor' that contains the minimum of components necessary to be self-sustaining, reproduce and evolve. Some biologists regard these features as the hallmarks of life (Luisi, 2007). However, to decide who is first in the ‘race to create life' requires a consensus definition of life itself.
“A definition of the precise boundary between complex chemistry and life will be critical in deciding which group has succeeded in what might be regarded by the public as the world’s first theology practical,” commented Jamie Davies, Professor of Experimental Anatomy at the University of Edinburgh, UK.

For most biologists, defining life is a fascinating, fundamental, but largely academic question. It is, however, crucial for exobiologists looking for extra-terrestrial life on Mars, Jupiter’s moon Europa, Saturn’s moon Titan and on planets outside our solar system.

In their search for life, exobiologists base their working hypothesis on the only example to hand: life on Earth. “At the moment, we can only assume that life elsewhere is based on the same principles as on Earth,” said Malcolm Fridlund, Secretary for the Exo-Planet Roadmap Advisory Team at the ESA’s European Space Research and Technology Centre (Noordwijk, The Netherlands). “We should, however, always remember that the universe is a peculiar place and try to interpret unexpected results in terms of new physics and chemistry.”

The ESA’s Darwin mission will, therefore, search for life-related gases such as carbon dioxide, water, methane and ozone in the atmospheres of other planets. On Earth, the emergence of life altered the balance of atmospheric gases: living organisms produced all of the Earth’s oxygen, which now accounts for one-fifth of the atmosphere. “If all life on Earth was extinguished, the oxygen in our atmosphere would disappear in less than 4 million years, which is a very short time as planets go—the Earth is 4.5 billion years old,” Fridlund said.
He added that organisms present in the early phases of life on Earth produced methane, which altered the atmospheric composition compared with a planet devoid of life.

Although the Darwin project will use a pragmatic and specific definition of life, biologists, philosophers and science-fiction authors have devised numerous other definitions—none of which is entirely satisfactory. Some are based on basic physiological characteristics: a living organism must feed, grow, metabolize, respond to stimuli and reproduce. Others invoke metabolic definitions that define a living organism as having a distinct boundary—such as a membrane—which facilitates interaction with the environment and transfers the raw materials needed to maintain its structure (Wharton, 2002). The minimal cell project, for example, defines cellular life as “the capability to display a concert of three main properties: self-maintenance (metabolism), reproduction and evolution. When these three properties are simultaneously present, we will have a full fledged cellular life” (Luisi, 2007). These concepts regard life as an emergent phenomenon arising from the interaction of non-living chemical components.

Cryptobiosis—hidden life, also known as anabiosis—and bacterial endospores challenge the physiological and metabolic elements of these definitions (Wharton, 2002). When the environment changes, certain organisms are able to undergo cryptobiosis—a state in which their metabolic activity either ceases reversibly or is barely discernible. Cryptobiosis allows the larvae of the African fly Polypedilum vanderplanki to survive desiccation for up to 17 years and temperatures ranging from −270 °C (liquid helium) to 106 °C (Watanabe et al, 2002). It also allows the cysts of the brine shrimp Artemia to survive desiccation, ultraviolet radiation, extremes of temperature (Wharton, 2002) and even toyshops, which sell the cysts as ‘sea monkeys’.
Organisms in a cryptobiotic state show characteristics that vary markedly from what we normally consider to be life, although they are certainly not dead. “[C]ryptobiosis is a unique state of biological organization”, commented James Clegg, from the Bodega Marine Laboratory at the University of California (Davis, CA, USA), in an article in 2001 (Clegg, 2001). Bacterial endospores, which are the “hardiest known form of life on Earth” (Nicholson et al, 2000), are able to withstand almost any environment—perhaps even interplanetary space. Microbiologists have isolated endospores of strict thermophiles from cold lake sediments and revived spores from samples some 100,000 years old (Nicholson et al, 2000).

Another problem with definitions of life is that they can expand beyond biology. The minimal cell project, for example, in common with most modern definitions of life, encompasses the ability to undergo Darwinian evolution (Wharton, 2002). “To be considered alive, the organism needs to be able to undergo extensive genetic modification through natural selection,” said Professor Paul Freemont from Imperial College London, UK, whose research interests encompass synthetic biology. But the virtual ‘organisms’ in computer simulations such as the Game of Life (www.bitstorm.org/gameoflife) and Tierra (http://life.ou.edu/tierra) also exhibit life-like characteristics, including growth, death and evolution—similar to robots and other artificial systems that attempt to mimic life (Guruprasad & Sekar, 2006).
“At the moment, we have some problems differentiating these approaches from something biologists consider [to be] alive,” Fridlund commented.

Both the genetic code and all computer-programming languages are means of communicating large quantities of codified information, which adds another element to a comprehensive definition of life. Guenther Witzany, an Austrian philosopher, has developed a “theory of communicative nature” that, he claims, differentiates biotic from abiotic matter. “Life is distinguished from non-living matter by language and communication,” Witzany said. According to his theory, RNA and DNA use a ‘molecular syntax’ to make sense of the genetic code in a manner similar to language. This paragraph, for example, could contain the same words in a random order; it would be meaningless without syntactic and semantic rules. “The RNA/DNA language follows syntactic, semantic and pragmatic rules which are absent in [a] random-like mixture of nucleic acids,” Witzany explained.

Yet, successful communication requires both a speaker using the rules and a listener who is aware of and can understand the syntax and semantics. For example, cells, tissues, organs and organisms communicate with each other to coordinate and organize their activities; in other words, they exchange signals that contain meaning. Noradrenaline binding to a β-adrenergic receptor in the bronchi communicates a signal that says ‘dilate’. “If communication processes are deformed, destroyed or otherwise incorrectly mediated, both coordination and organisation of cellular life is damaged or disturbed, which can lead to disease,” Witzany added.
“Cellular life also interprets abiotic environmental circumstances—such as the availability of nutrients, temperature and so on—to generate appropriate behaviour.”

Nonetheless, even definitions of life that include all the elements mentioned so far might still be incomplete. “One can make a very complex definition that covers life on the Earth, but what if we find life elsewhere and it is different? My opinion, shared by many, is that we don’t have a clue of how life arose on Earth, even if there are some hypotheses,” Fridlund said. “This underlies many of our problems defining life. Since we do not have a good minimum definition of life, it is hard or impossible to find out how life arose without observing the process. Nevertheless, I’m an optimist who believes the universe is understandable with some hard work and I think we will understand these issues one day.”

Both synthetic biology and research on organisms that live in extreme conditions allow biologists to explore biological boundaries, which might help them to reach a consensus minimum definition of life, and to understand how it arose and evolved. Life is certainly able to flourish in some remarkably hostile environments. Thermus aquaticus, for example, is metabolically optimal in the hot springs of Yellowstone National Park at temperatures between 75 °C and 80 °C. Another extremophile, Deinococcus radiodurans, has evolved a highly efficient biphasic system to repair radiation-induced DNA breaks (Misra et al, 2006) and, as Fridlund noted, “is remarkably resistant to gamma radiation and even lives in the cooling ponds of nuclear reactors.”

In turn, synthetic biology allows for a detailed examination of the elements that define life, including the minimum set of genes required to create a living organism.
Researchers at the J Craig Venter Institute, for example, have synthesized a 582,970-base-pair Mycoplasma genitalium genome containing all the genes of the wild-type bacterium, except one that they disrupted to block pathogenicity and allow for selection. ‘Watermarks’ at intergenic sites that tolerate transposon insertions identify the synthetic genome, which would otherwise be indistinguishable from the wild type (Gibson et al, 2008).

Yet, as Pier Luigi Luisi from the University of Roma in Italy remarked, even M. genitalium is relatively complex. “The question is whether such complexity is necessary for cellular life, or whether, instead, cellular life could, in principle, also be possible with a much lower number of molecular components”, he said. After all, life probably did not start with cells that already contained thousands of genes (Luisi, 2007).

To investigate further the minimum number of genes required for life, researchers are using minimal cell models: synthetic genomes that can be included in liposomes, which themselves show some life-like characteristics. Certain lipid vesicles are able to grow, divide and grow again, and can include polymerase enzymes to synthesize RNA from external substrates, as well as functional translation apparatuses, including ribosomes (Deamer, 2005).

However, the requirement that an organism be subject to natural selection to be considered alive could prove to be a major hurdle for current attempts to create life. As Freemont commented: “Synthetic biologists could include the components that go into a cell and create an organism [that is] indistinguishable from one that evolved naturally and that can replicate […] We are beginning to get to grips with what makes the cell work.
Including an element that undergoes natural selection is proving more intractable.”

John Dupré, Professor of Philosophy of Science and Director of the Economic and Social Research Council (ESRC) Centre for Genomics in Society at the University of Exeter, UK, commented that synthetic biologists still approach the construction of a minimal organism with certain preconceptions. “All synthetic biology research assumes certain things about life and what it is, and any claims to have ‘confirmed’ certain intuitions—such as life is not a vital principle—aren’t really adding empirical evidence for those intuitions. Anyone with the opposite intuition may simply refuse to admit that the objects in question are living,” he said. “To the extent that synthetic biology is able to draw a clear line between life and non-life, this is only possible in relation to defining concepts brought to the research. For example, synthetic biologists may be able to determine the number of genes required for minimal function. Nevertheless, ‘what counts as life’ is unaffected by minimal genomics.”

Partly because of these preconceptions, Dan Nicholson, a former molecular biologist now working at the ESRC Centre, commented that synthetic biology adds little to the understanding of life already gained from molecular biology and biochemistry. Nevertheless, he said, synthetic biology might allow us to go boldly into realms of biological possibility where evolution has not gone before.

An engineered synthetic organism could, for example, express novel amino acids, proteins, nucleic acids or vesicular forms. A synthetic organism could use pyranosyl-RNA, which produces a stronger and more selective pairing system than the naturally occurring furanosyl-RNA (Bolli et al, 1997).
Furthermore, the synthesis of proteins that do not exist in nature—so-called never-born proteins—could help scientists to understand why evolutionary pressures only selected certain structures.

As Luisi remarked, the ratio between the number of theoretically possible proteins containing 100 amino acids and the real number present in nature is close to the ratio between the space of the universe and the space of a single hydrogen atom, or the ratio between all the sand in the Sahara Desert and a single grain. Exploring never-born proteins could, therefore, allow synthetic biologists to determine whether particular physical, structural, catalytic, thermodynamic and other properties maximized the evolutionary fitness of natural proteins, or whether the current protein repertoire is predominantly the result of chance (Luisi, 2007).

“Synthetic biology also could conceivably help overcome the ‘n = 1 problem’—namely, that we base biological theorising on terrestrial life only,” Nicholson said. “In this way, synthetic biology could contribute to the development of a more general, broader understanding of what life is and how it might be defined.”

No matter the uncertainties, researchers will continue their attempts to create life in the test tube—it is, after all, one of the greatest scientific challenges. Whether or not they succeed will depend partly on the definition of life that they use, though in any case, the research should yield numerous insights that are beneficial to biologists generally. “The process of creating a living system from chemical components will undoubtedly offer many rich insights into biology,” Davies concluded. “However, the definition will, I fear, reflect politics more than biology. Any definition will, therefore, be subject to a lot of inter-lab political pressure.
Definitions are also important for bioethical legislation and, as a result, reflect larger politics more than biology. In the final analysis, as with all science, deep understanding is more important than labelling with words.”

9.
Rull V 《EMBO reports》2011,12(2):103-106
Capitalism and sustainable development are mutually exclusive. To protect the environment we need to develop alternative economic systems, even if some predict the next man-made mass extinction is already inevitable.

Humans are exploiting the Earth in an unsustainable manner, which is accelerating both environmental degradation and loss of biodiversity. Moreover, owing to global climate change, the rates of deterioration and extinction will probably increase in the near future. The scientific community has been highly sensitive to this alarming development: it has increased the number of baseline and ecological studies of human impacts on the biosphere and proposed various strategies to alleviate the environmental and biotic crisis. This has triggered vivid discussions about the potential risks and benefits of measures such as adaptation and/or mitigation actions, ecosystem restoration, the assisted migration of species or triage conservation (Mooney, 2010).

One constant in these proposals is a sense of urgency, as the pace of change seems to outstrip our capacity to react to it. Several crucial issues limit this capacity: the incomplete inventory of biodiversity—we still do not know how many and which species live on Earth; our deficient understanding of the relationships between biodiversity and ecosystem functioning; and the inertia of the planet itself—even if we immediately stopped using fossil fuels and reduced CO2 emissions, global climate change would continue for decades or even centuries (Matthews & Weaver, 2010).
Finally, but maybe most damaging, our social and economic systems are too recalcitrant to even acknowledge, let alone abandon or reduce, their destructive practices.

A popular remedy for the deterioration of nature is ‘sustainability’—commonly defined as meeting “the needs of the present without compromising the ability of future generations to meet their own needs” (WCDE, 1987)—which would harmonize human development and the conservation of nature. This classical notion of sustainable development argues implicitly for caring for our natural environment, because it is the primary provider of resources to sustain human life. Elkington (2002) introduced a social element to this by recognizing that sustainable development involves “the simultaneous pursuit of economic prosperity, environmental quality and social equity”. Baumgärtner & Quaas (2010) define sustainability as “a matter of justice at three levels: between humans of the same generation, between humans of different generations, and between humans and nature”. Many other forms, definitions and interpretations of sustainability exist—strong, weak, technological, economical, social, environmental, ecological, and so on—but, in all cases, the ultimate objective of sustainability is to preserve biodiversity and ecological functions for the benefit of present and future human generations. In short, our concern for nature is essentially anthropocentric (Rull, 2010a).

The concept of sustainability has become the paradigm for conservation and environmental studies, to the extent that many consumer products, technologies and developments now claim to be ‘sustainable’, whatever that means. The same happens at the popular level, as the term ‘sustainable’ is often considered a synonym of good, whereas ‘unsustainable’ is used in a pejorative sense for what is considered intrinsically bad.
Curiously, these notions are widespread in different societal sectors—politicians, economists, scientists, journalists, the general public—independent of their social condition and political and economic orientation. The terms ‘sustainable’ and ‘sustainability’ are in danger of losing their original meaning to become merely rhetorical elements or advertising slogans.

Our understanding and definition of nature conservation is largely guided by our concept of ‘naturalness’. But nature has always been in flux; after billions of years of biological evolution and ecological change—with and without human involvement—it is impossible to define the ‘natural’ state of the environment. In addition, human actions have an impact on ecosystems; thus, the maintenance of a pristine state of the Earth—however one would define this—does not seem to be compatible with basic human needs. A more practical approach to sustainability and the preservation of a ‘natural’ state would be to require that any modifications of nature leave ecosystems as diverse and ‘healthy’ as possible. More pragmatically, the best we could hope to achieve, even from an ecocentric point of view, is to stop further ‘spoiling’ of nature and preserve the current ‘unnatural’ state.

Given this inherent conflict between conservation and human needs, conservation organizations struggle to propose practices that “balance the needs of people with the needs of the planet that supports us” (IUCN, http://www.iucn.org), or “protect Earth’s most important natural places for you and future generations” (The Nature Conservancy, http://www.nature.org), in order to “build a future where people live in harmony with nature” (WWF, http://www.wwf.org). In other words, conservationists advocate sustainable development of human societies, but their activities can only be palliative.
Sustainability will only be attained after a drastic reorientation towards steady-state or de-growth economic models (Lawn, 2010; Schneider et al, 2010), which would involve profound changes not only for societies, but also for every individual.

The main obstacles to such broad socioeconomic change towards sustainable development are the high number of environmental problems that clamour for attention (seeing trees but not the forest) and the intransigence of social and economic systems. It is naive to pretend that representatives of the dominant economic and political systems will renounce capitalism; this has been repeatedly demonstrated at Kyoto and Copenhagen, where the international community was unable to agree on even small changes to slow global climate change.

Even worse, scientists and conservationists could become trapped in the very system that they are trying to change. A good example of this risk comes from attempts to assign monetary value to biodiversity and ecosystem services and to use market rules to manage them (Rull, 2010a). A simple economic analysis is enough to demonstrate the fallacy of this economic approach to sustainability, even from a pragmatic perspective.

The appeal of this concept is that any ecosystem service could be submitted to a cost–benefit analysis that would incorporate natural capital into current economic models. It also makes possible a general definition of comprehensive wealth, which includes not only reproducible capital such as buildings, machinery, roads and so on, but also natural capital. In this context, sustainable development has been redefined as ‘the accumulation of comprehensive wealth’, which requires that each generation should bequeath the next at least as large a productive base, including both reproducible and natural capital, as it has inherited (Dasgupta, 2010).

However, submitting natural resources to economic analysis does not guarantee sustainable practices.
The first thing to bear in mind is that comprehensive wealth is finite and limited by the carrying capacity of the Earth. If certain planetary systems—such as climate, ocean acidity, freshwater and biodiversity—change beyond a certain limit, this could trigger nonlinear and catastrophic consequences on a global scale (Rockström et al, 2009). Second, the components of comprehensive wealth depend on each other: for example, building a road through a forest is done at the expense of the forest, that is, natural capital. Building the road might increase comprehensive wealth but it has a price: natural degradation, including resource exhaustion, loss of biodiversity and increased pollution. If human growth continues, these costs could become so high that systems—both ecological and economic—collapse. Sustainable practices could therefore aim to minimize the loss of natural capital, but if human development continues unabated, the carrying capacity of the Earth will nonetheless be reached sooner or later.

Rockström et al (2009) argue that humanity has already transgressed three of nine critical planetary boundaries, namely climate change, biodiversity loss and interference with the nitrogen cycle through industrial and agricultural fixation of atmospheric nitrogen, the combustion of fossil fuels and biomass, and the pollution of waterways and coastal zones. This means that nature is subsidizing the capitalist mode of development. As a quantitative estimate of these natural costs, the LPI (Living Planet Index) of global biodiversity has declined by nearly 35% in the past 30 years (WWF, 2008); hence, the cost during this period has been about 1.2% of species per year.

Even if capitalism, as the dominant economic model, incorporates natural capital into its cost–benefit analysis, nature still loses out; unlimited human growth—the central tenet of capitalism—and sustainable development are incompatible (Rull, 2010b).
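The back-of-envelope rate quoted above (a 35% LPI decline over 30 years, about 1.2% per year) can be checked in a few lines. This is a sketch of the arithmetic only; the 35%/30-year figures come from the cited WWF report, while the choice between a simple linear average and a compound annual rate is an assumption added here for illustration:

```python
# Living Planet Index: roughly a 35% decline over 30 years (WWF, 2008).
total_decline = 0.35
years = 30

# Linear average, as quoted in the text: about 1.2% of the index per year.
linear_rate = total_decline / years
print(round(linear_rate * 100, 2))    # 1.17 (% per year)

# Compound annual rate, if the loss were proportional each year.
compound_rate = 1 - (1 - total_decline) ** (1 / years)
print(round(compound_rate * 100, 2))  # 1.43 (% per year)
```

The linear figure matches the article's "about 1.2% per year"; the compound figure shows the estimate is not very sensitive to that modelling choice.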
Some alternative modes of human development exist (Costanza, 2009; Schneider et al, 2010), but these also rely on sustainability.

How then would nature benefit from sustainability? In other words, how would sustainability guarantee nature conservation? To answer this question, we must realize what nature is, beyond its role in fulfilling human needs. Our planet has mostly existed without humans since the first forms of life appeared around 3.8 billion years ago. Homo sapiens appeared around 200,000 years ago (Tattersall & Schwartz, 2009), but it was only during the past 10,000 years that humans began to change their environment on an increasing scale. Before this time, biodiversity gains and losses were the results of natural evolution; extinction patterns were more stochastic and were not determined by the needs of one species. The key question is whether humankind will endure, or be just another chapter in the history of the Earth.

Despite claims that cultural evolution has replaced biological evolution in humans, natural selection is still shaping our biology in response to environmental change. Humans in their current form are therefore not necessarily the last word in evolutionary terms, nor is there a guarantee that Homo will be around in the future (Rull, 2009). If we take a strictly anthropocentric view and only worry for future humans, the preservation of the planet beyond the next few generations should not be a matter of concern. However, if we worry for the fate of the biosphere in general, nature conservation would imply not only the preservation of the current status, but also its safe evolutionary continuity.

From an evolutionary perspective, sustainability is therefore not enough, given its intrinsic anthropocentric focus.
Still, it would be a significant improvement on the unfettered exploitation of natural resources. To progress from sustainability to nature conservation would require a less anthropocentric and more evolutionary perspective. This might look like renouncing our status as the assumedly superior species on Earth but, as intelligent creatures, we should be able to embrace conservation of nature. So far, we have used our intelligence to try to understand our own existence, prolong our lives and develop new technologies to rule the Earth. When it comes to environmental issues, however, we are just stupid (Meffe, 2009). We must realize that the ‘real world’ is not the transitory socioeconomic scenario in which we live, but the Earth that is evolving at a pace and magnitude that exceeds our capacity to understand and appreciate it. So far, proponents of sustainability have emphasized social equity and justice for future generations, whereas nature is still viewed as a service provider that should be maintained for practical reasons.

To make the argument more complicated, evolution does not seem to be a linear process. On the basis of the geological and palaeontological record, the palaeontologist Peter Ward (2009) has proposed the Medea hypothesis: life—rather than contributing to the habitable condition of the Earth as proposed by the Gaia hypothesis (Lovelock, 1979)—can become self-destructive and has caused nearly all the mass extinctions that have occurred since the origin of life. So far, there have been five mass extinctions in the history of life on Earth (Courtillot, 1999).
The last one, 65 million years ago—the demise of the dinosaurs—was probably triggered, at least in part, by a meteorite impact and is the only exception to the actions of Medea.

The other mass extinctions were probably the result of biology (Ward, 2009): the proliferation of methane-producing microbes in the early days, which poisoned the biosphere and triggered a significant temperature decrease; the oxygenation of the atmosphere, caused by the evolution of photosynthetic organisms; a global glaciation of the planet (the Snowball Earth hypothesis), probably caused by a decrease in atmospheric greenhouse gases; and the eutrophication of coastal waters.

The sixth mass extinction might be in progress, manifested in the ongoing loss of biodiversity caused by human activities (Wake & Vredenburg, 2008). This time, we would be the executors of Medea. The geological record, however, shows that each mass extinction was followed by a spectacular burst of diversification, which created new species. It seems that Gaia takes over after each of Medea’s annihilations; evolution on our planet is therefore imagined as the result of a capricious game between the goddess of Earth (Gaia) and the killer enchantress Medea.

If Ward is right, there is little we can do to avoid the next catastrophic extinction and we can only delay it for the sake of a few generations. This could lead to a contemplative attitude, given the inevitability of the looming destruction, combined with some efforts to preserve certain species for the sake of temporary human needs and pleasure. After all, Gaia will take care of life again. If this cycle is the ‘natural’ state, more radical ecocentrists should accept it: we have no reason to prefer the current state of life to the future result of Gaia’s creativity after the inevitable extinction.
To maintain the status quo would be, according to this view, an unnatural attitude.

In conclusion, any proposals aiming to achieve sustainability, owing to their intrinsic anthropocentric nature, can help to promote intra- and inter-generational social justice, but they are not sufficient to achieve real nature conservation. This goal would require even more profound societal change than is acknowledged. Replacing capitalism with a new economic system is necessary for sustainability, but real nature conservation also requires a less anthropocentric attitude and the adoption of an evolutionary perspective. Scientists would have a key role in triggering and guiding these changes, provided that they are able to analyse and communicate the appropriate knowledge and maintain their independence from political and economic influences. Scientists must also leave their laboratories and begin to interact with society on a larger scale (Johns, 2009). One relevant lesson is that natural systems have their own dynamics, guided by evolution, so there is no single ‘natural’ state as a preferred conservation target. Naturalness, on the contrary, is constant change.

In the light of the long-term cyclical nature of destruction and creation, it could become a frustrating exercise to argue for conservation, given that the next major extinction and subsequent rise of a different biosphere is unavoidable. However, much remains to be done. Even if the cataclysm is inevitable, a reasonable target for conservation is to delay it as much as possible by passing on the responsibility to forces and processes beyond human control, biotic or not. In other words: let the next major extinction event be a natural one.

10.
Samuel Caddick 《EMBO reports》2008,9(12):1174-1176

11.
L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the Endless Frontier, and it has become the underlying rationale for public support and funding of science.

However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—has spawned the development both of measures to assess the quality of scientific research itself and of measures to determine the societal impact of research. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research.
The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so: they are harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex's Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.

The problem actually begins with defining the ‘societal impact of research'. A series of different concepts has been introduced: ‘third-stream activities' [4], ‘societal benefits' or ‘societal quality' [5], ‘usefulness' [6], ‘public values' [7], ‘knowledge transfer' [8] and ‘societal relevance' [9, 10]. 
Yet each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.

In this context, ‘societal benefits' refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits' are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits' add to the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits' increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].

Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14]. 
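As an aside on the bibliometric measures contrasted here with societal impact measures, the h index mentioned above has a simple operational definition: it is the largest h such that a researcher has at least h papers with at least h citations each. A minimal sketch, not tied to any particular citation database:

```python
def h_index(citations):
    """Return the h index for a list of per-paper citation counts:
    the largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # this paper still supports an h of `rank`
            h = rank
        else:               # ranks only get larger, so we can stop here
            break
    return h

# e.g. h_index([10, 8, 5, 4, 3]) -> 4: four papers have at least 4 citations
```

The journal impact factor, by contrast, is a ratio (citations in a year to items published in the two preceding years), so it needs journal-level rather than author-level data.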
For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.

A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse or complex and contingent, and it is not clear what should be attributed to research or to other inputs. The third is the internationality problem, which arises as a result of the international nature of R&D and innovation and makes attribution virtually impossible. Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impact.

In addition, there are four other problems. First, it is hard to find experts to assess societal impact through peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17]. 
Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution's specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18].

Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20]. 
The recommendation from this consultation is that impact should be measured in a quantifiable way, and that expert panels should review narrative evidence in case studies supported by appropriate indicators [19, 21].

Many of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive, and more of a craft than a quantitative exercise, it seems to be the best way of measuring the complex phenomenon that is societal impact. The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art' [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19]. 
‘Mode 1' describes research governed by the academic interests of a specific community, whereas ‘Mode 2' is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is conducted in the context of application [19].

The new REF will also entail changes in budget allocations: in the evaluation of a research unit for the purpose of allocations, the societal impact dimension will carry a weight of 20% [19]. The final REF guidance contains lists of examples for different types of societal impact [24].

Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].

Yet the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science whereby scientists receive credits—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact. 
“The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar', that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19, 28, 29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance. 
Rymer has noted that, “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support of basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.

12.
Martinson BC 《EMBO reports》2011,12(8):758-762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic ‘reproduction rates' that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced, or concerns about unintended consequences that may result from such overproduction. 
Yet, recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level it becomes counter-productive.

Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.

The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA, as it is currently configured, supports a feedback system of institutional incentives that generates excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training. 
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.

The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to ‘play it safe' and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are likely able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding. 
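The precision argument above can be made concrete with a toy simulation (the model and all parameters here are hypothetical, not drawn from NIH data): give each application a true quality, let reviewers observe that quality plus random noise, and measure how often the truly best applications end up above the payline. When the payline falls well inside the reviewers' noise band, the overlap is far from complete.

```python
import random

def overlap_at_payline(n_apps=1000, payline=0.08, noise=0.15, trials=100, seed=0):
    """Toy model of noisy peer review. Each application has a true quality
    drawn uniformly from [0, 1); reviewers observe quality + Gaussian noise.
    Returns the average fraction of the truly top `payline` share of
    applications that also land above the payline as scored."""
    rng = random.Random(seed)
    k = int(n_apps * payline)  # number of applications that get funded
    total = 0.0
    for _ in range(trials):
        quality = [rng.random() for _ in range(n_apps)]
        scores = [q + rng.gauss(0.0, noise) for q in quality]
        by_quality = sorted(range(n_apps), key=lambda i: quality[i], reverse=True)
        by_score = sorted(range(n_apps), key=lambda i: scores[i], reverse=True)
        total += len(set(by_quality[:k]) & set(by_score[:k])) / k
    return total / trials
```

With zero noise the funded set matches the truly best set exactly; with noise comparable to the quality differences near the payline, a substantial share of the genuinely strongest applications go unfunded, which is the sense in which funding at a 5–10% payline becomes effectively random among excellent applications.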
In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.

My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.

In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.

Early last year, Francis Collins, Director of the NIH, published a PolicyForum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular. 
He notes the increasing stress in the biomedical research community after the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.”

Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows. 
He observes that “only a select few will go on to become independent research scientists in academia”, and argues that “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009).

His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work'”. He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.”

Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he does not locate the influence that concerns him within the broader network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce.

Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes. 
They would also ignore the fact that universities are generally not confronted with the externalities resulting from the overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural' funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on two interconnected assumptions: that more doctorally trained individuals, as one of the primary ‘outputs' or ‘products' of the university, are always better than fewer; and that, because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research.

However, it has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007). 
Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).

In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). He noted the reliance of AMCs on the relatively low-cost, but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society's investment in biomedical research and clinical care was continuously and sharply expanding.” From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J. 
de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963).

In May 2009, echoing some of Korn's observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control'” (Mervis, 2009).

Although the issue of what I will call the ‘academic birth rate' is the central concern of this analysis, the ‘academic end-of-life' also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: “if I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support the 100 people “I've trained […] to replace me” (Kaiser, 2008). 
His claim of 100 trainees might be astonishing, but it is perhaps more astonishing that his was the outlying perspective: the majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.

Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d'être.

The production of knowledge in science, particularly of the ‘revolutionary' variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task. 
In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).

Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the “New Innovator Award,” the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science. Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new ‘innovators' reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms.

Our studies on research integrity have been mostly oriented towards understanding how the influences within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010). 
My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness in terms of scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology of steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal' science? Would this not reduce the effectiveness of the institution of biomedical research?
I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research is one way to insure against it.

13.
Biopedagogy     
The world is changing fast and teachers are struggling to familiarize children and young people with the norms and values of society. Biopedagogy—the biology underlying pedagogical approaches—might provide some insights and guidance.

Humans are the only animals that are subject to cumulative cultural evolution. Biological evolution would be too slow to allow for the invention and continuous improvement of complex artefacts, both material and abstract. One of the mechanisms that makes it possible for humans to pass on social, cultural and technological advances over generations is teaching. Some scholars believe that teaching itself might be an exclusively human trait; others argue that teaching is more prevalent in nature if it is defined as cooperative behaviour that promotes learning, independent of mental states and cognitive intentions [1].

The science, art and profession of teaching is known as ‘pedagogy', and its biological basis might well be dubbed ‘biopedagogy'. Pedagogy embeds the learner in a particular culture by exposing the developing mind to cultural values and practices. Human teaching comprises two disparate but closely linked activities: education and instruction. To ‘educate' means to unfold the latent potential of the learner and to cultivate human nature by promoting impulses that conform to a culture and inhibiting those that contradict it. ‘Instruction' is the provision of knowledge and skills.

According to the ethologist Konrad Lorenz, human nature, which is a result of biological evolution, functions as the "inborn schoolmaster" [2] by both allowing and constraining the learning of all the products of cultural evolution. Lorenz received the Nobel Prize for his discovery of ‘imprinting': the irreversible, life-long fixation of a response to a situation encountered by an organism during development.
Imprinting is not specific to humans, but humans have evolved, along with the formation of the central nervous system, more sophisticated mental organs that we call the social brain, the group mind and the darwinian soul. As the physical development of an organism proceeds in stages, so does its mental development. Some of these stages represent critical periods that are particularly sensitive to imprinting. The rapid acquisition of the mother tongue, for instance, apparently represents such a specific period in human development.

According to Lorenz, during and shortly after puberty, humans are prone to a specific kind of imprinting by a culture and its abstract norms and values, driven by a need to become members of a reference group striving for a common ideal [2]. We might call this developmental stage the ‘second (or ideational) imprinting'. This imprinting presupposes a stable society with firmly established norms and values and, in turn, it serves to ensure that stability. The British neuroscientist Sarah-Jayne Blakemore [3] corroborated that the human brain undergoes protracted development and demonstrated that adolescence, in particular, represents a period during which the neuronal basis of the social brain reorganizes. This provides opportunities, but also imposes great responsibility on high-school and university teachers.

Jan Amos Comenius, a seventeenth-century Moravian educator, already suggested that the mastery of teaching consists in recognizing the stages of mental development in which a student is prepared and eager to learn stage-specific knowledge spontaneously. In his view, a teacher is more akin to a gardener, who gives plants care and nutrients to allow them to develop, grow and flourish. Comenius also anticipated the crucial role of positive emotions in pedagogy.
His commandment Schola ludus—The School of Play—expresses the conviction that teaching and learning can, and should, be associated with pleasure and joy for both teachers and students.

Human nature evolved during the Pleistocene, about 1.8 million to 10,000 years ago, to cope with a hunter–gatherer lifestyle. Yet modern humans live in, and adapt to, vastly different environments created by cultural evolution. Apparently, the human genetic endowment is highly versatile and encompasses abstract ‘cultural loci'. Such a cultural locus is an ‘empty slot' that functions only when it is filled with a meme from the cultural environment; memes would be akin to ‘alleles' that are specific to a particular cultural locus. This cross-talk between biology and culture makes humans symbolic animals—our social brain allows us to behave altruistically towards ‘symbolic kin' with whom we share no genetic relationship, and our group mind embraces not only our relatives and friends, but also our tribe or nation and possibly humanity as a whole. Education, and in particular imprinting, essentially determines the extent, quality and scope of this deployment.

A developing child has to pass through all the crucial periods of learning successfully to become a mature human—and humane—being. As Lorenz noted, once a sensitive period has elapsed and the opportunity to learn has been missed, the ability to catch up is considerably reduced or irreversibly lost. We live in a time when cultural values and norms are changing rapidly, often within less than a generation. The ideational imprinting of developing adolescents becomes a problem when the traditional role of family and school is displaced by new social forces such as the internet, Facebook, Twitter and the blogosphere.
How to prevent young people from developing into persons with stunted social brains, with narrow group minds attuned to fleeting reference groups and with fragmented darwinian souls, and how to promote the development of strong personalities, is a challenge for the education of the twenty-first century.

14.
15.
The French government has ambitious goals to make France a leading nation for synthetic biology research, but it still needs to put its money where its mouth is and provide the field with dedicated funding and other support.

Synthetic biology is one of the most rapidly growing fields in the biological sciences and is attracting an increasing amount of public and private funding. France has also seen a slow but steady development of the field: the establishment of a national network of synthetic biologists in 2005, the first participation of a French team in the International Genetically Engineered Machine competition in 2007, the creation of a Master's curriculum and an institute dedicated to synthetic and systems biology at the University of Évry-Val-d'Essonne-CNRS-Genopole in 2009–2010, and an increasing number of conferences and debates. However, scientists have driven the field with little dedicated financial support from the government.

Yet the French government has a strong self-perception of its strengths and has set ambitious goals for synthetic biology. The public are told about a "new generation of products, industries and markets" that will derive from synthetic biology, and that research in the field will result in "a substantial jump for biotechnology" and an "industrial revolution" [1,2]. Indeed, France wants to compete with the USA, the UK, Germany and the rest of Europe, and aims "for a world position of second or third" [1]. However, in contrast with the activities of its competitors, the French government has no specific scheme for funding or otherwise supporting synthetic biology [3].
Although we read that "France disposes of strong competences" and has "all the assets needed" [2], one wonders how France will achieve its ambitious goals without dedicated budgets or detailed roadmaps to set up such institutions.

In fact, France has been a straggler: whereas the UK and the USA have published several reports on synthetic biology since 2007, and have set up dedicated governing networks and research institutions, the governance of synthetic biology in France has only recently become an official matter. The National Research and Innovation Strategy (SNRI) only defined synthetic biology as a "priority" challenge in 2009 and created a working group in 2010 to assess the field's developments, potentialities and challenges; the report was published in 2011 [1].

At the same time, the French Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) began a review of the field "to establish a worldwide state of the art and the position of our country in terms of training, research and technology transfer". Its 2012 report, entitled The Challenges of Synthetic Biology [2], assessed the main ethical, legal, economic and social challenges of the field and made several recommendations for a "controlled" and "transparent" development of synthetic biology. This is not a surprise given that the development of genetically modified organisms and nuclear power in France has been heavily criticized for a lack of transparency, and that the government prefers to avoid similar controversies in the future. Indeed, the French government seems more cautious today, making efforts to assess potential dangers and public opinion before actually supporting the science itself.

Both reports stress the necessity of a "real" and "transparent" dialogue between science and society and call for "serene […] peaceful and constructive" public discussion.
The proposed strategy has three aims: to establish an observatory, to create a permanent forum for discussion and to broaden the debate to include citizens [4]. An Observatory for Synthetic Biology was set up in January 2012 to collect information, mobilize actors, follow debates, analyse the various positions and organize a public forum. Let us hope that this observatory—unlike so many other structures—will have a tangible and durable influence on policy-making, public opinion and scientific practice.

Many structural and organizational challenges persist, as neither the National Agency for Research nor the National Centre for Scientific Research has defined the field as a funding priority, and public–private partnerships are rare in France. Moreover, strict boundaries between academic disciplines impede interdisciplinary work, and synthetic biology is often included in larger research programmes rather than supported as a research field in itself. Although both the SNRI and the OPECST reports make recommendations for future developments—including setting up funding policies and platforms—it is not clear whether these will materialize, or when, where and what size of investments will be made.

France has ambitious goals for synthetic biology, but it remains to be seen whether the government is willing to put ‘meat on the bones' in terms of financial and institutional support. If not, these goals might come to be seen as unrealistic and be downgraded, or they will be replaced with another vision that sees synthetic biology as something that needs only discussion and deliberation but no further investment. One thing is already certain: the future development of synthetic biology in France is a political issue.

16.
The public view of life-extension technologies is more nuanced than expected, and researchers must engage in discussions if they hope to promote awareness and acceptance.

There is increasing research and commercial interest in the development of novel interventions that might be able to extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for their portfolio of drugs targeting ‘diseases of ageing'. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be "the cure for all that ails" (Hasty, 2009), or that it is an "anti-aging drug [that] could be used today" (Blagosklonny, 2007).

Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether they would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions have given some rather surprising results, contrary to the expectations of many researchers in the field.
They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.

Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge et al, 2009a, b, 2010; Underwood et al, 2009).

In a community survey of public attitudes towards HLE, we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill and around one in ten were undecided.

Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an "irrational public predisposition" to think that increased lifespans will only lead to an elongation of infirmity.
He has called this "gerontologiphobia"—a shared feeling among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a "public menace" (Miller, 2002).

We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual's healthy lifespan. Among the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illnesses and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).

It would be unwise to label such concerns as irrational when it might be that advocates of HLE have failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public see the claims that have been made about HLE as ‘too good to be true'.

The results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as "far-fetched", or feel no responsibility "to respond" (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public are sceptical of their claims.

Scientists are not always clear about the outcomes of their work, biogerontologists included.
Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is "sparse" and urges more research on the topic. In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.

The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused when they discovered that members of the Caloric Restriction Society experienced a loss of libido and a loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many people, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010). If researchers do not discuss these possible effects, then a curious public might draw their own conclusions.

Some HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects.
For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is "a non-toxic, well tolerated drug that is suitable for everyday oral administration", with its major "side-effects" being anti-tumour, bone-protecting and caloric-restriction-mimicking effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.

Aubrey de Grey has called on several occasions for scientists to provide more optimistic timescales for HLE. He claims that public opposition to interventions in ageing is based on "extraordinarily transparently flawed opinions" that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and concerns about extending infirmity, injustice or social harms are simply excuses to justify people's belief that ageing is ‘not so bad' (de Grey, 2007). He argues that this "pro-ageing trance" can only be broken by persuading the public that HLE technologies are just around the corner.

Contrary to de Grey's expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010).

Juengst et al (2003) have rightly pointed out that any interventions that slow ageing and substantially increase human longevity might generate more social, economic, political, legal, ethical and public health issues than any other technological advance in biomedicine.
Our survey supports this idea; the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007).

When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges that arise from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause "profound changes" and a "firestorm of controversy". Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause "mayhem" and "absolute pandemonium". If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as "irrational", "inane" or "breathtakingly stupid" (de Grey, 2004).

The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and the management of its outcomes (Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work. If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to explain their research to the public better and discuss how these concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies.
The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that "listening to public concerns on research and responding appropriately" is a more effective way of fostering support than arrogant dismissal of those concerns (Anon, 2009).

Brad Partridge, Jayne Lucke, Wayne Hall

17.
The differentiation of pluripotent stem cells into various progeny is perplexing. In vivo, nature imposes strict fate constraints; in vitro, PSCs differentiate into almost any phenotype. Might the concept of ‘cellular promiscuity' explain these surprising behaviours?

John Gurdon's [1] and Shinya Yamanaka's [2] Nobel Prize involves discoveries that vex fundamental concepts about the stability of cellular identity [3,4], about ageing as a rectified path and about the differences between germ cells and somatic cells. The differentiation of pluripotent stem cells (PSCs) into progeny, including spermatids [5] and oocytes [6], is perplexing. In vivo, nature imposes strict fate constraints. Yet in vitro, reprogrammed PSCs, liberated from the governance of the body, freely differentiate into any phenotype—except placenta—violating even the segregation of somatic cells from germ cells. Anthropomorphic though it is, might the concept of ‘cellular promiscuity' explain these surprising behaviours?

Fidelity to one's differentiated state is nearly universal in vivo—even cancers retain some allegiance. Appreciating the mechanisms that, in vitro, liberate reprogrammed cells from the numerous constraints governing development in vivo might provide new insights. Like highway guiderails, a range of constraints preclude progeny cells within embryos and organisms from travelling too far from the trajectory set by their ancestors. Restrictions are imposed externally, by basement membranes and intercellular adhesions; internally, by chromatin, the cytoskeleton, endomembranes and mitochondria; and temporally, by ageing.

‘Cellular promiscuity' was glimpsed previously during cloning: it was seen when somatic cells successfully ‘fertilized' enucleated oocytes in amphibians [1] and later with ‘Dolly' [7]. Embryonic stem cells (ESCs) corroborate this.
The inner cell mass of the blastocyst develops faithfully, but liberation from the trophectoderm generates pluripotent ESCs in vitro, which are freed from fate and polarity restrictions. These freedom-seeking ESCs still abide by three-dimensional rules, as they conform to chimaera body patterning when injected into blastocysts. Yet transplantation elsewhere results in chaotic teratomas, or in helter-skelter differentiation in vitro—that is, pluripotency.

August Weismann's germ plasm theory recognized, 130 years ago, that gametes produce somatic cells, never the reverse. Primordial germ cell migrations into fetal gonads, and parent-of-origin imprints, explain how germ cells are sequestered, retaining genomic and epigenomic purity. Left uncontaminated, these future gametes are held in pristine form to parent the next generation. However, the cracks separating germ and somatic lineages in vitro are widening [5,6]. Perhaps germ cells are restrained within gonads not for their purity, but to prevent the wild, uncontrolled misbehaviours that result in germ cell tumours.

The concept of ‘cellular promiscuity' in PSCs in vitro might explain why cells of nearly any desired lineage can be detected using monospecific markers. Are assays so sensitive that rare cells can be detected in heterogeneous cultures? Certainly, population heterogeneity must be considered for transplantable cells—dopaminergic neurons and islet cells—compared with applications needing few cells, such as sperm and oocytes. This dilemma of maintaining cellular identity in vitro after reprogramming is significant. If it is not addressed, the value of unrestrained induced PSCs (iPSCs) as reliable models for ‘diseases in a dish', let alone for subsequent therapeutic transplantations, might be diminished. X-chromosome re-inactivation variants in differentiating human PSCs, epigenetic imprint errors and copy number variations are all indicators of in vitro infidelity.
PSCs, which are held to be undifferentiated cells, are artefacts after all, as in vivo they would undergo their programmed development.

If correct, the hypothesis accounts for concerns raised about the inherent genomic and epigenomic unreliability of iPSCs: they are likely to be unfaithful to their in vivo differentiation trajectories, owing both to their freedom from in vivo developmental programmes and to poorly characterized modifications in culture conditions. ‘Memory' of the PSC's identity in vivo might need to be reinforced by using reprogramming approaches that do not fully erase imprints. Regulatory authorities, including the Food & Drug Administration, require evidence that cultured PSCs retain their original cellular identity. Notwithstanding fidelity lapses at the organismal level, the recognition that our cells have intrinsic freedom-loving tendencies in vitro might generate better approaches for only partly releasing somatic cells into probation, rather than granting full emancipation.

18.
Antony M Dean, EMBO reports (2010) 11(6): 409
Antony Dean explores the past, present and future of evolutionary theory and our continuing efforts to explain biological patterns in terms of molecular processes and mechanisms.

There are just two questions to be asked in evolution: how are things related, and what makes them differ? Lamarck was the first biologist—he coined the word ‘biology'—to address both. In his Philosophie Zoologique (1809) he suggested that the relationships among species are better described by branching trees than by a simple ladder, that new species arise gradually by descent with modification, and that they adapt to changing environments through the inheritance of acquired characteristics. Much that Lamarck imagined has since been superseded. Following Wallace and Darwin, we now envision that species belong to a single highly branched tree and that natural selection is the mechanism of adaptation. Nonetheless, to Lamarck we owe the insight that pattern is produced by process, and that both need mechanistic explanation.

Questions of pattern, process and mechanism pervade the modern discipline of molecular evolution. The field was established when Zuckerkandl & Pauling (1965) noted that haemoglobins evolve at a roughly constant rate. Their "molecular evolutionary clock" forever changed our view of evolutionary history. Not only were seemingly intractable relationships resolved—for example, whales are allies of the hippopotamus—but the eubacterial origins of eukaryotic organelles were firmly established and a new domain of life was discovered: the Archaea.

Yet different genes sometimes produce different trees. Golding & Gupta (1995) resolved two dozen conflicting protein trees by suggesting that the Eukarya arose following massive horizontal gene transfer between Bacteria and Archaea. Whole-genome sequencing has since revealed so many conflicts that horizontal gene transfer seems characteristic of prokaryote evolution.
In higher animals—where horizontal transfer is sufficiently rare that the tree metaphor remains robust—rapid and inexpensive whole-genome sequencing promises to provide a wealth of data for population studies. The patterns of migration, admixture and divergence of species will soon be addressed in unprecedented detail.

Sequence analyses are also used to infer processes. A constant molecular clock originally buttressed the neutral theory of molecular evolution (Kimura, 1985). The clock has since proven erratic, while the neutral theory now serves as a null hypothesis for statistical tests of ‘selection'. In truth, most tests are also sensitive to demographic changes. The promise of ultra-high-throughput sequencing to provide genome-wide data should help dissect selection, which targets particular genes, from demography, which affects all the genes in a genome, although weak selection and ancient adaptations will remain undetected.

In the functional synthesis (Dean & Thornton, 2007), molecular biology provides the experimental means to test evolutionary inferences decisively. For example, site-directed mutagenesis can be used to introduce putatively selected mutations into reconstructed ancestral sequences; the gene products are then expressed and purified, and their functional properties determined in vitro. In microbial species, homologous recombination is used routinely to replace wild-type genes with engineered ones, enabling organismal phenotypes and fitnesses to be determined in vivo. The vision of Zuckerkandl & Pauling (1965) that by "furnishing probable structures of ancestral proteins, chemical paleogenetics will in the future lead to deductions concerning molecular functions as they were presumably carried out in the distant evolutionary past" is now a reality.

If experimental tests of evolutionary inferences open windows on past mechanisms, directed evolution focuses on the mechanisms themselves without attempting historical reconstruction.
Today’s ‘fast-forward’ molecular breeding experiments use mutagenic PCR to generate vast libraries of variation and high-throughput screens to identify rare novel mutants (Romero & Arnold, 2009; Khersonsky & Tawfik, 2010). Among the numerous topics explored are the role of intragenic recombination in furthering adaptation, the number and location of mutations in protein structures, the necessity—or lack thereof—of broadening substrate specificity before a new function is acquired, the evolution of robustness, and the alleged trade-off between stability and catalytic efficiency. Few, however, have approached the detail found in those classic studies of evolved β-galactosidase (Hall, 2003) that revealed how the free-energy profile of an enzyme-catalysed reaction evolved. Even further removed from natural systems are catalytic RNAs that, by combining phenotype and genotype within the same molecule, allow evolution to proceed in a lifeless series of chemical reactions. Recently, two RNA enzymes that catalyse each other’s synthesis were shown to undergo self-sustained exponential amplification (Lincoln & Joyce, 2009). Competition for limiting tetranucleotide resources favours mutants with higher relative fitness—faster replication—demonstrating that adaptive evolution can occur in a chemically defined abiotic genetic system.

Lamarck was the first to attempt a coherent explanation of biological patterns in terms of processes and mechanisms. That his legacy can still be discerned in the vibrant field of molecular evolution, a field promising extraordinary advances in our understanding of the mechanistic basis of molecular adaptation, would no doubt please him as much as it does us.  相似文献   
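The competitive dynamics invoked in the Lincoln & Joyce experiment, in which faster replicators displace slower ones, follow the textbook haploid selection recursion. A hedged sketch; the fitness values below are made up for illustration and are not measurements from the study:

```python
def select(p, w_mut, w_wt, generations):
    """Deterministic change in the frequency p of a mutant replicator
    with relative fitness w_mut competing against a wild type with
    fitness w_wt, iterated over discrete generations:
        p' = p * w_mut / (p * w_mut + (1 - p) * w_wt)
    """
    for _ in range(generations):
        p = p * w_mut / (p * w_mut + (1 - p) * w_wt)
    return p

# A mutant starting at 1% frequency with a 20% replication advantage
# (hypothetical numbers chosen for illustration)
print(round(select(0.01, 1.2, 1.0, 1), 4))   # one generation: ~0.012
print(round(select(0.01, 1.2, 1.0, 30), 2))  # thirty generations: ~0.71
```

Because relative fitness is constant here, the odds p/(1-p) simply multiply by w_mut/w_wt each generation, which is why a small advantage compounds into a majority within tens of generations.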

19.
Direct-to-consumer genetic tests and population genome research challenge traditional notions of privacy and consent.

The concerns about genetic privacy in the 1990s were largely triggered by the Human Genome Project (HGP) and the establishment of population biobanks in the following decade. Citizens and lawmakers were worried that genetic information on people, or even subpopulations, could be used to discriminate against or stigmatize them. The ensuing debates led to legislation both in Europe and the USA to protect the privacy of genetic information and prohibit genetic discrimination.

Times have changed. The cost of DNA sequencing has decreased markedly, which means it will soon be possible to sequence individual human genomes for a few thousand dollars. Notions of genetic determinism have also been eroded as population genomics research has discovered a plethora of risk factors that offer only probabilistic value for predicting disease. Nevertheless, several increasingly popular internet genetic testing services do offer predictions to consumers of their health risks on the basis of genetic factors, medical history and lifestyle. Also not to be underestimated is the growing popularity of social networks on the internet, which exposes the decline in traditional notions of the privacy of personal information. It was only a matter of time until all these developments began to challenge the notion of genetic privacy.

For instance, the internet-based Personal Genome Project asks volunteers to make their personal, medical and genetic information publicly available so as “to advance our understanding of genetic and environmental contributions to human traits and to improve our ability to diagnose, treat, and prevent illness” (www.personalgenomes.org). 
The Project, which was founded by George Church at Harvard University, has enrolled its first 10 volunteers and plans to expand to 100,000. Its proponents have proclaimed the limitations, if not the death, of privacy (Lunshof et al, 2008) and maintain that, under the principle of veracity, their own personal genomes will be made public. Moreover, they have argued that in a socially networked world there can be no total guarantee of confidentiality. Indeed, total protection of privacy is increasingly unrealistic in an era in which direct-to-consumer (DTC) genetic testing is offered on the internet (Lee & Crawley, 2009) and forensic technologies can potentially ‘identify’ individuals in aggregated data sets, even if their identity has been anonymized (Homer et al, 2008).

Since the start of the HGP in the 1990s, personal privacy and the confidentiality of genetic information have been important ethical and legal issues. Their ‘regulatory’ expression in policies and legislation has been influenced by both genetic determinism and exceptionalism. Paradoxically, there has been a concomitant emergence of collaborative and international consortia conducting genomics research on populations. These consortia openly share data, on the premise that it is for public benefit. These developments require a re-examination of an ‘ethics of scientific research’ that is founded solely on the protection and rights of the individual.

Although personalized medicine empowers consumers and democratizes the sharing of ‘information’ beyond the data sharing that characterizes population genomics research (Kaye et al, 2009), it also creates new social groups based on beliefs of common genetic susceptibility and risk (Lee & Crawley, 2009). 
The increasing allure of DTC genetic tests and the growth of online communities based on these services also challenge research in population genomics to provide the necessary scientific knowledge (Yang et al, 2009). The scientific data from population studies might therefore lend some useful validation to the results from DTC tests, as opposed to the probabilistic, potentially ‘harmful’ information that is now provided to consumers (Ransohoff & Khoury, 2010; Action Group on Erosion, Technology and Concentration, 2008). Population data clearly erode the linear, deterministic model of Mendelian inheritance, in addition to providing information on inherited risk factors. The socio-demographic data provided put personal genetic risk factors in a ‘real environmental’ context (Knoppers, 2009).

Thus, beginning with a brief overview of the principles of data sharing and privacy under both population and consumer testing, we will see that the notion of identifiability is closely linked to the definition of what constitutes ‘personal’ information. It is against this background that we need to examine the issue of consumer consent to online offers of genetic tests that promise whole-genome sequencing and analysis. Moreover, we also demonstrate the need to restructure ethical reviews of genetic research that are not part of classical clinical trials and are non-interventionist, such as population studies.

The HGP heralded a new open-access approach under the Bermuda Principles of 1996: “It was agreed that all human genomic sequence information, generated by centres funded for large-scale human sequencing, should be freely available and in the public domain in order to encourage research and development and to maximise its benefit to society” (HUGO, 1996). 
The premise, reaffirmed in 2003 under the Fort Lauderdale Rules, was that “the scientific community will best be served if the results of community resource projects are made immediately available for free and unrestricted use by the scientific community to engage in the full range of opportunities for creative science” (HUGO, 2003). The international Human Genome Organization (HUGO) played an important role in achieving this consensus. Its Ethics Committee considered genomic databases to be “global public goods” (HUGO Ethics Committee, 2003). The value of this information—based on the donation of biological samples and health information—for realizing the benefits of personal genomics is maximized through collaborative, high-quality research. Indeed, it could be argued that “there is an ethical imperative to promote access and exchange of information, provided confidentiality is protected” (European Society of Human Genetics, 2003). This promotion of data sharing culminated in a recent policy on releasing research data, including pre-publication data (Toronto International Data Release Workshop, 2009).

In its 2009 Guidelines for Human Biobanks and Genetic Research Databases, the Organization for Economic Cooperation and Development (OECD) states that the “operators of the HBGRD [Human Biobanks and Genetic Research Databases] should strive to make data and materials widely available to researchers so as to advance knowledge and understanding.” More specifically, the Guidelines propose mechanisms to ensure the validity of access procedures and applications for access. In fact, they insist that access to human biological materials and data should be based on “objective and clearly articulated criteria [...] consistent with the participants’ informed consent”. 
Access policies should be fair and transparent, and should not inhibit research (OECD, 2009).

In parallel with this open, public science came the rise of privacy protection, particularly where genetic information is concerned. The United Nations Educational, Scientific and Cultural Organization’s (UNESCO) 2003 International Declaration on Human Genetic Data (UNESCO, 2003) epitomizes this approach. Setting genetic information apart from other sensitive medical or personal information, it mandated an “express” consent for each research use of human genetic data or samples in the absence of domestic law or when such use “corresponds to an important public interest reason”. Currently, however, large population genomics infrastructures use a broad consent, as befits both their longitudinal nature and their goal of serving future, unspecified scientific research. The risk is that ethics review committees that require such continuous “express” consents will thereby foreclose efficient access to data in such population resources for disease-specific research. It is difficult for researchers to provide proof of such “important public interest[s]” in order to avoid re-consents.

Personal information itself refers to identifying and identifiable information. Logically, a researcher who receives a coded data set but does not have access to the linking keys would not have access to ‘identifiable’ information, and so the rules governing access to personal data would not apply (Interagency Advisory Panel on Research Ethics, 2009; OHRP, 2008). 
In fact, in the USA, such research is considered to be on ‘non-humans’ and, in the absence of institutional rules to the contrary, it would theoretically not require research ethics approval (www.vanderbilthealth.com/main/25443).

Nevertheless, if the samples or data of an individual are accessible in more than one repository or on DTC internet sites, a remote possibility remains that any given individual could be re-identified (Homer et al, 2008). To prevent the fear of re-identifiability from restricting open access to public databases, a more reasonable approach is necessary: “[t]his means that a mere hypothetical possibility to single out the individual is not enough to consider the persons as ‘identifiable’” (Data Protection Working Party, 2007). This is a proportionate and important approach, because fundamental genomic ‘maps’ such as the International HapMap Project (www.hapmap.org) and the 1000 Genomes Project (www.1000genomes.org) have stated as their goal “to make data as widely available as possible to further scientific progress” (Kaye et al, 2009). What, then, of the nature of the consent and privacy protections in DTC genetic testing?

The Personal Genome Project makes the genetic and medical data of its volunteers publicly available. Indeed, there is a marked absence of the traditional confidentiality and other protections of the physician–patient relationship across such sites; overall, the degree of privacy protection by commercial DTC and other sequencing enterprises varies. The company 23andMe allows consumers to choose whether they wish to disclose personal information, but warns that disclosure of personal information is also possible “through other means not associated with 23andMe, […] to friends and/or family members […] and other individuals”. 
23andMe also announces that it might enter into commercial or other partnerships for access to its databases (www.23andme.com). deCODEme offers tiered levels of visibility, but does not grant access to third parties in the absence of explicit consumer authorization (www.decodeme.com). GeneEssence will share coded DNA samples with other parties and can transfer or sell personal information or samples, with an opt-out option, according to its Privacy Policy, though the terms of that policy can be changed at any time (www.geneessence.com). Navigenics is transparent: “If you elect to contribute your genetic information to science through the Navigenics service, you allow us to share Your Genetic Data and Your Phenotype Information with not-for-profit organizations who perform genetic or medical research” (www.navigenics.com). Finally, SeqWright separates the personal information of its clients from their genetic information so as to prevent access to the latter in the case of a security breach (www.seqwright.com).

Much has been said about the lack of clinical utility and validity of DTC genetic testing services (Howard & Borry, 2009), to say nothing of the absence of genetic counsellors or physicians to interpret the resulting probabilistic information (Knoppers & Avard, 2009; Wright & Kroese, 2010). But what are the implications for consent and privacy, considering the seemingly divergent needs of ensuring data sharing in population projects and ‘protecting’ consumer-citizens in the marketplace?

At first glance, the same accusations of paternalism levelled at ethics review committees that hesitate to respect the broad consent of participants in population databases could be applied to restraining those very same citizens from genetic ‘info-voyeurism’ on the internet. But it should be remembered that citizen empowerment, which enables participation both in population projects and in DTC testing, is expressed within very different contexts. 
Population biobanks, by the very fact of their broad consent and long-term nature, have complex security systems and are subject to governance and ongoing ethical monitoring and review. In addition, independent committees evaluate requests for access (Knoppers & Abdul-Rahman, 2010). The same cannot be said for the governance of the DTC companies just presented.

There is room for improvement in both the personal genome and the population genome endeavours. The former require regulatory approaches to ensure the quality, safety, security and utility of their services. The latter require further clarification of their ongoing funding and operations, and more transparency to the public as researchers begin to access these resources for disease-specific studies (Institute of Medicine, 2009). Public genomic databases should be interoperable and should grant access to authenticated researchers internationally in order to be of utility and statistical significance (Burton et al, 2009). Moreover, enabling international access to such databases for disease-specific research means that the interests of publicly funded research and privacy protection must be weighed against each other, rather than imposing a requirement that research demonstrate that the public interest substantially outweighs privacy protection (Weisbrot, 2009). Collaboration through interoperability has been one of the goals of the Public Population Project in Genomics (P3G; www.p3g.org) and, more recently, of the Biobanking and Biomolecular Resources Research Infrastructure (www.bbmri.eu).

Even if the tools for harmonization and standardization are built and used, will trans-border data flow still be stymied by privacy concerns? 
The mutual recognition between countries of equivalent privacy approaches (that is, safe harbour), the limiting of access to approved researchers and the development of international best practices in privacy, security and transparency through a Code of Conduct, along with a system for penalizing those who fail to respect such norms, would go some way towards maintaining public trust in genomic and genetic research (P3G Consortium et al, 2009). Finally, consumer protection agencies should monitor DTC sites under a regulatory regime, to ensure that these companies adhere to their own privacy policies.

More importantly, in both contexts the ethics norms that govern clinical research are not suited to the wide range of data privacy and consent issues in today’s social networks and bioinformatics systems. One could go further and ask whether the current biomedical ethics review system is inadequate—if not inappropriate—in these ‘data-driven research’ contexts. Perhaps it is time to create ethics review and oversight systems that are particularly adapted to those citizens who seek either to participate through online services or to contribute to population research resources. Both are contexts of minimal risk, and they require structural governance reforms rather than the application of traditional ethics consent and privacy review processes that are better suited to clinical research involving drugs or devices. In this information age, genetic information is probabilistic, and participating in population or online studies might not create the fatalistic and harmful discriminatory scenarios originally perceived or imagined. The time is ripe for a change in governance and regulatory approaches, a reform that is consistent with what citizens seem to have already understood and acted on. 
Bartha Maria Knoppers  相似文献   

20.
Of mice and men     
Thomas Erren and colleagues point out that studies on light and circadian rhythmicity in humans have their own interesting pitfalls, of which all researchers should be mindful.

We would like to compliment, and complement, the recent Opinion in EMBO reports by Stuart Peirson and Russell Foster (2011), which calls attention to the potential obstacles associated with linking observations on light and circadian rhythmicity made in nocturnal mice to diurnally active humans. Pitfalls to consider include qualitative extrapolations from short-lived rodents to long-lived humans, quantitative extrapolations across very different doses (Gold et al, 1992) and the varying sensitivities of each species to experimental optical radiation as a circadian stimulus (Bullough et al, 2006), all of which can critically influence an experiment. Thus, Peirson & Foster remind us that “humans are not big mice”. We certainly agree, but we also thought it worthwhile to point out that human studies have their own interesting pitfalls, of which all researchers should be mindful.

Many investigations with humans—such as testing the effects of different light exposures on alertness, cognitive performance, well-being and depression—can suffer from what has been termed the ‘Hawthorne effect’. The term derives from a series of studies conducted at the Western Electric Company’s Hawthorne Works near Chicago, Illinois, between 1924 and 1932, to test whether the productivity of workers would change with changing illumination levels. One important punchline was that productivity increased with almost any change made at the workplaces. One prevailing interpretation of these findings is that humans who know that they are being studied—and in most investigations they cannot help but notice—might exhibit responses that have little or nothing to do with what was intended as the experiment. 
Those who conduct circadian biology studies in humans try hard to eliminate possible ‘Hawthorne effects’, but every so often all they can do is hope that the effect is insignificant.

Even so, and despite the obstacles to circadian experiments in both mice and humans, the wealth of information from work in both species is indispensable. In the last handful of years alone, experimental research in mice has substantially contributed to our understanding of the retinal interface between visible light and circadian circuitry (Chen et al, 2011); has shown that disturbances of circadian systems through manipulation of light–dark cycles might accelerate carcinogenesis (Filipski et al, 2009); and has suggested that perinatal light exposure—through an imprinting of the stability of circadian systems (Ciarleglio et al, 2011)—might be related to a human's susceptibility to mood disorders (Erren et al, 2011a) and to internal cancer development later in life (Erren et al, 2011b). Future studies in humans must now examine whether, and to what extent, what was found in mice is applicable to and relevant for humans.

The bottom line is that we must be aware of, and first and foremost exploit, evolutionary legacies, such as the seemingly ubiquitous photoreceptive clockwork that marine and terrestrial vertebrates—including mammals such as mice and humans—share (Erren et al, 2008). Translating insights from studies in animals to humans (Erren et al, 2011a,b), and vice versa, into testable research can be a means to one end: to arrive at sensible answers to pressing questions about light and circadian clockworks that, no doubt, play key roles in human health and disease. Pitfalls, however, abound on either side, and we agree with Peirson & Foster that they have to be recognized and monitored.  相似文献   


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号