To close the gap between research and development, a number of funding organizations focus their efforts on large, translational research projects rather than small research teams and individual scientists. Yet, as Paul van Helden argues, if the support for small, investigator-driven research decreases, there will soon be a dearth of novel discoveries for large research groups to explore.

What is medical science all about? Surely it is about the value chain, which begins with basic research and ends—if there is an end—with a useful product. There is a widespread perception that scientists do a lot of basic research, but neglect the application of their findings. To remedy this, a number of organizations and philanthropists have become dedicated advocates of applied or translational research and preferentially fund large consortia rather than small teams or individual scientists. Yet, this is only the latest round in the never-ending debate about how to optimize research. The question remains whether large teams, small groups or individuals are better at making ‘discoveries'.

To some extent, a scientific breakthrough depends on the nature of the research. Einstein worked largely alone, and the development of E = mc² is a case in point. He put together insights from many researchers to produce his breakthrough, which has subsequently required teams of scientists to apply. Similarly, drug development may require only an individual or a small team to make the initial discovery. However, it needs many individuals to develop a candidate compound and large teams to conduct clinical trials. On the other hand, Darwin could be seen to have worked the other way around: he had an initial ‘team' of ‘field assistants'—including the crew of HMS Beagle—but he produced his seminal work essentially alone.

Consortium funding is of course attractive for researchers because of the time-scale and the amount of money involved.
Clinical trials or large research units may get financial support for 10 years or even longer, in the range of millions of dollars. However, organizations that provide funding on such a large scale require extensive and detailed planning from researchers. The work is subject to frequent reporting and review and often carries a large administrative burden. It has come to the point where this oversight threatens academic freedom. Principal investigators who try to conduct experiments outside the original plan, even when these make sense, lose their funding. Under such conditions, administrative officials are often not there to serve, but to govern.

There is a widespread perception that small teams are more productive in terms of published papers. But large-scale science often generates outcomes and product value that a small team cannot. We therefore need both. The problem is the low level of funding for individual scientists and small teams and the resulting cut-throat competition for limited resources. This draws too many researchers to large consortia, which, if successful, can become comfort zones or, if they crash and burn, can cause serious damage.

Other factors should also inform our deliberations about the size of research teams and consortia. Which is the better environment in which to train the next generation of scientists? By definition, research should question scientific dogmas and foster innovative thinking. Will a large consortium be able to achieve or even tolerate this?

Perhaps these trends can be ascribed to generational differences. Neil Howe described people born between 1943 and 1980 as obsessed with values, individually strong and individualistic, whereas those born after 1981 place more trust in strong institutions that are seen to be moving society somewhere. If this is true, we can predict that the consortium approach is here to stay, at least for some time.
Perhaps the emergence of large-scale science is driven by strong—maybe dictatorial—older individuals and arranged to accommodate the younger generation. If so, it is a win–win situation: we know the value of networking and interacting with others, which comes naturally in the ‘online age'.

A downside of large groups is the loss of individual career development. The number of authors per paper has increased constantly. Who does the work and who gets the honour? There is often little recognition for the contribution of most people to publications that arise from large consortia, and it is difficult for peer reviewers to assess individual contributions. We must take care that we measure what we value and do not value what we measure.

While it is clear that both large and small groups are essential, good management and balance are required. An alarming trend, in my opinion, is the inclination to fund new sites for clinical trials to the detriment of existing facilities. This does not seem reasonable, nor the best use of scarce resources.

In the long-term interest of science, we need to consider how the ratio of major breakthroughs to incremental science correlates with the size of the research group. This is hard to measure, but we must not forget that basic research produces the first leads that are then developed further into products. If the funding for basic science decreases, there will soon be a dearth of topics for ‘big science'.

Is there a way out of this dilemma? I would like to suggest that organizations currently funding large consortia allow investigators to set aside a percentage of the money to support basic, curiosity-driven research within these consortia. If they do not rethink their funding strategy, these organizations may find with time that there are few novel discoveries for large groups to explore.
Zhang JY. EMBO reports (2011) 12(4): 302–306
How can grass-roots movements evolve into a national research strategy? The bottom-up emergence of synthetic biology in China could give some pointers.

Given its potential to aid developments in renewable energy, biosensors, sustainable chemical industries, microbial drug factories and biomedical devices, synthetic biology has enormous implications for economic development. Many countries are therefore implementing strategies to promote progress in this field. Most notably, the USA is considered to be the leader in exploring the industrial potential of synthetic biology (Rodemeyer, 2009). Synthetic biology in Europe has benefited from several cross-border studies, such as the ‘New and Emerging Science and Technology' programme (NEST, 2005) and the ‘Towards a European Strategy for Synthetic Biology' project (TESSY; Gaisser et al, 2008). Yet, little is known in the West about Asia's role in this ‘new industrial revolution' (Kitney, 2009). In particular, China is investing heavily in scientific research for future developments, and is therefore likely to have an important role in the development of synthetic biology.

In 2010, as part of a study of the international governance of synthetic biology, the author visited four leading research teams in three Chinese cities (Beijing, Tianjin and Hefei). The main aims of the visits were to understand perspectives in China on synthetic biology, to identify core themes among its scientific community, and to address questions such as ‘how did synthetic biology emerge in China?', ‘what are the current funding conditions?', ‘how is synthetic biology generally perceived?' and ‘how is it regulated?'.
Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework; one that is more dynamic and comprises more options than existing national or international research and development (R&D) strategies. Such findings might contribute to Western knowledge of Chinese R&D, but could also expose European and US policy-makers to alternative forms and patterns of research governance that have emerged from a grass-roots level.

A dominant narrative among the scientists interviewed is the prospect of a ‘big-question' strategy to promote synthetic-biology research in China. This framework is at a consultation stage and key questions are still being discussed. Yet, fieldwork indicates that the process of developing a framework is at least as important to research governance as the big question it might eventually address. According to several interviewees, this approach aims to organize dispersed national R&D resources into one grand project that is essential to the technical development of the field, preferably focusing on an industry-related theme that is economically appealing to the Chinese public.

Chinese scientists have a pragmatic vision for research; thinking of science in terms of its ‘instrumentality' has long been regarded as characteristic of modern China (Schneider, 2003). However, for a country in which the scientific community is sometimes described as an "uncoordinated ‘bunch of loose ends'" (Cyranoski, 2001) "with limited synergies between them" (OECD, 2007), the envisaged big-question approach implies profound structural and organizational changes. Structurally, the approach proposes that the foundational (industry-related) research questions branch out into various streams of supporting research and more specific short-term research topics.
Within such a framework, a variety of Chinese universities and research institutions can be recruited and coordinated at different levels towards solving the big question.

It is important to note that although this big-question strategy is at a consultation stage and supervised by the Ministry of Science and Technology (MOST), the idea itself has emerged in a bottom-up manner. One academic who is involved in the ongoing ministerial consultation recounted that, "It [the big-question approach] was initially conversations among we scientists over the past couple of years. We saw this as an alternative way to keep up with international development and possibly lead to some scientific breakthrough. But we are happy to see that the Ministry is excited and wants to support such an idea as well." As many technicalities remain to be addressed, there is no clear time-frame yet for when the project will be launched. Yet, this nationwide cooperation among scientists, with an emerging commitment from MOST, seems to be largely welcomed by researchers. Some interviewees described the excitement it generated among the Chinese scientific community as comparable with the establishment of "a new ‘moon-landing' project".

Of greater significance than the time-frame is the development process that led to this proposition. On the one hand, the emergence of synthetic biology in China has a cosmopolitan feel: cross-border initiatives such as international student competitions, transnational funding opportunities and social debates in Western countries—for instance, about biosafety—all have an important role. On the other hand, the development of synthetic biology in China has some national particularities. Factors including geographical proximity, language, collegial familiarity and shared interests in economic development have all attracted Chinese scientists to the national strategy, to keep up with their international peers.
Thus, to some extent, the development of synthetic biology in China is an advance not only in the material synthesis of the ‘cosmos'—the physical world—but also in the social synthesis of aligning national R&D resources and actors with the global scientific community.

To comprehend how Chinese scientists have used national particularities and global research trends as mutually constructive influences, and to identify the implications of this for governance, this essay examines the emergence of synthetic biology in China from three perspectives: its initial activities, the evolution of funding opportunities, and the ongoing debates about research governance.

China's involvement in synthetic biology was largely promoted by the participation of students in the International Genetically Engineered Machine (iGEM) competition, an international contest for undergraduates initiated by the Massachusetts Institute of Technology (MIT) in the USA. Before the iGEM training workshop that was hosted by Tianjin University in the spring of 2007, there were no research records and only two literature reviews on synthetic biology in Chinese scientific databases (Zhao & Wang, 2007). According to Chunting Zhang of Tianjin University—a leading figure in the promotion of synthetic biology in China—it was during these workshops that Chinese research institutions joined their efforts for the first time (Zhang, 2008). From the outset, the organization of the workshop had a national focus, while it engaged with international networks. Synthetic biologists, including Drew Endy from MIT and Christina Smolke from Stanford University, USA, were invited.
Later that year, another training camp designed for iGEM tutors was organized in Tianjin and included delegates from Australia and Japan (Zhang, 2008).

Through years of organizing iGEM-related conferences and workshops, Chinese universities have strengthened their presence at this international competition: in 2007, four teams from China participated; during the 2010 competition, 11 teams from nine universities in six provinces/municipalities took part. Meanwhile, recruiting, training and supervising iGEM teams has become an important institutional programme at an increasing number of universities.

It might be easy to interpret the enthusiasm for iGEM as a passion for winning gold medals, as is conventionally the case with other international scientific competitions. This could be one motive for participating. Yet, training for iGEM has grown beyond winning the student awards and has become a key component of exchanges between Chinese researchers and the international community (Ding, 2010). Many of the Chinese scientists interviewed recounted the way in which their initial involvement in synthetic biology overlapped with their tutoring of iGEM teams. One associate professor at Tianjin University, who wrote the first undergraduate textbook on synthetic biology in China, half-jokingly said, "I mainly learnt [synthetic biology] through tutoring new iGEM teams every year."

Participation in such contests has not only helped to popularize synthetic biology in China, but has also influenced local research culture. One example of this is that the iGEM competition uses standard biological parts (BioBricks), and new BioBricks are submitted to an open registry for future sharing. A corresponding celebration of open source can also be traced within the Chinese synthetic-biology community.
In contrast to the conventional perception that the Chinese scientific sector consists of a "very large number of ‘innovative islands'" (OECD, 2007; Zhang, 2010), communication between domestic teams is quite active. In addition to the formally organized national training camps and conferences, students themselves organize a nationwide, student-only workshop at which to informally test their ideas.

More interestingly, when the author asked one team whether there were any plans to set up a ‘national bank' for hosting designs from Chinese iGEM teams, in order to benefit domestic teams, both the tutor and team members thought this proposal a bit "strange". The team leader responded, "But why? There is no need. With BioBricks, we can get any parts we want quite easily. Plus, it directly connects us with all the data produced by iGEM teams around the world, let alone in China. A national bank would just be a small-scale duplicate."

From the beginning, interest in the development of synthetic biology in China has been focused on collective efforts within and across national borders. In contrast to conventional critiques of the Chinese scientific community's "inclination toward competition and secrecy, rather than openness" (Solo & Pressberg, 2007; OECD, 2007; Zhang, 2010), there seems to be a new outlook emerging from the participation of Chinese universities in the iGEM contest. Of course, that is not to say that the BioBricks model is without problems (Rai & Boyle, 2007), or to exclude inputs from other institutional channels. Yet, continuous grass-roots exchanges, such as the undergraduate-level competition, might be as instrumental as formal protocols in shaping research culture.
The indifference of Chinese scientists to a ‘national bank' seems to suggest that the distinction between the ‘national' and ‘international' scientific communities has become blurred, if not insignificant. However, frequent cross-institutional exchanges and the domestic organization of iGEM workshops seem to have nurtured the development of a national synthetic-biology community in China, in which grass-roots scientists are comfortable relying on institutions with a cosmopolitan character—such as the BioBricks Foundation—to facilitate local research. To some extent, one could argue that in the eyes of Chinese scientists, national and international resources are one accessible global pool. This grass-roots interest in incorporating local and global advantages is not limited to student training and education, but is also exhibited in evolving funding and regulatory debates.

In the development of research funding for synthetic biology, a similar bottom-up consolidation of national and global resources can be observed. As noted earlier, synthetic-biology research in China is in its infancy. A popular view is that China has the potential to lead this field, as it has strong support from related disciplines. In terms of genome sequencing, DNA synthesis, genetic engineering, systems biology and bioinformatics, China is "almost at the same level as developed countries" (Pan, 2008), but synthetic-biology research has only been carried out "sporadically" (Pan, 2008; Huang, 2009). There are few nationally funded projects and there is no discernible industrial involvement (Yang, 2010). Most existing synthetic-biology research is led by universities or institutions that are affiliated with the Chinese Academy of Sciences (CAS). As one CAS academic commented, "there are many Chinese scientists who are keen on conducting synthetic-biology research.
But no substantial research has been launched nor has long-term investment been committed."

The initial undertaking of academic research on synthetic biology in China has therefore benefited from transnational initiatives. The first synthetic-biology project in China, launched in October 2006, was part of the ‘Programmable Bacteria Catalyzing Research' (PROBACTYS) project, funded by the Sixth Framework Programme of the European Union (Yang, 2010). A year later, another cross-border collaborative effort led to the establishment of the first synthetic-biology centre in China: the Edinburgh University–Tianjin University Joint Research Centre for Systems Biology and Synthetic Biology (Zhang, 2008).

There is also a comparable commitment to national research coordination. A year after China's first participation in iGEM, the 2008 Xiangshan conference focused on domestic progress. From 2007 to 2009, only five projects in China received national funding, all of which came from the National Natural Science Foundation of China (NSFC). This funding totalled ¥1,330,000 (approximately £133,000; www.nsfc.org), which is low in comparison to the £891,000 that was given in the UK for seven Networks in Synthetic Biology in 2007 alone (www.bbsrc.ac.uk).

One of the primary challenges in obtaining funding identified by the interviewees is that, as an emerging science, synthetic biology is not yet appreciated by Chinese funding agencies. After the Xiangshan conference, the CAS invited scientists to a series of conferences in late 2009. According to the interviewees, one of the main outcomes was the founding of a ‘China Synthetic Biology Coordination Group', an informal association of around 30 conference delegates from various research institutions. This group formulated a ‘regulatory suggestion' that they submitted to MOST, which stated the necessity and implications of supporting synthetic-biology research.
In addition, leading scientists such as Chunting Zhang and Huanming Yang—President of the Beijing Genomics Institute (BGI), who co-chaired the Beijing Institutes of Life Science (BILS) conferences—have been active in communicating with government institutions. The initial results of this can be seen in the MOST 2010 Application Guidelines for the National Basic Research Program, in which synthetic biology was included for the first time among the ‘key supporting areas' (MOST, 2010). Meanwhile, in 2010, the NSFC allocated ¥1,500,000 (approximately £150,000) to synthetic-biology research, which is more than the total funding the area had received in the previous three years.

The search for funding further demonstrates the dynamics between national and transnational resources. Chinese R&D initiatives have to deal with the fact that scientific venture capital and non-governmental research charities are underdeveloped in China. In contrast to the EU or the USA, government institutions in China, such as the NSFC and MOST, are the main and sometimes only domestic sources of funding. Yet, transnational funding opportunities facilitate the development of synthetic biology by alleviating local structural and financial constraints, and further integrate the Chinese scientific community into international research.

This is not a linear ‘going-global' process; it is important for Chinese scientists to secure and promote national and regional support. In addition, this alignment of national funding schemes with global research progress is similar to the iGEM experience, as it is being initiated through informal bottom-up associations between scientists, rather than through top-down institutional channels.

As more institutions have joined iGEM training camps and participated in related conferences, a shared interest among the Chinese scientific community in developing synthetic biology has become visible.
In late 2009, at the conference that founded the informal ‘coordination group', the proposition of integrating national expertise through a big-question approach emerged. According to one professor in Beijing—who was a key participant in the discussion at the time—this proposition of a nationwide synergy was not so much about ‘national pride' or an aim to develop a ‘Chinese' synthetic biology; it was about research practicality. She explained, "synthetic biology is at the convergence of many disciplines: computer modelling, nano-technology, bioengineering, genomic research etc. Individual researchers like me can only operate on part of the production chain. But I myself would like to see where my findings would fit in a bigger picture as well. It just makes sense for a country the size of China to set up some collective and coordinated framework so as to seek scientific breakthrough."

From the first participation in the iGEM contest to the later exploration of funding opportunities and collective research plans, scientists have been keen to invite and incorporate domestic and international resources, to keep up with global research. Yet, there are still regulatory challenges to be met.

The reputation of "the ‘wild East' of biology" (Dennis, 2002) is associated with China's previous inattention to ethical concerns about the life sciences, especially in embryonic-stem-cell research. Similarly, synthetic biology creates few social concerns in China. Public debate is minimal and most media coverage has been positive. Synthetic biology is depicted as "a core in the fourth wave of scientific development" (Pan, 2008) or "another scientific revolution" (Huang, 2009).
Whilst recognizing its possible risks, mainstream media believe that "more people would be attracted to doing good while making a profit than doing evil" (Fang & He, 2010). In addition, biosecurity and biosafety training in China are at an early stage, with few mandatory courses for students (Barr & Zhang, 2010). The four leading synthetic-biology teams I visited regarded the general biosafety regulations that apply to microbiology laboratories as sufficient for synthetic biology. In short, with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual' manner.

Yet, fieldwork suggests that, in contrast to this previous insensitivity to global ethical concerns, the synthetic-biology community in China has taken a more proactive approach to engaging with international debates. It is important to note that there are still no synthetic-biology-specific administrative guidelines or professional codes of conduct in China. However, Chinese stakeholders participate in building a ‘mutual inclusiveness' between global and domestic discussions.

One of the most recent examples of this is a national conference about the ethical and biosafety implications of synthetic biology, which was jointly hosted by the China Association for Science and Technology, the Chinese Society of Biotechnology and the Beijing Institutes of Life Science, CAS, in Suzhou in June 2010. The discussion was open to the mainstream media. The debate was not simply a recapitulation of Western worries, such as playing god, potential dual-use or ecological containment. It also focused on the particular concerns of developing countries about how to avoid further widening the developmental gap with advanced countries (Liu, 2010).

In addition to general discussions, there are also sustained transnational communications.
For example, one of the first three projects funded by the NSFC was a three-year collaboration on biosafety and risk-assessment frameworks between the Institute of Botany at CAS and the Austrian Organization for International Dialogue and Conflict Management (IDC).

Chinese scientists are also keen to increase their involvement in the formulation of international regulations. The CAS and the Chinese Academy of Engineering are engaged with their peer institutions in the UK and the USA to "design more robust frameworks for oversight, intellectual property and international cooperation" (Royal Society, 2009). It is too early to tell what influence China will achieve in this field. Yet, the changing image of the country from an unconcerned ‘wild East' to a partner in lively discussions signals a new dynamic in the global development of synthetic biology.

From self-organized participation in iGEM to bottom-up funding and governance initiatives, two features are repeatedly exhibited in the emergence of synthetic biology in China: global resources and international perspectives complement national interests; and national and cosmopolitan research strengths are mostly instigated at the grass-roots level. During the process of introducing, developing and reflecting on synthetic biology, many formal or informal, provisional or long-term alliances have been established from the bottom up. Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China.

However, the inputs of different social actors have not led to disintegration of the field into an array of individualized pursuits, but have transformed it into collective synergies, or the big-question approach.
Underlying the diverse efforts of Chinese scientists is a sense of ‘inclusiveness', or the idea of bringing together previously detached research expertise. Thus, the big-question strategy cannot be interpreted as just another nationally organized agenda in response to global scientific advancements. Instead, it represents a more intricate development path corresponding to how contemporary research evolves on the ground.

In comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage. Government input—such as the potential stewardship of MOST in directing a big-question approach or long-term funding—remains important, and the scientists who were interviewed expend a great deal of effort to attract governmental participation. Yet, China's experience highlights that the key to comprehending regional scientific capacity lies not so much in what the government can do, but rather in what is taking place in laboratories. It is important to remember that Chinese iGEM victories, collaborative synthetic-biology projects and ethical discussions all took place before the government became involved. Thus, to appreciate fully the dynamics of an emerging science, it might be necessary to focus on what is formulated from the bottom up.

The experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary research. More specifically, it is a result of the commitment of Chinese scientists to incorporating national and international resources, actors and social concerns.
For practical reasons, the national organization of research, such as through the big-question approach, might still have an important role. However, synthetic biology might not only be a mosaic of national agendas, but might also be shaped by transnational activities and scientific resources. What Chinese scientists will collectively achieve remains to be seen. Yet, the emergence of synthetic biology in China might be indicative of a new paradigm for how research practices can be introduced, normalized and regulated.
Some view social constructivism as a threat to the unique objectivity of science in describing the world. But social constructivism merely observes the process of science and can offer ways for science to regain public esteem.

Political groups, civil organizations, the media and private citizens increasingly question the validity of scientific findings about challenging issues such as global climate change, and actively resist the application of new technologies, such as GM crops. By using new communication technologies, these actors can reach out to many people in real time, which gives them a huge advantage over the traditional, specialist and slow communication of scientific research through peer-reviewed publications. They use emotive stories with a narrow focus, facts and accessible language, often making them, at least in the eyes of the public, more credible than scientific experts. The resulting strength of public opinion means that scientific expertise and validated facts are not always the primary basis for decision-making by policy-makers about issues that affect society and the environment.

The scientific community has decried this situation not only as a crisis of public trust in experts but, more so, as a loss of trust in scientific objectivity. The reason for this development, some claim, is a postmodernist perception of science as a social construction [1]. This view claims that context—in other words, society—determines the acceptance of a scientific theory and the reliability of scientific facts. This conflicts with the more traditional view held by most scientists: that experimental evidence, analysis and validation by scientific means are the instruments to determine truth.
‘Social constructivism’, as this postmodernist view of science has been called, challenges the ‘traditional’ view of science: that it is an objective, experiment-based approach to collecting evidence that results in a linear accumulation of knowledge, leading to reliable, scientifically proven facts and trust in the role of experts. However, constructivists maintain that society and science have always influenced one another, thereby challenging the notion that science is objective and only interested in uncovering the truth. Moderate social constructivism merely acknowledges this controversy and attempts to provide answers. The extreme interpretation of this approach holds that all facts and all parties—no matter how absurd or unproven their ‘facts’ and claims—should be treated equally, without any consideration for their interests [2]. The truth might actually lie somewhere in the middle, between taking scientific results as absolute truths at one extreme, and requiring that all facts and all actors be given equal attention and consideration at the other. What is needed, however, is a closer connection and mutual appreciation between science and society, especially when it comes to science policy and making decisions that require scientific expertise. To claim that all perspectives are equally important when there is a lack of absolute facts—leading to an ‘all truths are equal’ approach to decision-making—is surely ridiculous. Nonetheless, societies are highly complex and sufficient facts are often not available when policy-makers and regulatory bodies have to make a decision.
The aim of this essay is to argue that social construction and scientific objectivity can coexist and even benefit from one another. The question is whether social constructivism really caused a crisis of objectivity and a change in the traditional view of science. A main characteristic of the traditional view is that science progresses in isolation from any societal influences. However, there are historical and contemporary examples of how social mores influence the acceptability of certain areas of research, the direction of scientific research and even the formation of a scientific consensus—or, in the words of Thomas Kuhn, of a scientific paradigm. Arrival at a scientific consensus driven by non-scientific factors will probably happen in a new research field when there is insufficient scientific information or knowledge to make precise claims. As such, societal factors can become determinants in settling disputes, at least until more information emerges. Religious and ethical beliefs have had such an impact on science throughout history. One could argue, for example, that the focus on research into induced pluripotent stem cells and the potency of adult stem cells is driven, at least in part, by religious and ethical objections to using human embryonic stem cells. Similarly, the near-universal consensus that scientists should not clone humans is not based on scientific reason, but on social, religious and ethical arguments. Another example of the influence of non-scientific values on the establishment of a scientific consensus comes from the field of artificial intelligence. In the 1960s, a controversy erupted between the proponents of symbolic processing—led by Marvin Minsky—and the proponents of neural nets—led by the charismatic Frank Rosenblatt.
The publication of a book by Minsky and Seymour Papert, which concluded that progress in neural networks faced insurmountable limitations, coincided with the unfortunate death of Rosenblatt and massive funding from the US Department of Defense through the Defense Advanced Research Projects Agency (DARPA) for projects on symbolic processing. DARPA's decision to ignore neural networks—because it could not foresee any immediate military applications—convinced other funding agencies to avoid the field and blocked research on neural nets for a decade. This has become known as the first artificial intelligence winter [3]. The military, in particular, has often had a major influence on setting the direction of scientific research. The atomic bomb, radar and the first computers are just some examples of how military interests drove scientific progress and its application. The traditional perception of science also supposes a gradual and linear accumulation of scientific knowledge. Whilst the gradual part remains undisputed, scientific progress is not linear. Theories are proposed, discussed, rejected, accepted, sometimes forgotten, rediscovered and reborn with modifications as part of an ever-changing network of scientific facts and knowledge. Gregor Mendel discovered the laws of inheritance in 1865, but his findings received scant attention until their rediscovery in the early 1900s by Carl Correns and Erich von Tschermak. Ignaz Semmelweis, a Hungarian obstetrician, developed the theory that puerperal fever, or childbed fever, is mainly transmitted by the poor hygiene of doctors before assisting in births. He observed that when doctors washed their hands with a chlorine solution before obstetric consultations, deaths in obstetrics wards were drastically reduced.
The medical community ridiculed Semmelweis at the time, but the development of Louis Pasteur's germ theory of disease eventually vindicated him [4]. Another challenge to the traditional view of science is the claim that scientific facts are constructed. This does not necessarily imply that they are false: it acknowledges the process of independently conducted experiments, ‘trial and error’ approaches, collaborations and discussions, to establish a final consensus that then becomes scientific fact. Critics of constructivism claim that viewing scientific discovery this way opens the gate to non-scientific influences and arguments, thereby undermining factuality. However, without consensus on the importance of a discovery, no fact is sufficient to change or establish a scientific theory. In fact, classical peer review treats scientific discoveries as constructions, essentially by taking apart the proposed fact, analysing the process of its determination and, based on the evidence, accepting or rejecting it. Ultimately, then, it seems that social constructivism itself is not the sole or most important factor for changing the traditional view of science. Social, religious and ethical values have always influenced human endeavours, and science is no exception. Yet, there is one aspect of traditional science for which constructivism has only the role of an observer: public trust in scientific experts. Societies can resist the introduction of new technologies owing to their potential risks.
Traditionally, the potential victims of such hazards—consumers, affected communities and the environment—had no input into either the risk-assessment process or the decisions that were made on the basis of the assessment. The difficulty is that postmodern societies tend to perceive certain risks as greater than modern or premodern societies did, at least partly because of globalization and better communication [5]. As a result, the evaluation of risk increasingly takes into account political considerations. Each stakeholder inevitably defines risks and their acceptability according to their own agenda, and brings their own cadre of experts and evidence to support their claims. As such, the role of unbiased experts is undermined not only because they are similarly accused of having their own agenda, but also because the line between experts and non-experts is redrawn [5]. In addition, the internet and other communication technologies have unprecedentedly empowered non-expert users to broadcast their opinions. The emergence of so-called ‘pseudo-experts’, enabled by “the cult of the amateur” [6], further challenges the position of scientific experts. Trust is no longer a given for anyone, and even when people trust science, that trust is not lasting and has to be earned anew with each new piece of information. This erosion of trust cannot be blamed entirely on the “cult of the amateur”. The German sociologist Ulrich Beck argued that when scientists make recommendations to society on how to deal with risks, they inevitably make assumptions that are embedded in cultural values, moving into a social and cultural sphere without assessing the public view of those values.
Scientists thus presuppose a certain set of social and cultural values and judge everything that comes against that set as irrational [5]. Regardless of how trust in expertise was eroded, and how pseudo-experts have filled the gap, the main issue is how to assess the implications of scientific results and new technologies, and how to manage any risks that they entail. To gain and maintain trust, decision-making must consider stakeholder involvement and public opinion. However, when public participation attempts to accommodate an increasing number of stakeholders, it raises the difficult issue of who should be involved, either as part of the administrative process or as producers of knowledge [7,8]. An increasing number of participants in decision-making and an increasing amount of information can result in conflicting perspectives, different perceptions of facts and even half-truths or half-lies when information is not available, missing or not properly explained. There is no dominant perspective and all evidence seems subjective. This is the nightmare scenario in which ‘all truths are equal’. It is important to point out that the constructivist perspective of looking at the interactions between science and society is not an attempt to impose a particular world-view; it is merely an attempt to understand the mechanisms of these interactions. It attempts to explain why, for example, anti-GMO activists destroy experimental field trials without any scientific proof regarding the harm of such experiments. In addition, constructivism does not attempt to destroy the credibility of science, nor to overemphasize alternative knowledge, but to offer possibilities for wider participation in policy-making, especially in contentious cases when the lines between the public and experts are no longer clear [8].
In this situation, expert knowledge is not meant to be replaced by non-expert knowledge, but to be enriched by it. Nonetheless, the main question is whether scientific objectivity can prevail when science meets society. The answer should be yes. Even when several seemingly valid perspectives persist, objective facts are and should be the foundation of decisions taken. Scientific facts do matter and there are objective frameworks in place to prove or disprove the validity of information. Yet, in settling disputes, the decision must also be accountable to prevent loss of trust. By establishing frameworks for inclusive discussions and acknowledging the role of non-expert knowledge, either by indicating areas of public concern or by improving the communication of scientific facts, consent and thus support for the decision can be achieved. Moreover, scientific facts are important, but they are only part of an informational package. In particular, the choice of words and the style of writing can become more important than the factual content of a message. Scientists cannot communicate to the wider public using scientific jargon and then expect unconditional trust. People tend to mistrust things they cannot understand. To be part of a decision-making process, members of the public need access to scientific information presented in an understandable manner. The core issue is communication, or more specifically, translation: explaining facts and findings by considering the receiver and context, and adapting the message and language accordingly. Scientists must therefore translate their work. Equally important, they must do this proactively to take advantage of social constructivism and its view of science.
By understanding how controversies around new scientific discoveries and scientific expertise arise, they can devise better communication strategies. Some examples show how better interaction between science and society—such as the involvement of more stakeholders and the use of appropriate language in communication—can raise awareness and acceptability of previously contentious technologies. In Burkina Faso in 1999, Monsanto partnered with Africare to provide farmers with GM cotton to address pest resistance to pesticides and to increase yields. The plan was originally met with suspicion from the public and public research institutes, but the partners managed to build trust among the different stakeholders by providing transparent and correct information. The project started with a public–private partnership. By being open about their motives, including profit-making, and acknowledging and discussing any potential risks, the project gradually achieved the full support of the main partners [9]. Another challenge was the relationship between scientists and journalists. By using scientific communicators who were both open to dialogue and careful to keep the discussion within scientific boundaries, the relationship with the press improved [10]. In this case, efforts to translate scientific knowledge included transparency of information and contextualizing its delivery, as well as an increasingly wider participation of stakeholders in the development and commercialization of GM cotton. When the Philippines, the first Asian country to adopt a GM food crop, approved Bt maize, environmental NGOs and the Catholic Church opposed the crop with regular protests.
These slowly dissipated as farmers gradually adopted Bt maize [11] and the reporting media focused less on sensationalist stories [12]. Between 2000 and 2009, media coverage contributed substantially to a mostly positive (41%) or neutral (38%) public perception of biotechnology in the Philippines [12]. Most newspaper reports focused on the public accountability of biotechnology governance and analysed the validity of scientific information, together with the way in which conflicts in biotechnology research were managed. Science writers translated scientific facts into language that the wider public could understand. In addition, sources in which the public placed trust—either scientists or environmentalists—were cited in the media, which helped to facilitate public discussion [12]. In this case, the efforts of science writers to provide balanced, well-informed coverage, as well as a platform for public discussions, effectively translated the scientific facts and improved public opinion of Bt maize. Constructivism is not a threat to science. It is a concept that looks at the components and the processes through which a scientific theory or fact emerges; it is not an alternative to these processes. In fact, scientists should consider embracing constructivism, not only to understand what happens with the products of their labour beyond the laboratory, but also to understand the forces that determine the fate of scientific developments. We live in a complex world in which individual actors are empowered through modern communication tools. This might make it more challenging to prove and maintain scientific objectivity, but it does not make it unnecessary. Public decision-making requires an objective fact base for all decisions concerning the use of scientific discoveries in society.
If scientists want to prevent their messages from being misunderstood or hijacked for political purposes, they should consider proactively translating their research for a wider audience themselves, in an inclusive and contextualized manner.

Monica Racovita

6.
L Bornmann, EMBO Reports (2012) 13(8): 673–676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge. Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the endless frontier, and it has become the underlying rationale for public support and funding of science. However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned the development of measures both to assess the quality of scientific research itself and to determine its societal impact. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research.
The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so, harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex's Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact. The problem actually begins with defining the ‘societal impact of research’. A series of different concepts has been introduced: ‘third-stream activities’ [4], ‘societal benefits’ or ‘societal quality’ [5], ‘usefulness’ [6], ‘public values’ [7], ‘knowledge transfer’ [8] and ‘societal relevance’ [9, 10].
Yet, each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas. In this context, ‘societal benefits’ refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits’ are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits’ add to the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits’ increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11]. Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14].
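To make the contrast with societal impact concrete, the two scientific-impact indicators mentioned above are simple enough to compute from citation data. The following is a minimal Python sketch, assuming only a list of per-paper citation counts; the function names and the toy numbers are illustrative, not drawn from any of the cited frameworks.

```python
def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # paper at this rank still clears the threshold
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the citable items from those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative only: five papers cited 10, 8, 5, 4 and 3 times give h = 4,
# because four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))
print(impact_factor(200, 100))
```

Both indicators reduce to arithmetic over well-curated citation databases, which is precisely the kind of accepted, comparable dataset that the essay notes is missing for societal impact.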
For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact. A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse, complex and contingent, and it is not clear what should be attributed to research or to other inputs. The third is the internationality problem, which arises as a result of the international nature of R&D and innovation and makes attribution virtually impossible. Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impact. In addition, there are four other problems. First, it is hard to find experts to assess societal impact that is based on peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17].
Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution's specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that, “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18]. Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20].
The recommendation from this consultation is that impact should be measured in a quantifiable way, and expert panels should review narrative evidence in case studies supported by appropriate indicators [19,21]. Many of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact. The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art’ [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19].
‘Mode 1’ describes research governed by the academic interests of a specific community, whereas ‘Mode 2’ is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is conducted in the context of application [19]. The new REF will also entail changes in budget allocations: in the evaluation of a research unit for the purpose of funding allocation, the societal impact dimension will determine 20% of the assessment [19]. The final REF guidance contains lists of examples of different types of societal impact [24]. Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18]. Yet, the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science whereby scientists receive credit—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact.
“The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar’, that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work. The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19, 28, 29]. Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance.
Rymer has noted that, “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18]. Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support of basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.

7.
The scientific process requires a critical attitude towards existing hypotheses and obvious explanations. Teaching this mindset to students is both important and challenging.

People who read about scientific discoveries might get the misleading impression that scientific research produces a few rare breakthroughs—once or twice per century—and a large body of ‘merely incremental’ studies. In reality, however, breakthrough discoveries are reported on a weekly basis, and one can cite many fields just in biology—brain imaging, non-coding RNAs and stem cell biology, to name a few—that have undergone paradigm shifts within the past decade.

The truly surprising thing about discovery is not just that it happens at a regular pace, but that most significant discoveries occurred only after the scientific community had already accepted another explanation. It is not merely the accrual of new data that leads to a breakthrough, but a willingness to acknowledge that a problem that is already ‘solved’ might require an entirely different explanation. In the case of breakthroughs or paradigm shifts, this new explanation might seem far-fetched or nonsensical and not even worthy of serious consideration. It is as if new ideas are sitting right in front of everyone, but in their blind spots, so that only those who use their peripheral vision can see them.

Scientists do not all share any single method or way of working. Yet they tend to share certain prevalent attitudes: they accept ‘facts’ and ‘obvious’ explanations only provisionally, at arm’s length, as it were; they not only imagine alternatives, but—almost as a reflex—ask themselves what alternative explanations are possible.

When teaching students, it is a challenge to convey this critical attitude towards seemingly obvious explanations. In the spring semester of 2009, I offered a seminar entitled The Process of Scientific Discovery to Honours undergraduate students at the University of Illinois-Chicago in the USA.
I originally planned to cover aspects of discovery such as the impact of funding agencies, the importance of mentoring and hypothesis-driven as opposed to data-driven research. As the semester progressed, however, my sessions moved towards ‘teaching moments’ drawn from everyday life, which forced the students to look at familiar things in unfamiliar ways. These served as metaphors for certain aspects of the process by which scientists discover new paradigms.

For the first seven weeks of the spring semester, the class read Everyday Practice of Science by Frederick Grinnell [1]. During the discussion of the first chapter, one of the students noted that Grinnell referred to a scientist generically as ‘she’ rather than ‘he’ or the neutral ‘he or she’. This use is unusual and made her vaguely uneasy: she wondered whether the author was making a sexist point. Before considering her hypothesis, I asked the class to make a list of assumptions that they took for granted when reading the chapter, together with possible explanations for the use of ‘she’ in the first chapter, no matter how far-fetched or unlikely they might seem.

For example, one might assume that Frederick Grinnell or ‘Fred’ is from a culture similar to our own. How would we interpret his behaviour and outlook if we knew that Fred came from an exotic foreign land? Another assumption is that Fred is male; how would we view the remark if we discovered that Frederick is short for Fredericka? We have equally assumed that Fred, like most humans, wants us to like him. Instead, perhaps he is being intentionally provocative in order to get our attention or move us out of our comfort zone. Perhaps he planted ‘she’ as a deliberate example for us to discuss, as he does later in the second chapter, in which he deliberately hides a strange item in plain sight within one of the illustrations in order to make a point about observing anomalies. Perhaps the book was written not by Fred but by a ghost writer?
Perhaps the ‘she’ was a typo?

Looking for patterns throughout the book, and in Fred’s other writing, might persuade us to discard some of the possible explanations: does ‘she’ appear just once? Does Fred use other unusual or provocative turns of phrase? Does Fred discuss gender bias or sexism explicitly? Has anyone written or complained about him? Of course, one could ask Fred directly what he meant, although without knowing him personally, it would be difficult to know how to interpret his answer or whether to take his remarks at face value. Notwithstanding the answer, the exercise is an important lesson about considering and weighing all possible explanations.

Arguably, the most prominent term used in science studies is the notion of a ‘paradigm’. I use this term with reluctance, as it is extraordinarily ambiguous. For example, it could simply refer to a specific type of experimental design: a randomized, placebo-controlled clinical trial could be considered a paradigm. In the context of science studies, however, it most often refers to the idea of large-scale leaps in scientific world views, as promoted by Thomas Kuhn in The Structure of Scientific Revolutions [2]. Kuhn’s notion of a paradigm can lead one to believe—erroneously in my opinion—that paradigm shifts are the opposite of practical, everyday scientific problem-solving.

Instead, I propose here a definition of ‘paradigm’ that emphasizes not the nature of the problem, the type of discovery or the scope of its implications, but rather the psychology of the scientist.
A scientist viewing a problem or phenomenon resides within a paradigm when he or she does not notice, and cannot imagine, that an alternative way of looking at things needs to be considered seriously. Importantly, a paradigm is not a viewpoint, model, interpretation, hypothesis or conclusion. A paradigm is not the object that is viewed but the lenses through which it is viewed. A paradigm is recognized by the set of assumptions that an observer might not realize he or she is making, but which imply many automatic expectations and simultaneously prevent the observer from seeing the issue in any other fashion.

For example, the teacher–student paradigm feels natural and obvious, yet it is merely set up by habit and tradition. It implies lectures, assignments, grades, ways of addressing the professor and so on, all of which could be done differently, if we had merely thought to consider alternatives. What feels most natural in a paradigm is often the most arbitrary. When we have a birthday, we expect to have a cake with candles, yet there is no natural relationship at all between birthdays, cakes and candles. In fact, when something is arbitrary or conventional yet feels entirely natural, that is an important clue that a paradigm is present.

It is certainly natural for people to colour their observations according to their expectations: “To a man with a hammer, everything looks like a nail,” as Mark Twain put it. However, this is a pitfall that scientists (and doctors) must try hard to avoid. When I was a first-year medical student at Albert Einstein College of Medicine in New York City, we took a class on how to approach patients. As part of this course, we attended a session in which a psychiatrist interviewed a ‘normal, healthy old person’ in order to understand better the lives and perspectives of the elderly.

A man came in, and the psychiatrist began to ask him some benign questions.
After about 10 minutes, however, the man began to pause before answering; then his answers became terse; then he said he did not feel well, excused himself and abruptly left the room. The psychiatrist continued to lecture to the students for another half-hour, analysing and interpreting the halting responses in terms of the emotional conflicts that the man was experiencing. ‘Repression’, ‘emotional blocks’ and ‘reaction formation’ were some of the terms bandied about.

However, unbeknown to the class, the man had collapsed just on the other side of the classroom door. Two cardiologists happened to be walking by and instantly realized the man was having an acute heart attack. They instituted CPR on the spot, but the man died within a few minutes.

The psychiatrist had been told that the man was healthy, and thus interpreted everything that he saw in psychological terms. It never entered his mind that the man might have been dying in front of his eyes. The cardiologists saw a man having a heart attack, and it never entered their minds that the man might have had psychological issues.

The movie The Sixth Sense [3] resonated particularly well with my students and served as a platform for discussing attitudes that are helpful for scientific investigation, such as “keep an open mind”, “reality is much stranger than you can imagine” and “our conclusions are always provisional at best”. Best of all, The Sixth Sense demonstrates the tension that exists between different scientific paradigms in a clear and beautiful way. When Haley Joel Osment says, “I see dead people,” does he actually see ghosts? Or is he hallucinating?

It is important to emphasize that these are not merely different viewpoints, or different ways of defining terms.
If we argued about which mountain is higher, Everest or K2, we might disagree about which kind of evidence is more reliable, but we would fundamentally agree on the notion of measurement. By contrast, in The Sixth Sense, the same evidence used by one paradigm to support its assertion is used with equal strength by the other paradigm as evidence in its favour. In the movie, Bruce Willis plays a psychologist who assumes that Osment must be a troubled youth. However, the fact that he says he sees ghosts is also evidence in favour of the existence of ghosts, if you do not reject out of hand the possibility of their existence. These two explanations are incommensurate. One cannot simply weigh all of the evidence, because each side rejects the type of evidence that the other side accepts, and regards the alternative explanation not merely as wrong but as ridiculous or nonsensical. It is in this sense that a paradigm represents a failure of imagination—each side cannot imagine that the other explanation could possibly be true, or at least plausible enough to warrant serious consideration.

The failure of imagination means that each side fails to notice or to seek ‘objective’ evidence that would favour one explanation over the other. For example, during the episodes when Osment saw ghosts, the temperature in the room fell precipitously and he could see his own breath. This certainly would seem to constitute objective evidence favouring the ghost explanation, and the fact that his mother had noticed that the heating in her apartment was erratic suggests that the temperature change was not simply another imagined symptom.
But the mother assumed that the problem was in the heating system and did not even conceive that this might be linked to ghosts—so the ‘objective’ evidence certainly was not compelling or even suggestive on its own.

Osment did succeed eventually in convincing his mother that he saw ghosts, and he did it in the same way that any scientist would convince his colleagues: namely, he produced evidence that made perfect sense in the context of one, and only one, explanation. First, he told his mother a secret that he said her dead mother had told him. This secret was about an incident that had occurred before he was born, and presumably she had never spoken of it, so there was no obvious way that he could have learned about it. Next, he told her that the grandmother had heard her say “every day” when standing near her grave. Again, the mother had presumably visited the grave alone and had not told anyone about the visit or about what was said. So, the mother was eventually convinced that Osment must have spoken with the dead grandmother after all. No other explanation seemed to fit all the facts.

Is this the end of the story? We, the audience, realize that it is possible that Osment had merely guessed about the incidents, heard them second-hand from another relative or (as with professional psychics) might have retold his anecdotes whilst looking for validation from his mother. The evidence seems compelling only because these alternatives seem even less likely. It is in this same sense that when scientists reach a conclusion, it is merely a place to pause and rest for a moment, not a final destination.

Near the end of the course, I gave a pop-quiz asking each student to give a ‘yes’ or ‘no’ answer, plus a short one-sentence explanation, to the following question: Donald Trump seems to be a wealthy businessman. He dresses like one, he has a TV show in which he acts like one, he gives seminars on wealth building and so on.
Everything we know about him says that he is wealthy as a direct result of his business activities. On the basis of this evidence, are we justified in concluding that he is, in fact, a wealthy businessman?

About half the class said ‘yes’: if all the evidence points in one direction, that suffices. About half the class said ‘no’: the stated evidence is circumstantial and we do not know, for example, what his bank balance is or whether he has more debt than equity. All the evidence we know about points in one direction, but we might not know all the facts.

How do we know whether or not we know all the facts? Again, it is a matter of imagination. Let us review a few possible alternatives. Maybe his wealth comes from inheritance rather than business acumen; or from silent partners; or from drug running. Maybe he is dangerously over-extended and living on borrowed money; maybe his wealth is more apparent than real. Maybe Trump Casinos made up the role of Donald Trump as its symbol, the way McDonald’s made up the role of Ronald McDonald?

Several students complained that this was a ridiculous question. Yet I had posed it just after Bernard Madoff’s arrest was blanketing the news. Madoff had been known as a billionaire investor genius for decades and had even served as chairman of the NASDAQ stock exchange. As it turned out, his money was obtained by a massive Ponzi scheme. Why was Madoff able to succeed for so long? Because it was inconceivable that such a famous public figure could be a common con man, and the people around him could not imagine the possibility that his livelihood needed to be scrutinized.

To this point, I have emphasized the benefits of paying attention to anomalous, strange or unwelcome observations.
Yet paradoxically, scientists often make progress by (provisionally) putting aside anomalous or apparently negative findings that seem to invalidate or distract from their hypothesis. When Rita Levi-Montalcini was assaying the neurite-promoting effects of tumour tissue, she had predicted that this was a property of tumours and was devastated to find that normal tissue had the same effects. Only by ‘ignoring’ this apparent failure could she move forward to characterize nerve growth factor and eventually understand its biology [4].

Another classic example is Huntington disease—a genetic disorder in which an inherited alteration in the gene that encodes a protein, huntingtin, leads to toxicity within certain types of neuron and causes a progressive movement disorder associated with cognitive decline and psychiatric symptoms. Clinicians observed that the offspring of Huntington disease patients sometimes showed symptoms at an earlier age than their parents, and this phenomenon, called ‘genetic anticipation’, could affect successive generations at earlier and earlier ages of onset. This observation was met with scepticism and sometimes ridicule, as everything that was known about genetics at the time indicated that genes do not change across generations. Ascertainment bias was suggested as a much more probable explanation; in other words, once a patient is diagnosed with Huntington disease, their doctors will look at their offspring much more closely and will thus tend to identify the onset of symptoms at an earlier age. Eventually, once the detailed genetics of the disease were understood at the molecular level, it was shown that the structure of the altered huntingtin gene does change. Genetic anticipation is now an accepted phenomenon.

What does this teach us about discovery?
Even when looked at carefully, not every anomaly is attractive enough or ‘ripe’ enough to be pursued when first noticed. The biologists who identified the structure of the abnormal huntingtin gene did eventually explain genetic anticipation, although they set aside the puzzling clinical observations and proceeded pragmatically according to their (wrong) initial best guess as to the genetics. The important thing is to move forward.

Finally, let us consider the case of Grigori Perelman, an outstanding mathematician who solved the Poincaré Conjecture a few years ago. He did not tell anyone he was working on the problem, lest their ‘helpful advice’ discourage him; he posted his historic proof online, bypassing peer-reviewed journals altogether; he turned down both the Fields Medal and a million-dollar prize; and he has refused professorial posts at prestigious universities. Because he made a deliberate decision to eschew the external incentives associated with science as a career, his choices have been written off as examples of eccentric anti-social behaviour. I suggest, however, that he might simply have recognized that the usual rules for success and the usual reward structure of the scientific community can create roadblocks, which had to be avoided if he was to solve a supposedly unsolvable problem.

If we cannot imagine new paradigms, then how can they ever be perceived, much less tested? It should be clear by now that the ‘process of scientific discovery’ can proceed by many different paths. However, here is one cognitive exercise that can be applied to almost any situation. (i) Notice a phenomenon, even if (especially if) it is familiar and regarded as a solved problem; regard it as if it is new and strange. In particular, look hard for anomalous and strange aspects of the phenomenon that are ignored by scientists in the field.
(ii) Look for the hidden assumptions that guide scientists’ thinking about the phenomenon, and ask what kinds of explanation would be possible if the assumptions were false (or reversed). (iii) Make a list of possible alternative explanations, no matter how unlikely they seem to be. (iv) Ask if one of these explanations has particular appeal (for example, if it is the most elegant theoretically; if it can generalize to new domains; and if it would have great practical impact). (v) Ask what kind of evidence would allow one to favour that hypothesis over the others, and carry out experiments to test the hypothesis.

The process just outlined is not something that is taught in graduate school; in fact, schools teach a lot about how to test hypotheses but little about how to find good hypotheses in the first place. Consequently, this cognitive exercise is not often carried out within the brain of an individual scientist. Yet this creative tension happens naturally when investigators from two different fields, who have different assumptions, methods and ways of working, meet to discuss a particular problem. This is one reason why new paradigms so often emerge in the cross-fertilization of different disciplines.

There are, of course, other, more systematic ways of searching for hypotheses by bringing together seemingly unrelated evidence. The Arrowsmith two-node search strategy [5], for instance, is based on distinct searches of the biomedical literature to retrieve articles on two different areas of science that have not been studied in relation to each other, but that the investigator suspects might be related in some fashion. The software identifies common words or phrases, which might point to meaningful links between the two literatures.
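The core of a two-node search can be sketched in a few lines of Python: find the content words shared by two otherwise unconnected sets of articles and treat them as candidate bridging concepts. This is a minimal, illustrative toy under stated assumptions: the tokenizer, stopword list and example sentences are invented for this sketch, and the real Arrowsmith system works on MEDLINE records with far more sophisticated phrase extraction and filtering.

```python
def term_set(docs):
    """Return the lowercased content words appearing in a list of documents."""
    stopwords = {"the", "of", "in", "and", "a", "to", "is", "with", "for"}
    words = set()
    for doc in docs:
        # crude tokenization: strip basic punctuation, split on whitespace
        for token in doc.lower().replace(",", " ").replace(".", " ").split():
            if token not in stopwords and len(token) > 2:
                words.add(token)
    return words

def bridging_terms(lit_a, lit_b):
    """Terms common to two disjoint literatures: candidate 'B-terms' linking them."""
    return sorted(term_set(lit_a) & term_set(lit_b))

# Toy corpora loosely echoing Swanson's classic fish-oil / Raynaud example,
# where 'blood viscosity' was one of the bridging concepts.
lit_a = ["Fish oil reduces blood viscosity and platelet aggregation."]
lit_b = ["Raynaud disease involves high blood viscosity in the extremities."]
print(bridging_terms(lit_a, lit_b))  # → ['blood', 'viscosity']
```

In a realistic setting, the two literatures would each contain thousands of titles and abstracts retrieved by separate queries, and the shared-term list would be ranked and filtered before a human judged which links are worth pursuing.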
This is but one example of ‘literature-based discovery’ as a heuristic technique [6], which, in turn, is part of the larger data-driven approach of ‘text mining’ or ‘data mining’: looking for unusual, new or unexpected patterns within large amounts of observational data. Regardless of whether one follows hypothesis-driven or data-driven models of investigation, let us teach our students to repeat the mantra: ‘odd is good’!

Neil R Smalheiser

8.
Samuel Caddick 《EMBO reports》2008,9(12):1174-1176

9.
The life sciences are at loggerheads with society, and neither much trusts the other. To unleash the full potential of the molecular life sciences, a new contract with society and more organized ways of doing research are needed.

Suppose that an international group of scientists presents a 15-year, €750 million research programme to unravel the molecular mechanisms of metabolic syndrome, a disorder implicated in obesity, type 2 diabetes, cardiovascular diseases, cancer and other diseases. The results from this project will neither replace obvious preventive measures—such as reducing food intake or increasing exercise—nor will they magically generate pharmaceutical treatments. Instead, the research aims to gain a systems-level understanding of metabolic syndrome that could lead to more effective prevention schemes, improved food quality, improved early diagnosis of people at higher risk, and better therapies and drugs. From an economic point of view, this is an excellent investment, as the costs of obesity and its co-morbidities in the European Union are estimated to increase to €100 billion per year by 2030 [1]. If the proposed research project helped to lower these costs by only 10%, it would generate a considerable return on investment.

Sadly, such a project would be unlikely to receive funding. The reason is not that it would be scientifically unrealistic or unfeasible, or that society does not want to support expensive research programmes. The Manhattan Project to develop the atomic bomb, the Large Hadron Collider at CERN to identify the Higgs boson and the European Southern Observatory in Chile are all examples of large-scale research programmes that are publicly funded. Rather, the life sciences suffer from a unique set of problems that have developed in the past decades and would prevent such a project from receiving popular support and funding. This article explores why this is the case and how the modern life sciences could contribute more to society.
Specifically, we argue that two areas need rethinking: the embedding of the life sciences in society, and the way that research is organized.

There are more than seven billion humans on this planet who need food, energy and health care, and life science research has a huge potential to address these needs. However, critics point out that many of the promises made by life scientists in the past have still not materialized. One example is the promises made to justify the US $3 billion spent on the Human Genome Project. It was supposed to provide a ‘blueprint of life’ that would quickly lead to new approaches for curing diseases. In the end, however, despite its success at generating new research fields and knowledge, the Human Genome Project has not (yet) lived up to its promises.

Worse still, some critics perceive the life sciences as a problem that is creating physical and moral hazards. Rather than writing a blank cheque to allow scientists to pursue their research goals, governments are increasingly demanding control over the direction and application of research. This approach tends to reward short-term applications of scientific resources to help solve societal problems. Moreover, the success of projects has to be demonstrated at the application stage, before any of the research has even begun, which is fundamentally incompatible with the trial-and-error processes at the heart of creative research. Long-term, in-depth investments in research have become unpopular. Ironically, this ‘short-termism’ prevents the potential of the life sciences from being realized. The life sciences and society seem to be in a deadlock.

The life sciences must therefore regain the trust of society.
This cannot be done merely by emphasizing academic freedom and the autonomy of science, or by promoting the so-called ‘cornucopia’ of science and technology, implying that both bring only precious gifts. It is also not useful to suggest that morals should follow scientific and technological developments rather than shape them. It is equally counterproductive when scientists deny responsibility for how their findings are applied after they leave the laboratory. As Ravetz [2] famously remarked: “Scientists claim credit for penicillin, while Society takes the blame for the Bomb”. In reality, the definition of ends is always influenced by the available means.

One approach to restoring the trust between the life sciences and society is to create platforms that allow for open and symmetrical dialogue. Fortunately, we do not have to start from scratch, as the recent discussions about responsible research and innovation (RRI; [3,4,5]) have already begun this process. According to René von Schomberg, policy officer at the European Commission, Directorate-General for Research, RRI is “a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products” [5]. An example is the EU ‘Code of conduct for responsible nanosciences and nanotechnologies research’. The concept is also expected to have a major role in the upcoming EU Framework Programme for Research and Innovation, ‘Horizon 2020’, and national research councils in the UK, Norway and the Netherlands are supporting initiatives under this heading.
The National Nanotechnology Initiative in the USA heralds ‘responsible development’, and the Presidential Commission for the Study of Bioethical Issues recommends ‘responsible stewardship’ and ‘prudent vigilance’ in relation to synthetic biology.

The starting point of RRI is that society, science, technology and morality comprise a single system: if one component changes, the others are also affected. The traditional division of labour—science provides knowledge and instruments, whilst society determines values and applications—therefore no longer works. Life scientists have to acknowledge and accept that society is co-shaping their agenda. Scientists should also realise that, vice versa, they co-shape society, rather than just offering knowledge and tools. In other words, science and society co-evolve [6]. Bringing the life sciences and society closer together requires the concerted efforts of life scientists, social scientists, ethicists, legal experts, economists, policy-makers, market parties and laypeople.

Research and technology are rational activities at the micro level. Science is superior to other methods for creating reliable, testable knowledge, and technology is unmatched in providing solutions to many problems. But both tend to become irrational at higher levels of organization—we drive highly sophisticated cars and still get stuck in traffic jams, nuclear waste remains a huge problem for generations to come, and knowledge and technology have enabled increasingly lethal weaponry. Science and technology therefore need moral guidelines—such as ‘the precautionary principle’—to direct what kind of knowledge is worth pursuing and should be applied, why and for whom. This is a one-sided view, however, that ignores the perspective of the scientific method: as science proceeds by trial and error, it must be fundamentally open to pursuing any avenue of knowledge.
As such, new scientific insights and technological opportunities can often necessitate a reappraisal of established morals. Keeping science, technology and morality in contact with each other requires more than rules and prohibitions that can be ticked off on a form. It needs a culture of attentiveness and reflection to avoid tunnel vision, whilst allowing flexibility and improvisation to learn from mistakes and change course if needed.

During the past two decades, many experiments in public understanding of, awareness of and participation in science and technology have been conducted. In the Netherlands, for instance, these have included science cafes, citizen panels [7], nation-wide public debates about nuclear energy [8] and about biotechnology and food in 2001 [9], and public participation initiatives (http://www.nanopodium.nl). It is yet to be determined how effective these efforts have been. Appreciating that science and technology, as well as society and morality, constitute parts of a larger system demands skills, institutions and procedures that have not yet been adequately developed. Indeed, many scientists continue to adhere to a ‘knowledge deficit’ model of public outreach that has scientists ‘explaining’ scientific developments to an audience that is perceived to be ignorant at best and technophobic at worst. On the other hand, NGOs have sometimes hijacked discussions by simply being ‘against’, rather than contributing to the debate in an open and constructive manner.

Another problem is that the life sciences are poorly organized: they lack an organization that represents the community and can professionalize the relationship between society and the life sciences.

What are the requirements for developing the relationship between society and the life sciences responsibly? First, it is important that scientists are explicit about the trial-and-error character of research, and are honest and transparent about the results they expect, rather than inducing unrealistic expectations.
Laypeople should be made aware of how the life sciences function and what they might achieve. It is also important that citizens understand that science produces theories that can be corroborated to some degree rather than absolute certainties, and that technology offers solutions that can do both more and less than expected. This should also make clear that the distinction between fundamental science and applied technology is both real and illusory. It is real because it is important to allow space for unfettered curiosity. It is illusory because even the most ‘fundamental’ research is usually embedded in a general normative vision about its possible use. Moreover, even truly applied research can generate new fundamental insights.

Second, it is important to engage societal actors as early as possible in the research enterprise, so that they can help determine the research agenda rather than be confronted with final results and products. A specific suggestion would be to attach a ‘layman advisory board’ to large research programmes. Although the tasks of such boards should be defined carefully to avoid tying them up in minutiae, they can help to enhance the involvement of society in the life sciences. Engaging interested individuals and stakeholders early on in the process and offering them an opportunity to influence the research agenda would help to foster a constructive attitude.

These boards should put the definitions, visions and goals of scientific research to the social test and set research priorities accordingly. Research is inevitably conducted with reference to priorities—such as advancing knowledge, improving public health, economic development and military use—but setting these priorities exceeds the authority of science. We need social scientists, philosophers, lawyers, policy-makers, politicians, companies, opinion-makers, NGOs and patient organizations to discuss with life scientists how scientific results can be developed to benefit society.
For example, because knowledge can be viewed as an economically valuable resource, we have to consider whether intellectual property laws are adequate to deal with new forms of knowledge. The ongoing debate about the legal and moral problems of intellectual property [10] also highlights issues of fairness and justice. How will science and technology influence prevailing ideas about the meaning of a 'healthy life'? How much do we value physical and mental health, how much are we willing to invest in this value, and how will society react to choices made by individuals? The life sciences are creating enormous data collections about individuals, which raises issues of access, privacy, ownership and consent. Discussing these issues would neither put the life sciences in a strait-jacket nor curtail curiosity-driven research. On the contrary, we should be more worried about the current situation, in which governments set the agenda by choosing which areas are funded without consulting the life sciences community or other stakeholders.

The sequencing of the human genome and the rapid development of new technologies and research fields—genomics, proteomics, advanced light microscopy, bioinformatics, systems biology and synthetic biology—have not yet paid off in terms of highly visible applications that provide a significant benefit to society. However, there are many examples of progress, both scientific and clinical, such as the ongoing Encyclopaedia of DNA Elements project (ENCODE; http://encodeproject.org), the stratification of patients for anti-cancer therapies based on molecular markers, and the creation of genetically modified disease-resistant plants. Notwithstanding this progress, applying the new technologies has made scientists realize that biological systems are much more complex than anticipated.
The multi-layered and multi-scale complexity of cells, organisms and ecosystems is a huge challenge for research in terms of generating, analysing and integrating enormous amounts of data to gain a better understanding of living systems at all levels of organization.

Remarkably, however, the life sciences have not adapted accordingly to tackle bigger challenges with larger teams, as their colleagues in physics and astronomy have. Most research is still carried out by small groups or collaborations that are woefully inadequate to address the full complexity of living systems. This type of research is grounded in the history of molecular biology, when scientists focused on individual genes, proteins and metabolic pathways. Scientists hope that many small discoveries and advances made by many individual research groups will eventually add up to a more complete picture—an approach that clearly does not work and must change. The new challenge is to systematically acquire and integrate comprehensive data sets on the huge number of components at the level of cells, tissues, organs and complete organisms. This requires the life sciences to scale up their research efforts into larger projects.

The putative research programme described at the beginning of this paper could help to reduce the health and economic burden of metabolic syndrome. This multifactorial disorder is an excellent example of the complex interplay between organs, tissues, cells, molecules, lifestyle, genetic factors, age and stress. Unravelling this daunting but finite complexity requires a major and well-coordinated effort that combines diverse skills and disciplines, including biology, chemistry, medicine, mathematics, physics and engineering. Similar considerations apply to research into areas such as cancer, Alzheimer disease or the development of efficient biofuels. Why, then, are we not making the necessary investments?
The answer to this question has four components, which we address below: the scaling of research programmes, their management, the academic culture and funding.

First, one could argue that many national and European research programmes already focus on aspects of metabolic syndrome. Together, they probably represent an investment of several hundred million Euros. So why spend another €750 million? The problem is that the results from individual research efforts simply do not add up, owing to a lack of standardization with regard to experiments, model systems and protocols. Given the complexity of the disease, defining standard operating procedures (SOPs) is not a trivial task and requires a considerable research effort. Experience shows that developing SOPs should be an integral part of larger research programmes; moreover, SOPs change as our knowledge increases. SOPs can only be effective and stimulate research in the context of a sufficiently large and receptive research community dedicated to a common, well-defined research goal. An example of the successful development and implementation of SOPs in a 'learning-by-doing' setting is the German Virtual Liver Network (VLN) programme (http://www.virtual-liver.de). Obviously, a considerable fraction of the €750 million should come from regrouping and readjusting existing research in the field of metabolic syndrome.

Second, we have remarkably little experience in managing concerted large-scale research efforts in the life sciences in the range of €100–1,000 million. The Human Genome Project cost US $3 billion. However, genomes are just DNA sequences, and it is relatively easy to define SOPs and integrate the contributions of many research groups. In terms of research management it was relatively straightforward compared with, for instance, a metabolic syndrome programme, which would have many more dimensions and components.
It would require a highly coordinated approach, as the experiments of the participating research groups from different institutes and countries are strongly interdependent and the results must add up to a larger picture.

To keep such a complex research programme on track, on time and within budget, strategic decisions will inevitably need to be made at a central level, rather than by individual researchers. A serious risk is that scaling up coordinated research efforts will result in excessive bureaucracy and inflexibility that could kill creativity; avoiding this is a major challenge. The life sciences could learn here from large, long-term projects in other fields, such as high-energy physics, astronomy and ecology. An instructive example is the Census of Marine Life programme [11], which successfully measured the diversity, distribution and abundance of marine life in the oceans over ten years. It involved 80 countries and 640 institutions and had a budget of US $650 million [12]. Recent overviews and analyses of large-scale research efforts in the life sciences can be found in [13,14,15].

A third aspect is academic culture, which cherishes freedom, independence and competition, and will not necessarily mesh well with the idea of large, tightly managed research programmes. Even so, research institutions will probably compete to participate in goal-oriented, large-scale research programmes. In fact, this is largely similar to the way most research is funded presently, for instance through the Framework Programmes of the European Commission, except that the total collective effort is scaled up by one to two orders of magnitude. Moreover, as large-scale efforts involve longer timescales of 10–15 years, they would offer a more stable basis for financing research.
However, as the cooperation between parties and their interdependency will need to increase, participating research institutes and consortia will be held much more accountable for their contribution to the overall goals. 'Take-the-money-and-run' is no longer an option. This will obviously require explicit agreements between a programme director and participating institutions.

In addition, academic institutions should rethink their criteria for selecting and promoting researchers. The impact factor, h-index and citation scores [16] will have to be abandoned or modified, as large-scale research efforts with an increasing number of multi-author publications will decrease their ability to assess individuals. Consequently, less emphasis will be put on first or last authorships, whilst project and team management skills might become more relevant.

Academic freedom and creativity will be just as essential to a collective research enterprise as they are to small projects and collaborations. Large-scale research programmes will create new scientific questions and challenges, but they will not tell investigators how to address these; academics will be free to choose how to proceed within the programme. Moreover, researchers will benefit from tapping into well-structured, large knowledge bases, and they will have early access to the relevant data of others. Hence, being part of a large, well-organized research community brings benefits that might compensate for a perceived decrease in independence. Again, all of this depends strongly on how a large-scale programme is organized and managed, including the distribution of responsibilities, data-sharing policies and exchanges of expertise.
Competition will probably remain part of the scientific endeavour, but it is important that it does not hamper collaboration and data sharing.

Whilst we stress the need for larger-scale research efforts in the life sciences to have an impact on society, we also want to acknowledge the crucial role of classical curiosity-driven research programmes. It is essential to develop a reasonable and effective balance between different types of research in the life sciences.

A fourth issue is the lack of adequate funding mechanisms for international, large-scale research efforts. Given the ambiguous relationship between the life sciences and society—as argued in the first part of this essay—this problem can only be solved if life scientists convince policy-makers, funders and politicians that research can significantly contribute to solving societal problems, and at reasonable cost. If not, the life sciences will not flourish and society will not profit. As argued above, research efforts should be scaled to the complexity of the systems they intend to investigate. There are only a few problem-focused research programmes with a volume of more than €50 million. The VLN, funded to the tune of €43 million over five years by the German ministry for education and research, comes close. Its aim is to deliver a multi-scale representation of liver physiology, integrating data from the molecular, cellular and organ levels. The VLN involves 69 principal investigators dispersed throughout Germany and is headed by a director who is responsible for keeping the programme on track and on time. Issues such as standardization, division of responsibilities between the director and principal investigators, and decision-making procedures are tackled as the programme develops.

If it is possible to rekindle the trust between society and the life sciences, will society be more willing to fund expensive, large-scale research programmes?
Previous examples of publicly funded scientific and technological programmes in the multi-billion Euro range are the Large Hadron Collider, the Apollo programme and the Human Genome Project. Amazingly, none of these contributed directly to human well-being, although the Human Genome Project did make such promises. Hence, a life science programme that convincingly targets a crucial societal problem, such as metabolic syndrome, should have a fair chance of being acceptable and fundable.

The above four issues must be addressed before the life sciences can successfully tackle major societal problems. This will need action from the research community itself, which painfully lacks an organization that can speak on behalf of life scientists and take the lead in discussions, both internally and between society and science. Any such organization could learn from other areas, such as high-energy physics (CERN), astronomy (European Southern Observatory) and space research (European Space Agency). It would be tremendously helpful if a group of life scientists were to put this issue on the agenda; this paper is meant to stimulate that process.

In summary, we argue that if society wants to benefit from what the modern life sciences have to offer, we must act on two parallel tracks. One is to bring the life sciences closer to society and accept that society, science and morality are inseparable. The other is to rethink how we organize research in the life sciences. Both tracks create major challenges that can only be tackled successfully if the life sciences get organized and create a body that can lead the debate. At a more fundamental level, we need to decide what type of knowledge we want to acquire and why. Clearly, generating knowledge for its own sake is important, but it must constantly be balanced against the values of society, which requires a dialogue between researchers and society.
Tsjalling Swierstra, Niki Vermeulen, Johan Braeckman, Roel van Driel

10.
Christian de Duve's decision to voluntarily pass away gives us pause to consider the value and meaning of death. Biologists have much to contribute to the discussion of dying with dignity.

Christian de Duve's voluntary passing on 4 May 2013 could be seen as the momentous contribution of an eminent biologist and Nobel laureate to the discussion about 'last things'. In contrast to his fellow scientists Ludwig Boltzmann and Alan Turing, who made the deliberate choice to end their lives in a state of depression and despair, de Duve "left with a smile and a good-bye", as his daughter told a newspaper.

What is the value and meaning of life? Is death inevitable? Should dying with dignity become an inalienable human right? Theologians, philosophers, doctors, politicians, sociologists and jurists have all offered their answers to these fundamental questions. The participation of biologists in the discussion is long overdue and should, in fact, dominate the discourse.

We can start from de Duve's premise—expressed in the subtitle of his book Vital Dust—that life is a cosmic imperative; a phenomenon that inevitably takes place anywhere in the universe where appropriate physicochemical conditions permit. Under such conditions, the second law of thermodynamics rules—prebiotic organic syntheses proceed, matter self-organizes into more complex structures and darwinian evolution begins, with its subsequent quasi-random walks towards increasing complexity. The actors of this cosmic drama are darwinian individuals—cells, bodies, groups and species—who strive to maintain their structural integrity and to survive as entities. By virtue of the same law, their components undergo successive losses of correlation, so that structures sustain irreparable damage and eventually break down. Because of this 'double edge' of the second law, life progresses in cycles of birth, maturation, ageing and rejuvenation.

Death is the inevitable link in this chain of events.
'The struggle for existence' is very much the struggle for individual survival, yet it is the number of offspring—the expression of darwinian fitness—that ultimately counts. Darwinian evolution is creative, but its master sculptor is death.

Humans are apparently the only species endowed with self-consciousness and thereby a strongly amplified urge to survive. However, self-consciousness has also made humans aware of the existence of death. The clash between the urge for survival and the awareness of death must inevitably have engendered religion, with its delusion of an existence after death, and it might have been one of the main causes of the emergence of culture. Culture divides human experience into two parts: the sacred and the profane. The sacred constitutes personal transcendence: the quest for meaning, the awe of mystery, creativity and aesthetic feelings, the capacity for boundless love and hate, the joy of playing, and peaks of ecstasy. The psychologist Jonathan Haidt observed in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion that "The great trick that humans developed at some point in the last few hundred thousand years is the ability to circle around a tree, rock, ancestor, flag, book or god, and then treat that thing as sacred. People who worship the same idol can trust one another, work as a team and prevail over less cohesive groups." He considers sacredness crucial for understanding morality. At present, biology knows almost nothing about human transcendence. Our ignorance of the complexity of human life bestows on it both mystery and sacredness.

The religious sources of Western culture, late Judaism and Christianity, adopted Plato's idea of the immortality of the human soul into their doctrines. The concept of immortality and eternity has continued to thrive in many secular versions and serves as a powerful force to motivate human creativity.
Yet, immortality is ruled out by thermodynamics, and the religious version of eternal life in continuous bliss constitutes a logical paradox—eternal pleasure would mean the eternal recurrence of everything across infinite time, with no escape; Heaven turned Hell. It is not immortality but temporariness that gives human life its value and meaning.

There is no 'existence of death'. Dying exists, but death does not. Death equals nothingness—no object, no action, no thing. Death is out of reach of the human imagination; the intentionality of consciousness—its directedness towards objects—does not allow humans to grasp it. Death is no mystery, no issue at all—it does not concern us, as the philosopher Epicurus put it. The real human issue is dying and the terror of it. We might paraphrase Michel de Montaigne's claim that a mission of philosophy is to learn to die, and say that a mission of biology is to teach how to die. Biology might complement its research into apoptosis—programmed cell death—with efforts to discover or invent a 'mental apoptosis'. A hundred years ago, the microbiologist Ilya Mechnikov envisaged, in his book Essais Optimistes, that a long and gratifying personal life might eventually reach a natural state of satiation and evoke a specific instinct to withdraw, similar to the urge to sleep. Biochemistry could assist the process of dying by nullifying fear, pain and distress.

In these days of advanced healthcare and technologies that can artificially extend the human lifespan, dying with dignity should become the principal concern of all humanists, not only of scientists. It would therefore be commendable if Western culture could abandon the fallacy of immortality and eternity, whilst Oriental and African cultures ought to be welcomed to the discussion about the 'last things'. Dying with dignity will become the ultimate achievement of a dignified life.

12.
Meneghini R (2012) EMBO reports 13(2): 106–108
Emerging countries have established national scientific journals as an alternative publication route for their researchers. However, these journals eventually need to catch up to international standards.

Since the first scientific journal—The Philosophical Transactions of the Royal Society—was founded in 1665, the number of journals dedicated to publishing academic research has exploded. The Thomson Reuters Web of Knowledge database alone—which represents far less than the total number of academic journals—includes more than 11,000 journals from non-profit, society and commercial publishers, published in numerous languages and with content ranging from the natural sciences to the social sciences and humanities. Notwithstanding the sheer scale and diversity of academic publishing, however, there is a difference between the publishing enterprise in developed countries and emerging countries in terms of the commercial rationale behind the journals.

Although all academic journals seek to serve their readership by publishing the highest quality and most interesting advances, a growing trend in the twentieth century has seen publishers in developed countries viewing academic publishing as a way of generating profit; the desire of journal editors to publish the best and most interesting science thereby serves the commercial interest of publishers, who want people to buy the publication.

In emerging countries, however, there are few commercial reasons to publish a journal.
Instead, 'national' or even 'local' journals are published and supported because they report important, practical information that would be declined by international journals, either because the topic is of only local or marginal interest, or because the research does not meet the high standards for publication at an international level. Consequently, most 'national' journals are not able to finance themselves and depend on public funding. In Brazil, for instance, national journals account for one-third of all scientific articles published from Brazil and are mostly funded by the government. Other emerging countries that invest in research—notably China, India and Russia—also have a sizable number of national journals, most of which are published in their native language.

There is little competition between developed countries to publish the most or the best scientific journals. There is clear competition between the top-flight journals—Nature and Science, for example—but this competition is academically and/or commercially, rather than nationally, based. In fact, countries of similar scientific calibre in terms of the research they generate differ greatly in the number of journals published within their borders. According to the Thomson Reuters database, for example, the Netherlands, Switzerland and Sweden published 847, 202 and 30 scientific journals, respectively, in 2010—the Netherlands has been a traditional haven for publishers.
However, the number of articles published by researchers in these countries in journals indexed by Thomson Reuters—a rough measure of scientific productivity—does not differ significantly.

Scientists who edit, or serve on the editorial boards of, high-quality international journals have a major responsibility because they guide the direction and set the standards of scientific research. In deciding what to publish, they define the quality of research, promote emerging research areas and set the criteria by which research is judged to be new and exciting; they are the gatekeepers of science. The distribution of these scientists also reflects the division between developed and emerging countries in scientific publishing. Using the Netherlands, Switzerland and Sweden as examples, they respectively contributed 235, 256 and 160 scientists to the editorial teams or boards of 220 selected high-impact journals in 2005 (Braun & Diospatonyi, 2005). These numbers are comparable with the scientific production of these countries in terms of publications. On the other hand, Brazil, South Korea and Russia—countries as scientifically productive in terms of total number of articles as the Netherlands, Switzerland and Sweden—contributed only 28, 29 and 55 'gatekeepers', respectively. A principal reason for this difference is, of course, the more variable quality of the science produced in emerging countries, but it is nevertheless clear that their scientists are under-represented on the teams that define the course and standards of scientific research.

To overcome the perceived dominance of international journals, and to address the significant barriers to getting published that their scientists face, some emerging countries have increased the number of national journals (Sumathipala et al, 2004).
Such barriers have been well documented and include poor written English and the generally lower or more variable quality of the science produced in emerging countries. However, although English—the lingua franca of modern science (Meneghini & Packer, 2007)—is not as great a barrier as some would claim, there is some evidence of a conscious or subconscious bias among reviewers and editors in judging articles from emerging countries (Meneghini et al, 2008; Sumathipala et al, 2004).

A third pressure has also forced some emerging countries to introduce more national journals in which to publish academic research from within their borders: greater scientific output. During the past two or three decades, several of these countries—notably China, India and Brazil, among others—have made huge investments in research, which has enormously increased their scientific productivity. Initially, the new national journals aspired to adopt the rigid rules of peer review and the quality standards of international journals, but this approach did not produce satisfactory results in terms of the quality of papers published. On the one hand, it is hard for national journals to secure the expertise of scientists competent to review their submissions; on the other, the reviewers who do agree tend to be more lenient, ostensibly believing that peer review as rigorous as that of international journals would run counter to the purpose of making scientific results publicly available, at least at the national level.

The establishment of national journals has, in effect, created two parallel communication streams for scientists in emerging countries: publication in international journals—the selective route—and publication in national journals—the regional route. On the basis of their perceived chances of being accepted by an international journal, authors can choose the route that gives them the best opportunity to make their results public.
Economic conditions are also important, as the resources to produce national journals come from government, so national journals can face budget cuts in times of austerity. In the worst case, this can lead to the demise of national journals, to the disadvantage of authors who have built their careers by publishing in them.

There is some anecdotal evidence that authors who often or almost exclusively publish in international journals hold national journals in some contempt—they regard them as a way of avoiding the effort and hassle of publishing internationally. Moreover, although the way in which governments regard and support the divergent routes varies between countries, in general, scientists who endure and succeed through the selective route often receive more prestige and have more influence in shaping national science policies. Conversely, authors who choose the regional publication route regard their efforts as an important contribution to the dissemination of information generated by the national scientific community, which might otherwise remain locked away—by either language or access policies. Either way, it is worth noting that publication is not the end point of a scientific discovery: the results should feed into the pool of knowledge and might inspire other researchers to pursue new avenues or devise new experiments. Hence, to not publish, for any reason, is to break the process of science and potentially inhibit progress.

The choice between pursuing publication in regional or international journals also has direct consequences for the research being published. The selective, international route ensures greater visibility, especially if the paper is published in a high-impact journal.
The regional route also makes the results and experiments public, but it fails to attract international visibility, in particular if the research is not published in English.

It seems that, for the foreseeable future, this scenario will not change. If it is to change, however, then the revolution must be driven by the national journals. In fact, a change that raises the quality and value of national journals would be prudent, because it would give scientists from emerging countries the opportunity to sit on the editorial boards of, or referee for, the resulting high-quality national journals. In this way, the importance of national journals would be enhanced and scientists from emerging countries would invest effort and gain experience in serving as editors or referees.

The regional route has various weaknesses, however, the most important of which is the peer-review process. Peer review at national journals is simply of a lower standard, owing to several factors that include a lack of training in objective research assessment, greater leniency and tolerance of poor-quality science, and an unwillingness by top researchers to participate because they prefer to give their time to the selective journals. This creates an awkward situation: on the one hand, an inability to properly assess submissions; on the other, a lack of motivation to do so.

Notwithstanding these difficulties, most editors and authors of national journals hope that their publications will ultimately be recognized as visible, reliable sources of information, and not only as instruments to communicate national research to the public. In other words, their aspiration is not only to publish good science—albeit of lesser interest to international journals—but also to attain the second or third quartiles of impact factors in their areas.
These journals should eventually be good enough to compete with international ones, mitigating their national character and attracting authors from other countries.

The key is to raise the assessment procedures at national journals to international standards, and to professionalize their operations; both goals are interdependent. The vast majority of national journals are published by societies and research organizations, and their editorial structures are often limited to local researchers. As a result, they are shoestring operations that lack proper administrative support and international input, and can come across as amateurish. SciELO (the Scientific Electronic Library Online), which indexes national journals and measures their quality, can require certain changes when it indexes a journal, including the requirement to internationalize the editorial body or board.

In terms of improving this status quo, a range of other changes could be introduced. First, more decision-making authority should be given to publishers to decide how to structure the editorial body. Ad hoc assistants—professional scientists who can lend expertise at the editorial level—should be selected by the editors, who should also assess journal performance. Moreover, publishers should try to attract international scientists with editorial experience to join a core group of two or three chief or senior editors. Their English skills, their experience in their research fields and their influence in the community would catalyse a rapid improvement in the journals and their quality. In other words, experienced international editors should be brought in to strengthen national journals, raise their quality and educate local editors, with the long-term objective of joining the international scientific editing community.
This would eventually merge the national and selective routes of publishing into a single international route of scientific communication.

Of course, there is a long way to go. The problem is that many societies and organizations do not have sufficient resources—money or experience—to attract international scientists as editors. However, new publishing and financial models could provide incentives to attract this kind of expertise. Ultimately, relying on government money alone is neither a reliable nor a sufficient source of income to make national journals successful. One way of enhancing revenue streams might be to switch to an open-access model that charges author fees, which could be reinvested to improve the journals. In Brazil, for instance, almost all journals have adopted the open-access model (Hedlund et al, 2004). Author fees—around US$1,250—if adopted, would provide financial support for increasing the quality and performance of the journals. Moreover, increased competition between journals at a national level should create a more dynamic situation, raising the general quality of the science they publish. This would also feed back to the scientific community and help to raise the general standards of science in emerging countries.

13.
Crop shortages
A lack of breeders to apply the knowledge from plant science is jeopardizing public breeding programmes and the training of future plant scientists. In the midst of an economic downturn, many college and university students in the USA face an uncertain future. There is one crop of graduates, though, who need not worry about unemployment: plant breeders. “Our students start with six-digit salaries once they leave and they have three or four offers. We have people coming to molecular biology and they can't find jobs. People coming to plant breeding, they have as many jobs as they want,” said Edward Buckler, a geneticist with the US Department of Agriculture's Agricultural Research Service Institute for Genomic Diversity at Cornell University (Ithaca, NY, USA). The secret behind the success of qualified breeders on the job market is that they can join ‘Big Ag'—big agriculture—that is, the major seed companies. Roger Boerma, coordinator of academic research for the Center for Applied Genetic Technologies at the University of Georgia (Athens, GA, USA), said that most of his graduate and postdoctoral students find jobs at companies such as Pioneer, Monsanto and Syngenta, rather than working in the orchards and fields of academic research. According to Todd Wehner, a professor and cucurbit breeder at the Department of Horticultural Science, North Carolina State University (Raleigh, NC, USA), the best-paying jobs—US$100,000 plus good benefits and research conditions—are at seed companies that deal with the main crops (Guner & Wehner, 2003). By contrast, university positions on the tenure track typically start at around US$75,000. As a result, Wehner said, public crop breeding in the USA has begun to disappear. “To be clear, there is no shortage of plant breeders,” he said.
“There is a shortage of plant breeders in the public sector.” The lure of Big Ag depletes universities and research institutes of plant breeders—who, after all, are the ones who create new plant varieties for agriculture—and jeopardizes the training of future generations of plant scientists and breeders. Moreover, there is an increasing demand for breeders to address the challenge of creating environmentally sustainable ways to grow more food for an increasing human population on Earth. At the same time, basic plant research is making rapid progress. The genomes of most of the main crop plants and many vegetables have been sequenced, which has enabled researchers to better understand the molecular details of how plants fend off pests and pathogens, or withstand drought and flooding. This research has also generated molecular markers—short regions of DNA that are linked to, for example, better resistance to fungi or other pathogens. So-called marker-assisted breeding based on this information can now create new plant varieties more effectively than would be possible with the classical strategy of crossing, selection and backcrossing. However, applying this genomic knowledge requires breeders and plant scientists with a better understanding of each other's expertise. As David Baulcombe, professor of botany at the University of Cambridge, UK, commented, “I think the important gap is actually in making sure that the fundamental scientists working on genomics understand breeding, and equally that those people doing breeding understand the potential of genomics. This is part of the translational gap. There's incomplete understanding on both sides.” In the genomic age, plant breeding has an image problem: like other hands-on agricultural work, it is dirty and unglamorous.
“A research project in agriculture in the twenty-first century resembles agriculture for farmers in the eighteenth century,” Wehner said. “Harvesting in the fields in the summer might be considered one of the worst jobs, but not to me. I'm harvesting cucumbers just like everybody else. I don't mind working at 105 degrees, with 95% humidity and insects biting my ankles. I actually like that. I like that better than office work.” For most students, however, genomics is the more appealing option as a cutting-edge and glamorous research field. “The exciting photographs that you always see are people holding up glass test tubes and working in front of big computer screens,” Wehner explained. In addition, Wehner said that federal and state governments have given greater priority and funding to molecular genetics than to plant breeding. “The reason we've gone away from plant breeding of course is that faculty can get competitive grants for large amounts of money to do things that are more in the area of molecular genetics,” he explained. “Plant breeders have switched over to molecular genetics because they can get money there and they can't get money in plant breeding.” “The frontiers of science shifted from agriculture to genetics, especially the genetics of corn, wheat and rice,” agreed Richard Flavell, former Director of the John Innes Centre (Norwich, UK) and now Chief Scientific Officer of Ceres (Thousand Oaks, CA, USA). “As university departments have chased their money, chased the bright students, they have [focused on] programmes that pull in research dollars on the frontiers, and plant breeding has been left behind as something of a Cinderella subject.” In a sense, public plant breeding has become a victim of its own success.
Wehner explained that over the past century, the protection of intellectual property has created a profitable market for private corporations to the detriment of public programmes. “It started out where they could protect seed-propagated crops,” he said. “The companies began to hire plant breeders and develop their own varieties. And that started the whole agricultural business, which is now huge.” As a result, Wehner said, the private sector can now outmanoeuvre public breeders at will. “[Seed companies] have huge teams that can go much faster than I can go. They have winter nurseries and big greenhouses and lots of pathologists and molecular geneticists and they have large databases and seed technologists and sales reps and catalogue artists and all those things. They can do much faster cucumber breeding than I can. They can beat me in any area that they choose to focus on.” He said that seed corporations turn to public breeders only when they are looking for rare seeds obtained on expeditions around the world, or for specialist knowledge. These crops, and the breeders and other scientists who work on them, receive far less financial support from government than do the more profitable crops, such as corn and soybean. In effect, these crops are in an analogous position to orphan drugs, which receive little attention because the patients who need them represent a small economic market. The dwindling support for public breeding programmes is also a result of larger political developments. Since the 1980s, when British Prime Minister Margaret Thatcher and US President Ronald Reagan championed the private sector in all things, governments have consistently withdrawn support for public research programmes wherever the private sector can profit. “Plant breeding programmes are expensive. My programme costs about US$500,000 a year to run for my crops, watermelon and cucumber.
Universities don't want to spend that money if they don't have to, especially if it's already being done by the private sector,” Wehner said. “Over the last 30 years or so, food supplies and food security have fallen off the agenda of policymakers,” Baulcombe explained. “Applied research in academic institutions is disappearing, and so the opportunities for linking the achievements of basic research with applications, at least in the public sector, are disappearing. You've got these two areas of the work going in opposite directions.” There is another problem for plant breeding in the publish-or-perish world of academia. According to Ian Graham, Director of the Centre for Novel Agricultural Products at the University of York in the UK, potential academics in the plant sciences are turned off by plant breeding as a discipline because it is difficult to publish the research in high-impact journals. Graham, who is funded by the Bill & Melinda Gates Foundation to breed new varieties of Artemisia—the plant that produces the anti-malarial compound artemisinin—said this could change. “Now with the new [genomic] technologies, the whole subject of plant breeding has come back into the limelight. We can start thinking seriously about not just the conventional crops […] but all the marginal crops as well that we can really start employing these technologies on and doing exciting science and linking phenotypes to genes and phenotypes to the underlying biology,” he said. “It takes us back again closer to the science.
That will bring more people into plant breeding.” Buckler, who specializes in functional genomic approaches to dissect complex traits in maize, wheat and Arabidopsis, said that public breeding still moves at a slower pace. “The seed companies are trying to figure out how to move genomics from gene discovery all the way to the breeding side. And it's moving forward,” he said. “There have been some real intellectual questions that people are trying to overcome as to how fast to integrate genomics. I think it's starting to occur also with a lot of the public breeders. A lot of it has been that the cost of genotyping, especially for specialty crops, was too high to develop marker systems that would really accelerate breeding.” Things might be about to change on the cost side as well. Buckler said that decreasing costs for sequencing and genotyping will give public breeding a boost. Using today's genomic tools, researchers and plant breeders could match the achievements of the last century of maize breeding within three years, and comparable gains could be made in specialty crops, the forte of public breeding. “Right now, most of the simulations suggest that we can accelerate it about threefold,” Buckler said. “Maybe as our knowledge increases, maybe we can approach a 15-fold rate increase.” Indeed, the increasing knowledge from basic research could well contribute to significant advances in the coming years. “We've messed around with genes in a rather blind, sort of non-predictive process,” said Scott Jackson, a plant genomics expert at Purdue University (West Lafayette, IN, USA), who headed the team that decoded the soybean genome (Schmutz et al, 2010). “Having a full genome sequence, having all the genes underlying all the traits in whatever plant organism you're looking at, makes it less blind.
You can determine which genes affect the trait and it has the potential to make it a more predictive process where you can take specific genes in combinations and you can predict what the outcome might be. I think that's where the real revolution in plant breeding is going to come.” Nevertheless, the main problem that could hold back this revolution is a lack of trained people in academia and the private sector. Ted Crosbie, Head of Plant Breeding at Monsanto (St Louis, MO, USA), commented at the national Plant Breeding Coordinating Committee meeting in 2008 that “[w]e, in the plant breeding industry, face a number of challenges. More plant breeders are reaching retirement age at a time when the need for plant breeders has never been greater […] We need to renew our nation's capacity for plant breeding.” Dry bean breeder James Kelly, a professor of crop and soil sciences at Michigan State University (East Lansing, MI, USA), said that while there has been a disconnect between public breeders and genomics researchers, new federal grants are designed to increase collaboration. In the meantime, developing countries such as India and China have been filling the gap. “China is putting a huge amount of effort into agriculture. They actually know the importance of food. They have plant breeders all over the place,” Wehner said. “The US is starting to fall behind. And now, agricultural companies are looking around wondering—where are we going to get our plant breeders?” To address the problem, major agriculture companies have begun to fund fellowships to train new plant breeders. Thus far, Buckler said, these efforts have had only a small impact; he noted that 500 new PhDs a year are needed in maize breeding alone. “It's not uncommon for the big companies like Monsanto, Pioneer and Syngenta to spend money on training, on endowing chairs at universities,” Flavell said.
“It's good PR, but they're serious about the need for breeders.” The US government has also taken some measures to alleviate the problem. Congress established the US National Institute of Food and Agriculture (Washington, DC, USA) under the auspices of the US Department of Agriculture to make more efficient use of research money, advance the application of plant science and attract new students to plant breeding (see the interview with Roger Beachy in this issue, pp 504–507). Another approach is to use distance education to train breeders—such as technicians who want to advance their careers—in certificate programmes rather than master's or doctoral programmes. “If [breeding] is not done in universities in the public sector, where is it done?” Flavell asked about the future of public breeding. “I can wax lyrical and perhaps be perceived as being over the top, but if we're going to manage this planet on getting more food out of less land, this has to be almost one of the highest things that has got to be taken care of by government.” Wehner added, “The public in the developed world thinks food magically appears in grocery stores. There is no civilization without agriculture. Without plant breeders to work on improving our crops, civilization is at risk.”

14.
Lessons from science studies for the ongoing debate about ‘big' versus ‘little' research projects. During the past six decades, the importance of scientific research to the developed world and the daily lives of its citizens has led many industrialized countries to rebrand themselves as ‘knowledge-based economies'. The increasing role of science as a main driver of innovation and economic growth has also changed the nature of research itself. Starting with the physical sciences, recent decades have seen academic research increasingly conducted in the form of large, expensive and collaborative ‘big science' projects that often involve multidisciplinary, multinational teams of scientists, engineers and other experts. Although laboratory biology was late to join the big science trend, there has nevertheless been a remarkable increase in the number, scope and complexity of research collaborations and projects involving biologists over the past two decades (Parker et al, 2010). The Human Genome Project (HGP) is arguably the best known of these and attracted serious scientific, public and government attention to ‘big biology'. Initial exchanges were polarized and often polemic, as proponents of the HGP applauded the advent of big biology and argued that it would produce results unattainable through other means (Hood, 1990). Critics highlighted the negative consequences of massive-scale research, including the industrialization, bureaucratization and politicization of research (Rechsteiner, 1990).
They also suggested that it was not suited to generating knowledge at all; Nobel laureate Sydney Brenner joked that sequencing was so boring it should be done by prisoners: “the more heinous the crime, the bigger the chromosome they would have to decipher” (Roberts, 2001).A recent Opinion in EMBO reports summarized the arguments against “the creeping hegemony” of ‘big science'' over ‘little science'' in biomedical research. First, many large research projects are of questionable scientific and practical value. Second, big science transfers the control of research topics and goals to bureaucrats, when decisions about research should be primarily driven by the scientific community (Petsko, 2009). Gregory Petsko makes a valid point in his Opinion about wasteful research projects and raises the important question of how research goals should be set and by whom. Here, we contextualize Petsko''s arguments by drawing on the history and sociology of science to expound the drawbacks and benefits of big science. We then advance an alternative to the current antipodes of ‘big'' and ‘little'' biology, which offers some of the benefits and avoids some of the adverse consequences.Big science is not a recent development. Among the first large, collaborative research projects were the Manhattan Project to develop the atomic bomb, and efforts to decipher German codes during the Second World War. The concept itself was put forward in 1961 by physicist Alvin Weinberg, and further developed by historian of science Derek De Solla Price in his pioneering book, Little Science, Big Science. “The large-scale character of modern science, new and shining and all powerful, is so apparent that the happy term ‘Big Science'' has been coined to describe it” (De Solla Price, 1963). Weinberg noted that science had become ‘big'' in two ways. 
First, through the development of elaborate research instrumentation, the use of which requires large research teams, and second, through the explosive growth of scientific research in general. More recently, big science has come to refer to a diverse but strongly related set of changes in the organization of scientific research. This includes expensive equipment and large research teams, but also the increasing industrialization of research activities, the escalating frequency of interdisciplinary and international collaborations, and the increasing manpower needed to achieve research goals (Galison & Hevly, 1992). Many areas of biological research have shifted in these directions in recent years and have radically altered the methods by which biologists generate scientific knowledge. Understanding the implications of this change begins with an appreciation of the history of collaborations in the life sciences—biology has long been a collaborative effort. Natural scientists accompanied the great explorers in the grand alliance between science and exploration during the sixteenth and seventeenth centuries (Capshew & Rader, 1992), which not only served to map uncharted territories, but also contributed enormously to knowledge of the fauna and flora discovered. These early expeditions gradually evolved into coordinated, multidisciplinary research programmes, beginning with the International Polar Years (1882–1883; 1932–1933), which concentrated international research efforts at the North and South Poles.
The Polar Years became exemplars of large-scale life science collaboration, begetting the International Geophysical Year (1957–1958) and the International Biological Programme (1968–1974). Despite this long history of collaboration, laboratory biology remained ‘small-scale' until the rising prominence of molecular biology changed the research landscape. During the late 1950s and early 1960s, many research organizations encouraged international collaboration in the life sciences, spurring the creation of, among other things, the European Molecular Biology Organization (1964) and the European Molecular Biology Laboratory (1974). In addition, international mapping and sequencing projects were developed around model organisms such as Drosophila and Caenorhabditis elegans, and scientists formed research networks, exchanged research materials and information, and divided labour across laboratories. These new ways of working set the stage for the HGP, which is widely acknowledged as the cornerstone of the current ‘post-genomics era'. As an editorial on ‘post-genomics cultures' put it in the journal Nature, “Like it or not, big biology is here to stay” (Anon, 2001). Just as big science is not new, neither are concerns about its consequences. As early as 1948, the sociologist Max Weber worried that as equipment was becoming more expensive, scientists were losing autonomy and becoming more dependent on external funding (Weber, 1948). Similarly, although Weinberg and De Solla Price expressed wonder at the scope of the changes they were witnessing, they too offered critical evaluations.
For Weinberg, the potentially negative consequences associated with big science were “administratitis, moneyitis, and journalitis”: the dominance of science administrators over practitioners, the tendency to view funding increases as a panacea for solving scientific problems, and progressively blurry lines between scientific and popular writing in the effort to woo public support for big research projects (Weinberg, 1961). De Solla Price worried that the bureaucracy associated with big science would fail to entice the intellectual mavericks on which science depends (De Solla Price, 1963). These concerns remain valid and have been voiced time and again. As big science represents a major investment of time, money and manpower, it tends to determine and channel research in particular directions that afford certain possibilities and preclude others (Cook & Brown, 1999). In the worst case, this can result in entire scientific communities following false leads, as was the case in the 1940s and 1950s for Soviet agronomy: huge investments were made to demonstrate the superiority of Lamarckian over Mendelian theories of heritability, which held back Russian biology for decades (Soyfer, 1994). Such worst-case scenarios are, however, rare. A more likely consequence is that big science diminishes the diversity of research approaches. For instance, plasma fusion scientists are now under pressure to design projects that are relevant to the large-scale International Thermonuclear Experimental Reactor, despite the potential benefits of a wide array of smaller-scale machines and approaches (Hackett et al, 2004).
Big science projects can also involve coordination challenges, take substantial time to realize success, and be difficult to evaluate (Neal et al, 2008). Another danger of big science is that researchers will lose the intrinsic satisfaction that arises from having personal control over their work. Dissatisfaction could lower research productivity (Babu & Singh, 1998) and might create the concomitant danger of losing talented young researchers to other, more engaging callings. Moreover, the alienation of scientists from their work as a result of big science enterprises can lead to a loss of personal responsibility for research. In turn, this can increase the likelihood of misconduct, as effective social control is eroded and “the satisfactions of science are overshadowed by organizational demands, economic calculations, and career strategies” (Hackett, 1994). Practicing scientists are aware of these risks. Yet they remain engaged in large-scale projects because they must, but also because of the real benefits these projects offer. Importantly, big science projects allow for the coordination and activation of diverse forms of expertise across disciplinary, national and professional boundaries to solve otherwise intractable basic and applied problems. Although calling for international and interdisciplinary collaboration is popular, practicing it is notably less popular and much harder (Weingart, 2000). Big science projects can act as a focal point that allows researchers from diverse backgrounds to cooperate, simultaneously advancing different scientific specialties while forging interstitial connections among them.
Another major benefit of big science is that it facilitates the development of common research standards and metrics, allowing for the rapid development of nascent research frontiers (Fujimura, 1996). Furthermore, the high profile of big science efforts such as the HGP and CERN draws public attention to science, potentially enhancing scientific literacy and the public's willingness to support research. Big science can also ease some of the problems associated with scientific management. In terms of training, graduate students and junior researchers involved in big science projects can gain additional skills in problem-solving, communication and team working (Court & Morris, 1994). The bureaucratic structure and well-defined roles of big science projects also make leadership transitions and researcher attrition easier to manage compared with the informal, refractory organization of most small research projects. Big science projects also provide a visible platform for resource acquisition and the recruitment of new scientific talent. Moreover, through their sheer size, diversity and complexity, they can increase the frequency of serendipitous social interactions and scientific discoveries (Hackett et al, 2008). Finally, large-scale research projects can influence scientific and public policy. Big science creates organizational structures in which many scientists share responsibility for, and expectations of, a scientific problem (Van Lente, 1993). This shared ownership, and these shared futures, help coordinate communication and enable researchers to present a united front when advancing the potential benefits of their projects to funding bodies. Given these benefits and pitfalls of big science, how might molecular biology best proceed?
Petsko''s response is that, “[s]cientific priorities must, for the most part, be set by the free exchange of ideas in the scientific literature, at meetings and in review panels. They must be set from the bottom up, from the community of scientists, not by the people who control the purse strings.” It is certainly the case, as Petsko also acknowledges, that science has benefited from a combination of generous public support and professional autonomy. However, we are less sanguine about his belief that the scientific community alone has the capacity to ascertain the practical value of particular lines of inquiry, determine the most appropriate scale of research, and bring them to fruition. In fact, current mismatches between the production of scientific knowledge and the information needs of public policy-makers strongly suggest that the opposite is true (Sarewitz & Pielke, 2007).Instead, we maintain that these types of decision should be determined through collective decision-making that involves researchers, governmental funding agencies, science policy experts and the public. In fact, the highly successful HGP involved such collaborations (Lambright, 2002). Taking into account the opinions and attitudes of these stakeholders better links knowledge production to the public good (Cash et al, 2003)—a major justification for supporting big biology. We do agree with Petsko, however, that large-scale projects can develop pathological characteristics, and that all programmes should therefore undergo regular assessments to determine their continuing worth.Rather than arguing for or against big science, molecular biology would best benefit from strategic investments in a diverse portfolio of big, little and ‘mezzo'' research projects. Their size, duration and organizational structure should be determined by the research question, subject matter and intended goals (Westfall, 2003). 
Parties involved in making these decisions should, in turn, aim at striking a profitable balance between differently sized research projects to garner the benefits of each, and should allow practitioners the autonomy to choose among them. This will require new, innovative methods for supporting and coordinating research. An important first step is ensuring that funding is made available for all kinds of research at a range of scales. For this to happen, the current funding model needs to be modified. The practice of allocating separate funds for individual investigator-driven and collective research projects is a step in the right direction, but it does not discriminate between projects of different sizes at a sufficiently fine resolution. Instead, multiple funding pools should be made available for projects of different sizes and scales, allowing for greater accuracy in project planning, funding and evaluation. Second, science policy should consciously facilitate the ‘scaling up', ‘scaling down' and concatenation of research projects when needed. For instance, special funds might be established for supporting small-scale but potentially transformative research with the capacity to be scaled up in the future. Alternatively, small-scale satellite research projects that are more nimble, exploratory and risky could complement big science initiatives or be generated by them. This is also in line with Petsko's statement that “the best kind of big science is the kind that supports and generates lots of good little science.” Another potentially fruitful strategy would be to fund independent, small-scale research projects to work on co-relevant research questions, with the later objective of consolidating them into a single project in a kind of building-block assembly.
Using these and other mechanisms to organize research at different scales could help to ameliorate some of the problems associated with big science, while also accruing its most important benefits. Within the life sciences, the field of ecology perhaps best exemplifies this strategy. Although it encompasses many small-scale laboratory and field studies, ecologists now collaborate in a variety of novel organizations that blend elements of big, little and mezzo science and that are designed to catalyse different forms of research. For example, the US National Center for Ecological Analysis and Synthesis brings together researchers and data from many smaller projects to synthesize their findings. The Long Term Ecological Research Network consists of dozens of mezzo-scale collaborations focused on specific sites, but also leverages big science through cross-site collaborations. While investments are made in classical big science projects, such as the National Ecological Observatory Network, no one project or approach has dominated—nor should it. In these ways, ecologists have been able to reap the benefits of big science while maintaining diverse research approaches and individual autonomy, and still enjoying the intrinsic satisfaction associated with scientific work. Big biology is here to stay, and it is neither a curse nor a blessing. It is up to scientists and policy-makers to discern how to benefit from the advantages that ‘bigness' has to offer, while avoiding the pitfalls inherent in doing so. The challenge confronting molecular biology in the coming years is to decide which kinds of research projects are best suited to getting the job done.
Molecular biology itself arose, in part, from the migration of physicists to biology; as physics research projects and collaborations grew and became more dependent on expensive equipment, appreciating the saliency of one's own work became increasingly difficult, which led some to seek refuge in the comparatively little science of biology (Dev, 1990). The current situation, which Petsko criticizes in his Opinion article, is thus the result of an organizational and intellectual cycle that began more than six decades ago. It would certainly behoove molecular biologists to heed his warnings and consider the best paths forward.

Niki Vermeulen, John N. Parker, Bart Penders

15.
16.
Hunter P 《EMBO reports》2010,11(12):924-926
The global response to the credit crunch has varied from belt-tightening to spending sprees. Philip Hunter investigates how various countries have reacted to the financial crisis in terms of supporting scientific research.

The overall state of biomedical research in the wake of the global financial crisis remains unclear amid growing concern that competition for science funding is compromising the pursuit of research. Such concerns pre-date the credit crunch, but there is a feeling that an increasing amount of time and energy is being wasted in the ongoing scramble for grants, in the face of mounting pressure from funding agencies demanding value for money. Another problem is balancing funding between different fields; while the biomedical sciences have generally fared well, they are increasingly dependent on basic research in physics and chemistry, which is in greater jeopardy. This has led to calls for rebalancing funding in order to ensure the long-term viability of all fields in an increasingly multidisciplinary and collaborative research world.

For countries that are cutting funding—such as Spain, Italy and the UK—the immediate priority is to preserve the fundamental research base and avoid a significant drain of expertise, either to rival countries or away from science altogether. This has highlighted the plight of postdoctoral researchers, who have traditionally been the first to suffer from funding cuts, partly because losing them has little immediate impact on a country's scientific competitiveness. Postdocs have been the first to go whenever budgets have been cut, according to Richard Frankel, a physicist at California Polytechnic State University in San Luis Obispo, who investigates magnetotaxis in bacteria.
“In the short term there will be little effect but the long-term effects can be devastating,” he said.

According to Peter Stadler, head of a bioinformatics group at the University of Leipzig in Germany, such cuts tend to cause the long-term erosion of a country's science skills base. “Short-term cuts in science funding translate totally into a brain drain, since they predominantly affect young researchers who are paid from the soft money that is drying up first,” said Stadler. “They either leave science, an irreversible step, or move abroad but do not come back later, because the medium-term effect of cuts is a reduction in career opportunities and fiercer competition giving those already in the system a big advantage.”

Even when young researchers are not directly affected, the prevailing culture of short-term funding—which requires ongoing grant applications—can be disruptive, according to Xavier Salvatella, principal investigator in the Laboratory of Molecular Biophysics at the Institute for Research in Biomedicine in Barcelona, Spain. “I do not think the situation is dramatic but too much time is indeed spent writing proposals,” he commented. “Because success rates are decreasing, the time devoted to raising funds to run the lab necessarily needs to increase.”

At the University of Adelaide in Australia, Andrew Somogyi, professor of pharmacology, thinks that the situation is serious: “[M]y postdocs would spend about half their time applying for grants.” Somogyi pointed out that the success rate has been declining in Australia, as it has in some other countries.
“For ARC [Australian Research Council] the success rate is now close to 20%, which means many excellent projects don't get funding because the assessment is now so fine cut,” he said.

Similar developments have taken place in the USA, both at the National Institutes of Health (NIH)—which provides US$16 billion of funding per year—and at the American Cancer Society (ACS), the country's largest private non-profit funder of cancer research, with a much smaller pot of US$120 million per year. The NIH funded 21% of research proposals submitted to it in 2009, compared with 32% a decade earlier, while the ACS approves only 15% of grant applications, down several percentage points over the past few years.

While the NIH is prevented by federal law from allowing observers into its grant review meetings, the ACS did allow a reporter from Nature to attend one of its sessions on the condition that the names of referees and the applications themselves were not revealed (Powell, 2010). The general finding was that while the review process works well when around 30% of proposals are successful, it tends to break down as the success rate drops: more arbitrary decisions are made and the risk of strong pitches being rejected increases. This can also discourage the best people from serving as reviewers, because the process becomes more tiring and time-consuming.

In some countries, funding shortfalls are also leading to the loss of permanent jobs, for example in the UK, where finance minister George Osborne announced on October 20 that the science budget would be frozen at £4.6 billion, rather than cut as had been expected. Even so, combined with the cut in funding for universities that was announced on the same day, this raises the prospect of reductions in academic staff numbers, which could affect research projects.
This follows several years of increasing funding for UK science. Such uncertainty is damaging, according to Cornelius Gross, deputy head of the mouse biology unit at the European Molecular Biology Laboratory in Monterotondo, Italy. “Large fluctuations in funding have been shown to cause damage beyond their direct magnitude, as can be seen in the US where the Clinton boom was inevitably followed by a slowdown that led to rapid and extreme tightening of budgets,” he said.

Some countries are aware of these dangers and have acted to protect budgets and, in some cases, even increase spending. A report by the OECD argued that countries and companies that boosted research and development spending during the ‘creative destruction’ of an economic downturn tended to gain ground on their competitors and emerge from the crisis in a relatively stronger position (OECD, 2009). This was part of the rationale of the US stimulus package, which was intended to provide an immediate lift to the economy and has been followed by a slight increase in funding. The NIH's budget is set to increase by $1 billion, or 3%, from 2010 to 2011, reaching just over $32 billion. This looks like a real-terms increase, since inflation in the USA is now between 1 and 2%. However, there are fears that budgets will soon be cut; even now, the small increase at the federal level is being offset by cuts in state support, according to Mike Seibert, research fellow at the US Department of Energy's National Renewable Energy Laboratory. “The stimulus funds are disappearing in the US, and the overall budget for science may be facing a correction at the national level as economic, budget, and national debt issues are addressed,” he said.
“The states in most cases are suffering their own budget crises and will be cutting back on anything that is not nailed down.”

In Germany, the overall funding situation is complicated by a split between the federal government and the 16 state governments, each of which has its own budget for science. In contrast to many other countries, though, both federal and state governments have responded boldly to the credit crisis by increasing the total budget for the DFG (Deutsche Forschungsgemeinschaft)—Germany's largest research funding agency—to €2.3 billion in 2011. Moreover, total funding for research and education from the BMBF (Federal Ministry for Education and Research) is expected to increase by another 7%, from €10.9 billion in 2010 to €11.64 billion, although the overall federal budget is set to shrink by 3.8% under Germany's austerity measures (Anon, 2010). There have also been increases in funding from non-government sources, such as the Fraunhofer Society, Europe's largest application-oriented research organization, which has an annual budget of €1.6 billion.

The German line has been strongly applauded by the European Union, which since 2007 has channelled its funding for cutting-edge research through the European Research Council (ERC).
The ERC's current budget of €7.5 billion, which runs until 2013, was set in 2007, and negotiations for the next period have not yet begun, but the ERC's executive agency director Jack Metthey has indicated that it will be increased: “The Commission will firmly sustain in the negotiations the view that research and innovation, central to the Europe 2020 Strategy agreed by the Member States, should be a top budgetary priority.” Metthey also implied that governments cutting funding, as the UK had been planning to do, were making a false economy that would gain only in the short term. “Situations vary at the national level but the European Commission believes that governments should maintain and even increase research and innovation investments during difficult times, because these are pro-growth, anti-crisis investments,” he said.

Many other countries have to cope with flat or declining science budgets; some are therefore exploring ways in which to do more with less. In Japan, for instance, money has been concentrated on larger projects and fewer scientists, with the effect of intensifying the grant application process. Since 2002, the total Japanese government budget for science and technology has remained flat at around ¥3,500 billion—or €27 billion at current exchange rates—with a 1% annual decline in university support but increased funding for projects considered to be of high value to the economy. This culminated in March 2010 with the launch of the ¥100 billion (€880 million) programme for World Leading Innovative Research and Development on Science and Technology.

But such attempts to make funding more competitive or to focus it on specific areas could have unintended side effects on innovation and risk-taking. One side effect can be favouring scientists who may be less creative but are good at attracting grants, according to Roger Butlin, evolutionary biologist at the University of Sheffield in the UK.
“Some productive staff are being targeted because they do not bring in grants, so money is taking precedence over output,” said Butlin. “This is very dangerous if it results in the loss of good theoreticians or data specialists, especially as the latter will be a critical group in the coming years.”

There have been attempts to provide funding for young scientists based entirely on merit, such as the ERC ‘Starting Grant’ for top young researchers, whose budget was increased by 25% to €661 million for 2011. Although they are welcome, such schemes could also backfire unless they are complemented by measures to support the scientists after these early-career grants expire, according to Gross. “There are moves to introduce significant funding for young investigators to encourage independence, so-called anti-brain-drain grants,” he said. “These are dangerous if provided without later independent positions for these people and a national merit-based funding agency to support their future work.”

Such schemes might work better if they were incorporated into longer-term funding programmes that provide some security as well as the freedom to expand a project and explore promising side avenues. Butlin cited the Canadian ‘Discovery Grant’ scheme as an example worth adopting elsewhere; it supports ongoing programmes with long-term goals, giving researchers freedom to pursue new lines of investigation, provided that they fit within the overall objective of the project.

To some extent, the system of ‘open calls’—supported by some European funding agencies—has the same objective, although it might not provide long-term funding. The idea is to allow scientists to manoeuvre within a broad objective, rather than confining them to specific lines of research or ‘thematic calls’, which tend to be highly focused.
“The majority of funding should be distributed through open calls, rather than thematic calls,” said Thomas Höfer from the Modeling Research Group at the German Cancer Research Center & BioQuant Center in Heidelberg. “Scientists are usually very energetic when they can pursue their own ideas and less so when the research target is too narrowly prescribed. In my experience as a reviewer at both the national and EU level, open calls are also better at funding high-quality research whereas too narrow thematic calls often result in less coherent proposals.”

Common threads seem to be emerging from the different themes and opinions about funding: budgets should be consistent over time and spread fairly among all disciplines, rather than focused on targeted objectives. They should also be spread across the working lifetime of a scientist, rather than being fired in a scatter-gun approach at young researchers. Finally, policies should put a greater emphasis on long-term support for the best scientists and projects, chosen on merit. Above all, funding policy should reflect the fundamental importance of science to economies, as Seibert concluded: “Cutting science, and education, is the national equivalent of a farmer eating his ‘seed corn’, and will lead to developing nation status within a generation.”

17.
18.
19.
P Hunter 《EMBO reports》2012,13(9):795-797
A shortage of skilled science labour in Europe could hold back research progress. The EU will increase science funding to address the problem, but real long-term measures need to start in schools, not universities.

Scientists have always warned of the doom that a shortage of students and skilled labour in the biomedical sciences could spell for research. In the past, this apocalyptic vision of empty laboratories and unclaimed research grants has seemed improbable, but some national research councils and the European Union (EU) itself now seem to think that we may be on the brink of a genuine science labour crisis in Europe. This possibility, and its potential effects on economic growth, has proven sufficiently convincing for the European Commission (EC) to propose a 45% increase to its seven-year research and development budget—from €55 billion, provided under the Framework Programme (FP7), to €80 billion—for a new strategic programme for research and innovation called Horizon 2020 that will start in 2014.

This bold proposal to drastically increase research funding, which comes at a time when many other budgets are being frozen or cut, was rigorously defended in May 2012 by the EU ministers responsible for science and innovation, against critics who argued that such a massive increase could not be justified given the deepening economic crisis across the EU. So far, the EU seems to be holding to the line that it has to invest more into research if Europe is to compete globally through technological innovation underpinned by scientific research.

Europe is caught in a pincer movement between its principal competitors—the USA and Japan, which are both increasing their research budgets way ahead of inflation—and the emerging economies of China, India, Brazil and Russia, which are quickly closing from behind.
The main argument for the Horizon 2020 funding boost came from a study commissioned by the EU [1], which led the EC to claim that Europe faces an “innovation emergency” because its businesses are falling behind US and Japanese rivals in terms of investment and new patents. As Martin Lange, Policy Officer for Marie Curie Actions—an EU fellowship programme for scientists—pointed out, “China, India and Brazil have started to rapidly catch up with the EU by improving their performance seven per cent, three per cent and one per cent faster than the EU year on year over the last five years.”

According to Lange, Europe's innovation gap equates to a shortage of around 1 million researchers across the EU, including a large number in chemistry and the life sciences. This raises fundamental issues of science recruitment and retention that a budget increase alone cannot address. The situation has also been confused by the economic crisis, which has led to a position in which many graduates are unemployed, and yet there is still an acute shortage of specialist skills in areas vital to research.

This is a particularly serious issue in the UK, where around 2,000 researcher jobs were lost following the closure of pharmaceutical company Pfizer's R&D facility in Kent, announced in February 2011. “The travails of Pfizer have affected the UK recruitment market,” explained Charlie Ball, graduate labour market specialist at the UK's Higher Education Careers Services Unit. The closure has contributed to high unemployment among graduates, particularly chemists, who tend to be employed in pharmaceutical research in the UK. “Even among people with chemistry doctorates, the unemployment rate is higher than the average,” he said.

The issue for chemists, at least in the UK, is not a skills shortage but a skills mismatch. Ball identified analytical chemistry as one area without enough skilled people, despite the availability of chemists with other specialties.
He attributes part of the problem to the pharmaceutical industry's inability to communicate its requirements to universities and graduates, although he concedes that doing so can be challenging. “One issue is that industry is changing so quickly that it is genuinely difficult to say that in three or four years' time we will need people with specific skills,” Ball explained.

Alongside this shortage of analytical skills, the UK Medical Research Council (MRC) has identified a lack of people with practical research knowledge, in particular experience of working with animals, as a major factor holding back fundamental and pre-clinical biomedical research in the country. It has responded by encouraging applications from non-UK and even non-EU candidates for doctoral studentships that it funds, in cases where there is a scarcity of suitable UK applicants.

But the underlying problem common to the whole of Europe is more fundamental, at least according to Bengt Norden, Professor of Physical Chemistry at the University of Gothenburg in Sweden. The issue is not a shortage of intellectual capital, Norden argues, but a growing lack of investment in training chemists, which in turn undermines life sciences research. Like many other physical chemists, Norden has worked mainly in biology, where he has applied his expertise in molecular recognition and function to DNA recombination and membrane translocation mechanisms. He therefore views a particularly acute recruitment and retention crisis in chemistry as a drag on both fundamental and applied research across the life sciences. “The recruitment crisis is severe,” Norden said.
“While a small rill of genuinely devoted ‘young amateur scientists’ may still sustain the recruitment chain, there is a general drain of interest in science in general and chemistry in particular.” He attributes this in part to a sort of ‘chemophobia’, resulting from the association of chemistry with environmental pollution or foul odours, but he also blames ignorant politicians and other public figures for their negative attitude towards chemistry. “A former Swedish Prime Minister, Goran Persson, claimed that ‘his political goal was to make Sweden completely free from chemicals’,” Norden explained by way of example.

Scientists themselves also need to do a better job of countering the negative perceptions of chemistry and science, perhaps by highlighting the contribution that chemistry is already making to clearing up pollution. Chemistry has been crucial to the development of microorganisms that can be used to break down organic pollutants in industrial waste, or to clear up accidental spillages during transport. In fact, chemistry has specifically addressed the two major challenges involved: the risk that genetically engineered microorganisms could threaten the wider environment if they escape, and the problem that the microorganisms themselves can be poisoned if the concentration of pollutants is too high.

A team at the University of Buenos Aires in Argentina has solved both problems by developing a material comprising an alginate bead surrounded by a silica gel [2]. This container houses a fungus that produces enzymes that break up a variety of organic pollutants. The pores of the hydrogel limit the intake of toxic compounds from the polluted surroundings, thus controlling the level of toxicity experienced by the fungus, while the fungus itself is encapsulated inside the unit and cannot escape.
Norden and others believe that if such examples were given more publicity, they would both improve the reputation of chemistry and science in general, and help to enthuse school students at a formative age.

Unfortunately, this is not happening in schools, according to Norden, where the curriculum is failing both to enthuse pupils through practical work and to inform them of the value of chemistry across society: “school chemistry neither stimulates curiosity nor does it promote understanding of what is most important to everybody,” he said. “It should be realized that well-taught chemistry is a necessary tool for dealing with everyday problems, at home or at work, and in the environment, relating to the function of medicines, as well as what is poisonous and what is less noxious. As it is, all chemicals are presented simply as poisons.”

Norden believes that a broader cultural element also tends to explain the particular shortage of analytical skills in chemistry. He believes that young people are more inclined than ever before to weigh up the probable rewards of a chosen profession against the effort involved. “There seems to be a ‘cost–benefit’ aspect that young people apply when choosing an academic career: science, including maths, is too hard in relation to the jobs that eventually are available in research,” he explained. This ‘cost–benefit’ factor might not deter people from studying subjects up to university level, but it can divert them into careers that pay a lot more. Ball believes that there is also an issue of esteem, in that people tend to gravitate towards careers where they feel valued. “Our most able graduates don't see parity in esteem between research and other professions being represented by the salary they are paid,” he explained.
“That is an issue that needs to be resolved, and it is not just about money, but working hard to convince these graduates that there is a worthwhile career in research.”

Lange suggests that it would be much easier to persuade the best graduates to stay in science if they were able to pursue their ideas free from bureaucracy or other constraints. This was a main reason for starting the Marie Curie Actions programme of which Lange is a part, and which will be continued under Horizon 2020 with a new name, Marie Skłodowska-Curie Actions, and an increased budget. “The Marie Curie Actions have been applying a bottom-up principle, allowing researchers to freely choose their topic of research,” Lange explained. “The principle of ‘individual-driven mobility’ that is used in the Individual Fellowships empowers researchers to make their own choices about the scientific topic of their work, as well as their host institutions. […] It is a clear win–win situation for both sides: researchers are more satisfied because they are given the opportunity to take their careers into their own hands, while universities and research organizations value top-class scientists coming from abroad to work at their institutes.”

Lange also noted that although Marie Curie Fellows choose their own research subjects, they tend to pursue topics that are relevant to societal needs because they want to find work afterwards. “More than 50% of the FP7 Marie Curie budget has been dedicated to research that can be directly related to the current societal challenges, such as an ageing population, climate change, energy shortage, food and water supply and health,” he said. “This demonstrates that researchers are acting in a responsible way.
Even though they have the freedom to choose their own research topics, they still address problems that concern society in general.” In addition, Marie Curie Actions also encourages engagement with the public, feeding back into the wider campaign to draw more people into science careers. “Communicating science to the general public will be of importance as well, if we want to attract more young people to science,” Lange said. “Recently, the Marie Curie Actions started encouraging their Fellows to engage in outreach activities. In addition, we have just launched a call for the Marie Curie Prize, where one of the three Prize categories will be ‘Communicating Science’.”

Another important element of the EU's strategy to stimulate innovative cutting-edge research is the European Research Council (ERC). It was the first pan-European funding body for front-line research across the sciences, with a budget of €7.5 billion for the FP7 period of 2007–2013, and has been widely heralded as a success. As a result, the ERC is set to receive an even bigger percentage increase than other departments within Horizon 2020 for the period 2014–2020, with a provisional budget of €13.2 billion.

Leading scientists, such as Nobel laureate Jean-Marie Lehn, from Strasbourg University in France, believe that the ERC has made a substantial contribution to innovative research and, as a result, has boosted the reputation of European science. “The ERC has done a fantastic job which is quite independent of pressures from the outside,” he said. “It is good to hear that taking risks is regarded as important.” Lehn also highlighted the importance of making it clear that there are plenty of opportunities in research beyond those funded, and therefore dictated, by the big pharmaceutical companies. “There is chemistry outside big pharma, and life beyond return on investment,” he said.
Lehn agreed that there must be a blend of blue-sky and goal-oriented research, even if there is an argument over what the blend and the goals should be.

There is growing optimism that Europe's main funding bodies, including the national research councils of individual countries, have not only recognized the recruitment problem, but are taking significant steps to address it. Even so, there is still work to be done to improve the image of science and to engage students through more stimulating teaching. Chemistry in particular would benefit from broader measures to attract young people to science. Ultimately, the success of such initiatives will have much broader effects in the life sciences and drug development.

20.
Rinaldi A 《EMBO reports》2012,13(4):303-307
Scientists and journalists try to engage the public with exciting stories, but who is guilty of overselling research and what are the consequences?

Scientists love to hate the media for distorting science or getting the facts wrong. Even as they do so, they court publicity for their latest findings, which can bring a slew of media attention and public interest. Getting your research into the national press can result in great boons in terms of political and financial support. Conversely, when scientific discoveries turn out to be wrong, or to have been hyped, the negative press can have a damaging effect on careers and, perhaps more importantly, on the image of science itself. Walking the line between ‘selling’ a story and ‘hyping’ it far beyond the evidence is no easy task. Professional science communicators work carefully with scientists and journalists to ensure that the messages from research are translated for the public accurately and appropriately. But when things do go wrong, is it always the fault of journalists, or are scientists and those they employ to communicate sometimes equally to blame?

Hyping in science has existed since the dawn of research itself. When scientists relied on the money of wealthy benefactors with little expertise to fund their research, the temptation to claim that they could turn lead into gold, or that they could discover the secret of eternal life, must have been huge. In the modern era, hyping of research tends to make less exuberant claims, but it is no less damaging and no less deceitful, even if sometimes unintentionally so. A few recent cases have brought this problem to the surface again.

The most frenzied of these was the report in Science last year that a newly isolated bacterial strain could replace phosphate with arsenate in cellular constituents such as nucleic acids and proteins [1].
The study, led by NASA astrobiologist Felisa Wolfe-Simon, showed that a new strain of the Halomonadaceae family of halophilic proteobacteria, isolated from the alkaline and hypersaline Mono Lake in California (Fig 1), could not only survive in arsenic-rich conditions, such as those found in its original environment, but even thrive by using arsenic entirely in place of phosphorus. “The definition of life has just expanded. As we pursue our efforts to seek signs of life in the solar system, we have to think more broadly, more diversely and consider life as we do not know it,” commented Ed Weiler, NASA's associate administrator for the Science Mission Directorate at the agency's Headquarters in Washington, in the original press release [2].

Figure 1: Sunrise at Mono Lake. Mono Lake, located in eastern California, is bounded to the west by the Sierra Nevada mountains. This ancient alkaline lake is known for unusual tufa (limestone) formations rising from the water's surface (shown here), as well as for its hypersalinity and high concentrations of arsenic. See Wolfe-Simon et al [1]. Credit: Henry Bortman.

The accompanying “search for life beyond Earth” and “alternative biochemistry makeup” hints contained in the same release were lapped up by the media, which covered the breakthrough with headlines such as “Arsenic-loving bacteria may help in hunt for alien life” (BBC News), “Arsenic-based bacteria point to new life forms” (New Scientist) and “Arsenic-feeding bacteria find expands traditional notions of life” (CNN). However, it did not take long for criticism to emerge, with many scientists openly questioning whether background levels of phosphorus could have fuelled the bacteria's growth in the cultures, whether arsenate compounds are even stable in aqueous solution, and whether the tests the authors used to prove that arsenic atoms were replacing phosphorus ones in key biomolecules were accurate.
The backlash was so bitter that Science published the concerns of several research groups commenting on the technical shortcomings of the study and went so far as to change its original press release for reporters, adding a warning note that reads: "Clarification: this paper describes a bacterium that substitutes arsenic for a small percentage of its phosphorus, rather than living entirely off arsenic."

Microbiologists Simon Silver and Le T. Phung, from the University of Illinois, Chicago, USA, were heavily critical of the study, voicing their concern in one of the journals of the Federation of European Microbiological Societies, FEMS Microbiology Letters. "The recent online report in Science […] either (1) wonderfully expands our imaginations as to how living cells might function […] or (2) is just the newest example of how scientist-authors can walk off the plank in their imaginations when interpreting their results, how peer reviewers (if there were any) simply missed their responsibilities and how a press release from the publisher of Science can result in irresponsible publicity in the New York Times and on television. We suggest the latter alternative is the case, and that this report should have been stopped at each of several stages" [3]. Meanwhile, Wolfe-Simon is looking for another chance to prove she was right about the arsenic-loving bug, and Silver and colleagues have completed the bacterium's genome shotgun sequencing and found 3,400 genes in its 3.5 million bases (www.ncbi.nlm.nih.gov/Traces/wgs/?val=AHBC01).

"I can only comment that it would probably be best if one had avoided a flurry of press conferences and speculative extrapolations. The discovery, if true, would be similarly impressive without any hype in the press releases," commented John Ioannidis, Professor of Medicine at Stanford University School of Medicine in the USA.
"I also think that this is the kind of discovery that can definitely wait for a validation by several independent teams before stirring the world. It is not the type of research finding that one cannot wait to trumpet as if thousands and millions of people were to die if they did not know about it," he explained. "If validated, it may be material for a Nobel prize, but if not, then the claims would backfire on the credibility of science in the public view."

Another instructive example of science hyping was sparked by a recent report of fossil teeth, dating to between 200,000 and 400,000 years ago, which were unearthed in the Qesem Cave near Tel Aviv by Israeli and Spanish scientists [4]. Although the teeth cannot yet be conclusively ascribed to Homo sapiens, Homo neanderthalensis or any other species of hominid, the media coverage and the original press release from Tel Aviv University stretched the relevance of the story—and the evidence—proclaiming that the finding demonstrates humans lived in Israel 400,000 years ago, which should force scientists to rewrite human history. Were such evidence of modern humans in the Middle East so long ago confirmed, it would indeed clash with the prevailing view of human origin in Africa some 200,000 years ago and the dispersal from the cradle continent that began about 70,000 years ago. But, as freelance science writer Brian Switek has pointed out, "The identity of the Qesem Cave humans cannot be conclusively determined. All the grandiose statements about their relevance to the origin of our species reach beyond what the actual fossil material will allow" [5].

An example of sensationalist coverage? "It has long been believed that modern man emerged from the continent of Africa 200,000 years ago.
Now Tel Aviv University archaeologists have uncovered evidence that Homo sapiens roamed the land now called Israel as early as 400,000 years ago—the earliest evidence for the existence of modern man anywhere in the world," reads a press release from the New York-based organization, American Friends of Tel Aviv University [6].

"The extent of hype depends on how people interpret facts and evidence, and their intent in the claims they are making. Hype in science can range from 'no hype', where predictions of scientific futures are 100% fact based, to complete exaggeration based on no facts or evidence," commented Zubin Master, a researcher in science ethics at the University of Alberta in Edmonton, Canada. "Intention also plays a role in hype and the prediction of scientific futures, as making extravagant claims, for example in an attempt to secure funds, could be tantamount to lying."

Are scientists more and more often indulging in creative speculation when interpreting their results, just to get extraordinary media coverage of their discoveries? Is science journalism progressively shifting towards hyping stories to attract readers?

"The vast majority of scientific work can wait for some independent validation before its importance is trumpeted to the wider public. Over-interpretation of results is common and as scientists we are continuously under pressure to show that we make big discoveries," commented Ioannidis.
"However, probably our role [as scientists] is more important in making sure that we provide balanced views of evidence and in identifying how we can question more rigorously the validity of our own discoveries."

Stephanie Suhr, who is involved in the management of the European XFEL—a facility being built in Germany to generate intense X-ray flashes for use in many disciplines—notes in her introduction to a series of essays on the ethics of science journalism that, "Arguably, there may also be an increasing temptation for scientists to hype their research and 'hit the headlines'" [7]. In her analysis, Suhr quotes at least one instance—the discovery in 2009 of the Darwinius masillae fossil, presented as the missing link in human evolution [8]—in which the release of a 'breakthrough' scientific publication seems to have been coordinated with simultaneous documentaries and press releases, resulting in what can be considered a case study in science hyping [7].

Although there is nothing wrong in principle with a broad communication strategy aimed at the rapid dissemination of a scientific discovery, some caveats exist. "[This] strategy […] might be better applied to a scientific subject or body of research. When applied to a single study, there [is] a far greater likelihood of engaging in unmerited hype with the risk of diminishing public trust or at least numbing the audience to claims of 'startling new discoveries'," wrote science communication expert Matthew Nisbet in his Age of Engagement blog (bigthink.com/blogs/age-of-engagement) about how media communication was managed in the Darwinius affair.
"[A]ctivating the various channels and audiences was the right strategy but the language and metaphor used strayed into the realm of hype," Nisbet, who is an Associate Professor in the School of Communication at American University, Washington DC, USA, commented in his post [9]. "We are ethically bound to think carefully about how to go beyond the very small audience that follows traditional science coverage and think systematically about how to reach a wider, more diverse audience via multiple media platforms. But in engaging with these new media platforms and audiences, we are also ethically bound to avoid hype and maintain accuracy and context" [9].

But the blame for science hype cannot be laid solely at the feet of scientists and press officers. Journalists must take their fair share of reproach. "As news online comes faster and faster, there is an enormous temptation for media outlets and journalists to quickly publish topics that will grab the readers' attention, sometimes at the cost of accuracy," Suhr wrote [7]. Of course, the media landscape is extremely varied, as science blogger and writer Bora Zivkovic pointed out. "There is no unified thing called 'Media'. There are wonderful specialized science writers out there, and there are beat reporters who occasionally get assigned a science story as one of several they have to file every day," he explained. "There are careful reporters, and there are those who tend to hype. There are media outlets that value accuracy above everything else; others that put beauty of language above all else; and there are outlets that value speed, sexy headlines and ad revenue above all."

One notable example of media-sourced hype comes from J. Craig Venter's announcement in the spring of 2010 of the first self-replicating bacterial cell controlled by a synthetic genome (Fig 2).
A major media buzz ensued, over-emphasizing and somewhat distorting an already remarkable scientific achievement. Press coverage ranged from the extremes of announcing 'artificial life' to saying that Venter was playing God, adding to the cultural and bioethical tension the warning that synthetic organisms could be turned into biological weapons or cause environmental disasters.

Figure 2 | Schematic depicting the assembly of a synthetic Mycoplasma mycoides genome in yeast. For details of the construction of the genome, please see the original article. From Gibson et al [13] Science 329, 52–56. Reprinted with permission from AAAS.

"The notion that scientists might some day create life is a fraught meme in Western culture. One mustn't mess with such things, we are told, because the creation of life is the province of gods, monsters, and practitioners of the dark arts. Thus, any hint that science may be on the verge of putting the power of creation into the hands of mere mortals elicits a certain discomfort, even if the hint amounts to no more than distorted gossip," remarked Rob Carlson, who writes on the future role of biology as a human technology, about the public reaction and the media frenzy that arose from the news [10].

Yet the media can also behave responsibly when faced with extravagant claims in press releases. Fiona Fox, Chief Executive of the Science Media Centre in the UK, details such an example in her blog, On Science and the Media (fionafox.blogspot.com). The Science Media Centre's role is to facilitate communication between scientists and the press, so they often receive calls from journalists asking to be put in touch with an expert. In this case, the journalist asked for an expert to comment on a story about silver being more effective against cancer than chemotherapy.
A wild claim; yet, as Fox points out in her blog, the hype came directly from the institution's press office: "Under the heading 'A silver bullet to beat cancer?' the top line of the press release stated that 'Lab tests have shown that it (silver) is as effective as the leading chemotherapy drug—and may have far fewer side effects.' Far from including any caveats or cautionary notes up front, the press office even included an introductory note claiming that the study 'has confirmed the quack claim that silver has cancer-killing properties'" [11]. Fox praises the majority of the UK national press that concluded that this was not a big story to cover, pointing out that, "We've now got to the stage where not only do the best science journalists have to fight the perverse news values of their news editors but also to try to read between the lines of overhyped press releases to get to the truth of what a scientific study is really claiming."

Yet, is hype detrimental to science? In many instances, the concern is that hype inflates public expectations, resulting in a loss of trust in a given technology or research avenue if promises are not kept; however, the premise is not fully proven (Sidebar A). "There is no empirical evidence to suggest that unmet promises due to hype in biotechnology, and possibly other scientific fields, will lead to a loss of public trust and, potentially, a loss of public support for science. Thus, arguments made on hype and public trust must be nuanced to reflect this understanding," Master pointed out.

Sidebar A | Up and down the hype cycle

Although hype is usually considered a negative and largely unwanted aspect of scientific and technological communication, it cannot be denied that emphasizing, at least initially, the benefits of a given technology can further its development and use. From this point of view, hype can be seen as a normal stage of technological development, within certain limits. The maturity, adoption and application of specific technologies apparently follow a common trend pattern, described by the information technology company Gartner, Inc. as the 'hype cycle'. The idea is based on the observation that, after an initial trigger phase, novel technologies pass through a peak of over-excitement (or hype), often followed by a subsequent general disenchantment, before eventually coming under the spotlight again and reaching a stable plateau of productivity. Thus, hype cycles "[h]ighlight overhyped areas against those that are high impact, estimate how long technologies and trends will take to reach maturity, and help organizations decide when to adopt" (www.gartner.com).

"Science is a human endeavour and as such it is inevitably shaped by our subjective responses. Scientists are not immune to these same reactions and it might be valuable to evaluate the visibility of different scientific concepts or technologies using the hype cycle," commented Pedro Beltrao, a cellular biologist at the University of California San Francisco, USA, who runs the Public Rambling blog (pbeltrao.blogspot.com) about bioinformatics science and technology. The exercise of placing technologies in the context of the hype cycle can help us to distinguish between their real productive value and our subjective level of excitement, Beltrao explained. "As an example, I have tried to place a few concepts and technologies related to systems biology along the cycle's axis of visibility and maturity [see illustration].
Using this, one could suggest that technologies like gene-expression arrays or mass-spectrometry have reached a stable productivity level, while the potential of concepts like personalized medicine or genome-wide association studies (GWAS) might be currently over-valued."

Together with bioethicist colleague David Resnik, Master has recently highlighted the need for empirical research that examines the relationships between hype, public trust, and public enthusiasm and/or support [12]. Their argument proposes that studies on the effect of hype on public trust can be undertaken by using both quantitative and qualitative methods: "Research can be designed to measure hype through a variety of sources including websites, blogs, movies, billboards, magazines, scientific publications, and press releases," the authors write. "Semi-structured interviews with several specific stakeholders including genetics researchers, media representatives, patient advocates, other academic researchers (that is, ethicists, lawyers, and social scientists), physicians, ethics review board members, patients with genetic diseases, government spokespersons, and politicians could be performed. Also, members of the general public would be interviewed" [12]. They also point out that such an approach to estimate hype and its effect on public enthusiasm and support should carefully define the public under study, as different publics might have different expectations of scientific research, and will therefore have different baseline levels of trust.

Ultimately, exaggerating, hyping or outright lying is rarely a good thing. Hyping science is detrimental to various degrees to all science communication stakeholders—scientists, institutions, journalists, writers, newspapers and the public.
It is important that scientists take responsibility for their share of the hyping done and do not automatically blame the media for making things up or getting things wrong. Such discipline in science communication is increasingly important as science searches for answers to the challenges of this century. Increased awareness of the underlying risks of over-hyping research should help to balance the scientific facts with speculation on the enticing truths and possibilities they reveal. The real challenge lies in favouring such an evolved approach to science communication in the face of a rolling 24-hour news cycle, tight science budgets and the uncontrolled and uncontrollable world of the Internet.

The hype cycle for the life sciences. Pedro Beltrao's view of the excitement–disappointment–maturation cycle of bioscience-related technologies and/or ideas. GWAS: genome-wide association studies. Credit: Pedro Beltrao.
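The trajectory Sidebar A describes, a sharp peak of inflated expectations followed by a trough of disillusionment and a slower climb to a stable plateau of productivity, can be captured in a toy numerical model. The sketch below is purely illustrative: the functional form (a Gaussian hype peak added to a logistic maturity curve) and every parameter value are assumptions of this article's editor, not part of Gartner's methodology or Beltrao's figure.

```python
import math

def hype_visibility(t, peak_height=1.0, peak_time=1.0, peak_width=0.4,
                    plateau=0.6, rise_midpoint=4.0, rise_rate=1.5):
    """Toy model of a technology's visibility over time on the hype cycle.

    A Gaussian 'peak of inflated expectations' is superimposed on a
    logistic rise toward the 'plateau of productivity'. All parameters
    are arbitrary, chosen only to reproduce the qualitative shape.
    """
    peak = peak_height * math.exp(-((t - peak_time) / peak_width) ** 2)
    maturity = plateau / (1.0 + math.exp(-rise_rate * (t - rise_midpoint)))
    return peak + maturity

# Crude text plot: early hype peak, trough of disillusionment,
# then a slower climb to a stable plateau.
for step in range(16):
    t = step * 0.5
    print(f"t={t:4.1f} |{'#' * int(hype_visibility(t) * 40)}")
```

Running the loop makes the qualitative claim of the sidebar concrete: visibility spikes shortly after the trigger, collapses well below its eventual steady-state level, and only later settles at the plateau, which is why judging a technology at either extreme of the curve is misleading.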
