Similar Articles
1.
Despite extensive genetic analysis, the evolutionary relationship between polar bears (Ursus maritimus) and brown bears (U. arctos) remains unclear. The two most recent comprehensive reports indicate a recent divergence with little subsequent admixture or a much more ancient divergence followed by extensive admixture. At the center of this controversy are the Alaskan ABC Islands brown bears that show evidence of shared ancestry with polar bears. We present an analysis of genome-wide sequence data for seven polar bears, one ABC Islands brown bear, one mainland Alaskan brown bear, and a black bear (U. americanus), plus recently published datasets from other bears. Surprisingly, we find clear evidence for gene flow from polar bears into ABC Islands brown bears but no evidence of gene flow from brown bears into polar bears. Importantly, while polar bears contributed <1% of the autosomal genome of the ABC Islands brown bear, they contributed 6.5% of the X chromosome. The magnitude of sex-biased polar bear ancestry and the clear direction of gene flow suggest a model wherein the enigmatic ABC Islands brown bears are the descendants of a polar bear population that was gradually converted into brown bears via male-dominated brown bear admixture. We present a model that reconciles heretofore conflicting genetic observations. We posit that the ABC Islands brown bears derive from a population of polar bears likely stranded by the receding ice at the end of the last glacial period. Since then, male brown bear migration onto the islands has gradually converted these bears into an admixed population whose phenotype and genotype are principally brown bear, except at mtDNA and X-linked loci. This process of genome erosion and conversion may be a common outcome when climate change or other forces cause a population to become isolated and then overrun by species with which it can hybridize.
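The scenario described in this abstract (a stranded polar bear population gradually overrun by immigrant brown bear males) makes a simple quantitative prediction: autosomal polar bear ancestry should erode faster than X-linked ancestry, while maternally inherited mtDNA should not erode at all. The sketch below is a minimal toy recursion of that verbal model, not the authors' analysis; the migration rate m and the number of generations are illustrative assumptions.

```python
# Toy model of sex-biased admixture: a founding polar bear population receives
# immigrant brown bear fathers at rate m per generation, while all mothers are
# local. Track expected polar bear ancestry on autosomes, X chromosome and mtDNA.
# Parameter values are illustrative assumptions, not estimates from the paper.

def simulate(m=0.10, generations=100):
    a = 1.0               # autosomal polar bear ancestry
    x_f, x_m = 1.0, 1.0   # X-linked ancestry in local females / local males
    mt = 1.0              # mitochondrial ancestry (maternal line only)
    for _ in range(generations):
        a_next = 0.5 * a + 0.5 * (1 - m) * a        # half from mother, half from a possibly immigrant father
        x_f_next = 0.5 * x_f + 0.5 * (1 - m) * x_m  # daughters: one X from each parent
        x_m_next = x_f                              # sons: X from mother only
        a, x_f, x_m = a_next, x_f_next, x_m_next
        # mt stays 1.0: immigrant males never transmit mtDNA
    x_pop = (2 * x_f + x_m) / 3                     # two thirds of X copies reside in females
    return a, x_pop, mt

if __name__ == "__main__":
    a, x, mt = simulate(m=0.10, generations=100)
    print(f"autosomes: {a:.2%}   X chromosome: {x:.2%}   mtDNA: {mt:.0%}")
```

Because fathers contribute half of an offspring's autosomes but only a third of the population's X copies, autosomal ancestry is lost roughly 1.5 times faster per generation; compounded over many generations, this reproduces the qualitative pattern reported above: far lower autosomal than X-linked polar bear ancestry, with polar bear mtDNA retained.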

2.
3.
Polar bears are an Arctic, marine-adapted species that is closely related to brown bears. Genome analyses have shown that polar bears are distinct and genetically homogeneous in comparison to brown bears. However, these analyses have also revealed a remarkable episode of polar bear gene flow into the population of brown bears that colonized the Admiralty, Baranof and Chichagof islands (ABC islands) of Alaska. Here, we present an analysis of data from a large panel of polar bear and brown bear genomes that includes brown bears from the ABC islands, the Alaskan mainland and Europe. Our results provide clear evidence that gene flow between the two species had a geographically wide impact, with polar bear DNA found within the genomes of brown bears living both on the ABC islands and on the Alaskan mainland. Intriguingly, while brown bear genomes contain up to 8.8% polar bear ancestry, polar bear genomes appear to be devoid of brown bear ancestry, suggesting the presence of a barrier to gene flow in that direction.
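Genome-wide tests for this kind of asymmetric allele sharing are commonly framed as ABBA-BABA (Patterson's D) comparisons; the sketch below shows that generic statistic rather than the study's actual pipeline. The input format, the population roles (P1 = European brown bear, P2 = ABC islands brown bear, P3 = polar bear, outgroup = American black bear) and the toy data are assumptions for illustration.

```python
# Generic ABBA-BABA (Patterson's D) sketch for detecting excess allele sharing
# between a test population (P2) and a candidate donor (P3), relative to a
# control population (P1) and an outgroup. Each site is a tuple of derived-allele
# frequencies (p1, p2, p3, p_out); the roles and data below are illustrative.

from typing import Iterable, Tuple

def d_statistic(sites: Iterable[Tuple[float, float, float, float]]) -> float:
    abba = baba = 0.0
    for p1, p2, p3, po in sites:
        # Frequency-weighted counts of the two informative site patterns.
        abba += (1 - p1) * p2 * p3 * (1 - po)   # P2 and P3 share the derived allele
        baba += p1 * (1 - p2) * p3 * (1 - po)   # P1 and P3 share the derived allele
    return 0.0 if abba + baba == 0 else (abba - baba) / (abba + baba)

if __name__ == "__main__":
    # Made-up sites in which P2 shares derived alleles with P3 more often than
    # P1 does, giving D > 0, i.e. excess P3 (polar bear) ancestry in P2.
    toy_sites = [
        (0.0, 1.0, 1.0, 0.0),  # ABBA-like site
        (0.0, 1.0, 1.0, 0.0),  # ABBA-like site
        (1.0, 0.0, 1.0, 0.0),  # BABA-like site
        (0.0, 0.0, 1.0, 0.0),  # uninformative for D
    ]
    print(f"D = {d_statistic(toy_sites):+.3f}")  # D = +0.333
```

In practice such statistics are computed over millions of sites with block-jackknife standard errors, and directionality is typically probed by also running the mirror-image comparisons (swapping which species supplies P1/P2 and P3).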

4.
Zhang JY (2011) EMBO reports 12(4): 302–306
How can grass-roots movements evolve into a national research strategy? The bottom-up emergence of synthetic biology in China could give some pointers.Given its potential to aid developments in renewable energy, biosensors, sustainable chemical industries, microbial drug factories and biomedical devices, synthetic biology has enormous implications for economic development. Many countries are therefore implementing strategies to promote progress in this field. Most notably, the USA is considered to be the leader in exploring the industrial potential of synthetic biology (Rodemeyer, 2009). Synthetic biology in Europe has benefited from several cross-border studies, such as the ‘New and Emerging Science and Technology'' programme (NEST, 2005) and the ‘Towards a European Strategy for Synthetic Biology'' project (TESSY; Gaisser et al, 2008). Yet, little is known in the West about Asia''s role in this ‘new industrial revolution'' (Kitney, 2009). In particular, China is investing heavily in scientific research for future developments, and is therefore likely to have an important role in the development of synthetic biology.Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework…In 2010, as part of a study of the international governance of synthetic biology, the author visited four leading research teams in three Chinese cities (Beijing, Tianjin and Hefei). The main aims of the visits were to understand perspectives in China on synthetic biology, to identify core themes among its scientific community, and to address questions such as ‘how did synthetic biology emerge in China?'', ‘what are the current funding conditions?'', ‘how is synthetic biology generally perceived?'' and ‘how is it regulated?''. Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework; one that is more dynamic and comprises more options than existing national or international research and development (R&D) strategies. Such findings might contribute to Western knowledge of Chinese R&D, but could also expose European and US policy-makers to alternative forms and patterns of research governance that have emerged from a grass-roots level.…the process of developing a framework is at least as important to research governance as the big question it might eventually addressA dominant narrative among the scientists interviewed is the prospect of a ‘big-question'' strategy to promote synthetic-biology research in China. This framework is at a consultation stage and key questions are still being discussed. Yet, fieldwork indicates that the process of developing a framework is at least as important to research governance as the big question it might eventually address. According to several interviewees, this approach aims to organize dispersed national R&D resources into one grand project that is essential to the technical development of the field, preferably focusing on an industry-related theme that is economically appealling to the Chinese public.Chinese scientists have a pragmatic vision for research; thinking of science in terms of its ‘instrumentality'' has long been regarded as characteristic of modern China (Schneider, 2003). 
However, for a country in which the scientific community is sometimes described as an “uncoordinated ‘bunch of loose ends''” (Cyranoski, 2001) “with limited synergies between them” (OECD, 2007), the envisaged big-question approach implies profound structural and organizational changes. Structurally, the approach proposes that the foundational (industry-related) research questions branch out into various streams of supporting research and more specific short-term research topics. Within such a framework, a variety of Chinese universities and research institutions can be recruited and coordinated at different levels towards solving the big question.It is important to note that although this big-question strategy is at a consultation stage and supervised by the Ministry of Science and Technology (MOST), the idea itself has emerged in a bottom-up manner. One academic who is involved in the ongoing ministerial consultation recounted that, “It [the big-question approach] was initially conversations among we scientists over the past couple of years. We saw this as an alternative way to keep up with international development and possibly lead to some scientific breakthrough. But we are happy to see that the Ministry is excited and wants to support such an idea as well.” As many technicalities remain to be addressed, there is no clear time-frame yet for when the project will be launched. Yet, this nationwide cooperation among scientists with an emerging commitment from MOST seems to be largely welcomed by researchers. Some interviewees described the excitement it generated among the Chinese scientific community as comparable with the establishment of “a new ‘moon-landing'' project”.Of greater significance than the time-frame is the development process that led to this proposition. On the one hand, the emergence of synthetic biology in China has a cosmopolitan feel: cross-border initiatives such as international student competitions, transnational funding opportunities and social debates in Western countries—for instance, about biosafety—all have an important role. On the other hand, the development of synthetic biology in China has some national particularities. Factors including geographical proximity, language, collegial familiarity and shared interests in economic development have all attracted Chinese scientists to the national strategy, to keep up with their international peers. Thus, to some extent, the development of synthetic biology in China is an advance not only in the material synthesis of the ‘cosmos''—the physical world—but also in the social synthesis of aligning national R&D resources and actors with the global scientific community.To comprehend how Chinese scientists have used national particularities and global research trends as mutually constructive influences, and to identify the implications of this for governance, this essay examines the emergence of synthetic biology in China from three perspectives: its initial activities, the evolution of funding opportunities, and the ongoing debates about research governance.China''s involvement in synthetic biology was largely promoted by the participation of students in the International Genetically Engineered Machine (iGEM) competition, an international contest for undergraduates initiated by the Massachusetts Institute of Technology (MIT) in the USA. 
Before the iGEM training workshop that was hosted by Tianjin University in the Spring of 2007, there were no research records and only two literature reviews on synthetic biology in Chinese scientific databases (Zhao & Wang, 2007). According to Chunting Zhang of Tianjin University—a leading figure in the promotion of synthetic biology in China—it was during these workshops that Chinese research institutions joined their efforts for the first time (Zhang, 2008). From the outset, the organization of the workshop had a national focus, while it engaged with international networks. Synthetic biologists, including Drew Endy from MIT and Christina Smolke from Stanford University, USA, were invited. Later that year, another training camp designed for iGEM tutors was organized in Tianjin and included delegates from Australia and Japan (Zhang, 2008).Through years of organizing iGEM-related conferences and workshops, Chinese universities have strengthened their presence at this international competition; in 2007, four teams from China participated. During the 2010 competition, 11 teams from nine universities in six provinces/municipalities took part. Meanwhile, recruiting, training and supervising iGEM teams has become an important institutional programme at an increasing number of universities.…training for iGEM has grown beyond winning the student awards and become a key component of exchanges between Chinese researchers and the international communityIt might be easy to interpret the enthusiasm for the iGEM as a passion for winning gold medals, as is conventionally the case with other international scientific competitions. This could be one motive for participating. Yet, training for iGEM has grown beyond winning the student awards and has become a key component of exchanges between Chinese researchers and the international community (Ding, 2010). Many of the Chinese scientists interviewed recounted the way in which their initial involvement in synthetic biology overlapped with their tutoring of iGEM teams. One associate professor at Tianjin University, who wrote the first undergraduate textbook on synthetic biology in China, half-jokingly said, “I mainly learnt [synthetic biology] through tutoring new iGEM teams every year.”Participation in such contests has not only helped to popularize synthetic biology in China, but has also influenced local research culture. One example of this is that the iGEM competition uses standard biological parts (BioBricks), and new BioBricks are submitted to an open registry for future sharing. A corresponding celebration of open-source can also be traced to within the Chinese synthetic-biology community. In contrast to the conventional perception that the Chinese scientific sector consists of a “very large number of ‘innovative islands''” (OECD, 2007; Zhang, 2010), communication between domestic teams is quite active. In addition to the formally organized national training camps and conferences, students themselves organize a nationwide, student-only workshop at which to informally test their ideas.More interestingly, when the author asked one team whether there are any plans to set up a ‘national bank'' for hosting designs from Chinese iGEM teams, in order to benefit domestic teams, both the tutor and team members thought this proposal a bit “strange”. The team leader responded, “But why? There is no need. With BioBricks, we can get any parts we want quite easily. 
Plus, it directly connects us with all the data produced by iGEM teams around the world, let alone in China. A national bank would just be a small-scale duplicate.”From the beginning, interest in the development of synthetic biology in China has been focused on collective efforts within and across national borders. In contrast to conventional critiques on the Chinese scientific community''s “inclination toward competition and secrecy, rather than openness” (Solo & Pressberg, 2007; OECD, 2007; Zhang, 2010), there seems to be a new outlook emerging from the participation of Chinese universities in the iGEM contest. Of course, that is not to say that the BioBricks model is without problems (Rai & Boyle, 2007), or to exclude inputs from other institutional channels. Yet, continuous grass-roots exchanges, such as the undergraduate-level competition, might be as instrumental as formal protocols in shaping research culture. The indifference of Chinese scientists to a ‘national bank'' seems to suggest that the distinction between the ‘national'' and ‘international'' scientific communities has become blurred, if not insignificant.However, frequent cross-institutional exchanges and the domestic organization of iGEM workshops seem to have nurtured the development of a national synthetic-biology community in China, in which grass-roots scientists are comfortable relying on institutions with a cosmopolitan character—such as the BioBricks Foundation—to facilitate local research. To some extent, one could argue that in the eyes of Chinese scientists, national and international resources are one accessible global pool. This grass-roots interest in incorporating local and global advantages is not limited to student training and education, but also exhibited in evolving funding and regulatory debates.In the development of research funding for synthetic biology, a similar bottom-up consolidation of national and global resources can also be observed. As noted earlier, synthetic-biology research in China is in its infancy. A popular view is that China has the potential to lead this field, as it has strong support from related disciplines. In terms of genome sequencing, DNA synthesis, genetic engineering, systems biology and bioinformatics, China is “almost at the same level as developed countries” (Pan, 2008), but synthetic-biology research has only been carried out “sporadically” (Pan, 2008; Huang, 2009). There are few nationally funded projects and there is no discernible industrial involvement (Yang, 2010). Most existing synthetic-biology research is led by universities or institutions that are affiliated with the Chinese Academy of Science (CAS). As one CAS academic commented, “there are many Chinese scientists who are keen on conducting synthetic-biology research. But no substantial research has been launched nor has long-term investment been committed.”The initial undertaking of academic research on synthetic biology in China has therefore benefited from transnational initiatives. The first synthetic-biology project in China, launched in October 2006, was part of the ‘Programmable Bacteria Catalyzing Research'' (PROBACTYS) project, funded by the Sixth Framework Programme of the European Union (Yang, 2010). 
A year later, another cross-border collaborative effort led to the establishment of the first synthetic-biology centre in China: the Edinburgh University–Tianjing University Joint Research Centre for Systems Biology and Synthetic Biology (Zhang, 2008).There is also a comparable commitment to national research coordination. A year after China''s first participation in iGEM, the 2008 Xiangshan conference focused on domestic progress. From 2007 to 2009, only five projects in China received national funding, all of which came from the National Natural Science Foundation of China (NSFC). This funding totalled ¥1,330,000 (approximately £133,000; www.nsfc.org), which is low in comparison to the £891,000 funding that was given in the UK for seven Networks in Synthetic Biology in 2007 alone (www.bbsrc.ac.uk).One of the primary challenges in obtaining funding identified by the interviewees is that, as an emerging science, synthetic biology is not yet appreciated by Chinese funding agencies. After the Xiangshan conference, the CAS invited scientists to a series of conferences in late 2009. According to the interviewees, one of the main outcomes was the founding of a ‘China Synthetic Biology Coordination Group''; an informal association of around 30 conference delegates from various research institutions. This group formulated a ‘regulatory suggestion'' that they submitted to MOST, which stated the necessity and implications of supporting synthetic-biology research. In addition, leading scientists such as Chunting Zhang and Huanming Yang—President of the Beijing Genomic Institute (BGI), who co-chaired the Beijing Institutes of Life Science (BILS) conferences—have been active in communicating with government institutions. The initial results of this can be seen in the MOST 2010 Application Guidelines for the National Basic Research Program, in which synthetic biology was included for the first time, among ‘key supporting areas'' (MOST, 2010). Meanwhile, in 2010, NSFC allocated ¥1,500,000 (approximately £150,000) to synthetic-biology research, which is more than the total funding the area had received in the past three years.The search for funding further demonstrates the dynamics between national and transnational resources. Chinese R&D initiatives have to deal with the fact that scientific venture-capital and non-governmental research charities are underdeveloped in China. In contrast to the EU or the USA, government institutions in China, such as the NSFC and MOST, are the main and sometimes only domestic sources of funding. Yet, transnational funding opportunities facilitate the development of synthetic biology by alleviating local structural and financial constraints, and further integrate the Chinese scientific community into international research.This is not a linear ‘going-global'' process; it is important for Chinese scientists to secure and promote national and regional support. In addition, this alignment of national funding schemes with global research progress is similar to the iGEM experience, as it is being initiated through informal bottom-up associations between scientists, rather than by top-down institutional channels.As more institutions have joined iGEM training camps and participated in related conferences, a shared interest among the Chinese scientific community in developing synthetic biology has become visible. 
In late 2009, at the conference that founded the informal ‘coordination group'', the proposition of integrating national expertise through a big-question approach emerged. According to one professor in Beijing—who was a key participant in the discussion at the time—this proposition of a nationwide synergy was not so much about ‘national pride'' or an aim to develop a ‘Chinese'' synthetic biology, it was about research practicality. She explained, “synthetic biology is at the convergence of many disciplines, computer modelling, nano-technology, bioengineering, genomic research etc. Individual researchers like me can only operate on part of the production chain. But I myself would like to see where my findings would fit in a bigger picture as well. It just makes sense for a country the size of China to set up some collective and coordinated framework so as to seek scientific breakthrough.”From the first participation in the iGEM contest to the later exploration of funding opportunities and collective research plans, scientists have been keen to invite and incorporate domestic and international resources, to keep up with global research. Yet, there are still regulatory challenges to be met.…with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual'' mannerThe reputation of “the ‘wild East'' of biology” (Dennis, 2002) is associated with China'' previous inattention to ethical concerns about the life sciences, especially in embryonic-stem-cell research. Similarly, synthetic biology creates few social concerns in China. Public debate is minimal and most media coverage has been positive. Synthetic biology is depicted as “a core in the fourth wave of scientific development” (Pan, 2008) or “another scientific revolution” (Huang, 2009). Whilst recognizing its possible risks, mainstream media believe that “more people would be attracted to doing good while making a profit than doing evil” (Fang & He, 2010). In addition, biosecurity and biosafety training in China are at an early stage, with few mandatory courses for students (Barr & Zhang, 2010). The four leading synthetic-biology teams I visited regarded the general biosafety regulations that apply to microbiology laboratories as sufficient for synthetic biology. In short, with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual'' manner.Yet, fieldwork suggests that, in contrast to this previous insensitivity to global ethical concerns, the synthetic-biology community in China has taken a more proactive approach to engaging with international debates. It is important to note that there are still no synthetic-biology-specific administrative guidelines or professional codes of conduct in China. However, Chinese stakeholders participate in building a ‘mutual inclusiveness'' between global and domestic discussions.One of the most recent examples of this is a national conference about the ethical and biosafety implications of synthetic biology, which was jointly hosted by the China Association for Science and Technology, the Chinese Society of Biotechnology and the Beijing Institutes of Life Science CAS, in Suzhou in June 2010. The discussion was open to the mainstream media. The debate was not simply a recapitulation of Western worries, such as playing god, potential dual-use or ecological containment. 
It also focused on the particular concerns of developing countries about how to avoid further widening the developmental gap with advanced countries (Liu, 2010).In addition to general discussions, there are also sustained transnational communications. For example, one of the first three projects funded by the NSFC was a three-year collaboration on biosafety and risk-assessment frameworks between the Institute of Botany at CAS and the Austrian Organization for International Dialogue and Conflict Management (IDC).Chinese scientists are also keen to increase their involvement in the formulation of international regulations. The CAS and the Chinese Academy of Engineering are engaged with their peer institutions in the UK and the USA to “design more robust frameworks for oversight, intellectual property and international cooperation” (Royal Society, 2009). It is too early to tell what influence China will achieve in this field. Yet, the changing image of the country from an unconcerned wild East to a partner in lively discussions signals a new dynamic in the global development of synthetic biology.Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in ChinaFrom self-organized participation in iGEM to bottom-up funding and governance initiatives, two features are repeatedly exhibited in the emergence of synthetic biology in China: global resources and international perspectives complement national interests; and the national and cosmopolitan research strengths are mostly instigated at the grass-roots level. During the process of introducing, developing and reflecting on synthetic biology, many formal or informal, provisional or long-term alliances have been established from the bottom up. Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China.However, the inputs of different social actors has not led to disintegration of the field into an array of individualized pursuits, but has transformed it into collective synergies, or the big-question approach. Underlying the diverse efforts of Chinese scientists is a sense of ‘inclusiveness'', or the idea of bringing together previously detached research expertise. Thus, the big-question strategy cannot be interpreted as just another nationally organized agenda in response to global scientific advancements. Instead, it represents a more intricate development path corresponding to how contemporary research evolves on the ground.In comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stageIn comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage. Government input—such as the potential stewardship of the MOST in directing a big-question approach or long-term funding—remain important; the scientists who were interviewed expend a great deal of effort to attract governmental participation. Yet, China'' experience highlights that the key to comprehending regional scientific capacity lies not so much in what the government can do, but rather in what is taking place in laboratories. It is important to remember that Chinese iGEM victories, collaborative synthetic-biology projects and ethical discussions all took place before the government became involved. 
Thus, to appreciate fully the dynamics of an emerging science, it might be necessary to focus on what is formulated from the bottom up. The experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary research. More specifically, it is a result of the commitment of Chinese scientists to incorporating national and international resources, actors and social concerns. For practical reasons, the national organization of research, such as through the big-question approach, might still have an important role. However, synthetic biology may be shaped not only by national agendas but also by transnational activities and scientific resources. What Chinese scientists will collectively achieve remains to be seen. Yet, the emergence of synthetic biology in China might be indicative of a new paradigm for how research practices can be introduced, normalized and regulated.

5.
6.
Paige Brown (2012) EMBO reports 13(11): 964–967
Many scientists blame the media for sensationalising scientific findings, but new research suggests that things can go awry at all levels, from the scientific report to the press officer to the journalist.Everything gives you cancer, at least if you believe what you read in the news or see on TV. Fortunately, everything also cures cancer, from red wine to silver nanoparticles. Of course the truth lies somewhere in between, and scientists might point out that these claims are at worst dangerous sensationalism and at best misjudged journalism. These kinds of media story, which inflate the risks and benefits of research, have led to a mistrust of the press among some scientists. But are journalists solely at fault when science reporting goes wrong, as many scientists believe [1]? New research suggests it is time to lay to rest the myth that the press alone is to blame. The truth is far more nuanced and science reporting can go wrong at many stages, from the researchers to the press officers to the diverse producers of news.Many science communication researchers suggest that science in the media is not as distorted as scientists believe, although they do admit that science reporting tends to under-represent risks and over-emphasize benefits [2]. “I think there is a lot less of this [misreported science] than some scientists presume. I actually think that there is a bit of laziness in the narrative around science and the media,” said Fiona Fox, Director of the UK Science Media Centre (London, UK), an independent press office that serves as a liaison between scientists and journalists. “My bottom line is that, certainly in the UK, a vast majority of journalists report science accurately in a measured way, and it''s certainly not a terrible story. Having said that, lots of things do go wrong for a number of reasons.”Fox said that the centre sees everything from fantastic press releases to those that completely misrepresent and sensationalize scientific findings. They have applauded news stories that beautifully reported the caveats and limitations of a particular scientific study, but they have also cringed as a radio talk show pitted a massive and influential body of research against a single non-scientist sceptic.“You ask, is it the press releases, is it the universities, is it the journalists? The truth is that it''s all three,” Fox said. “But even admitting that is admitting more complexity. So anyone who says that scientists and university press officers deliver perfectly accurate science and the media misrepresent it […] that really is not the whole story.”Scientists and scientific institutions today invest more time and effort into communicating with the media than they did a decade ago, especially given the modern emphasis on communicating scientific results to the public [3]. Today, there are considerable pressures on scientists to reach out and even ‘sell their work'' to public relations officers and journalists. “For every story that a journalist has hyped and sensationalized, there will be another example of that coming directly from a press release that we [scientists] hyped and sensationalized,” Fox said. 
“And for every time that that was a science press officer, there will also be a science press officer who will tell you, ‘I did a much more nuanced press release, but the academic wanted me to over claim for it''.”Although science public relations has helped to put scientific issues on the public agenda, there are also dangers inherent in the process of translation from original research to press release to media story. Previous research in the area of science communication has focused on conflicting scientific and media values, and the effects of science media on audiences. However, studies have raised awareness of the role of press releases in distorting information from the lab bench to published news [4].In a 2011 study of genetic research claims made in press releases and mainstream print media, science communication researcher Jean Brechman, who works at the US advertising and marketing research firm Gallup & Robinson, found evidence that scientific knowledge gets distorted as it is “filtered and translated for mass communication” with “slippages and inconsistencies” occurring along the way, such that the end message does not accurately represent the original science [4]. Although Brechman and colleagues found a concerning point of distortion in the transition between press release and news article, they also observed a misrepresentation of the original science in a significant portion of the press releases themselves.In a previous study, Brechman and his colleagues had also concluded that “errors commonly attributed to science journalists, such as lack of qualifying details and use of oversimplified language, originate in press releases.” Even more worrisome, as Fox told a Nature commentary author in 2009, public relations departments are increasingly filling the need of the media for quick content [5].Fox believes that a common characteristic of misrepresented science in press releases and the media is the over-claiming of preliminary studies. As such, the growing prevalence of rapid, short-format publications that publicize early results might be exacerbating the problem. Research has also revealed that over-emphasis on the beneficial effects of experimental medical treatments seen in press releases and news coverage, often called ‘spin'', can stem from bias in the abstract of the original scientific article itself [6]. Such findings warrant a closer examination of the language used in scientific articles and abstracts, as the wording and ‘spin'' of conclusions drawn by researchers in their peer-reviewed publications might have significant impacts on subsequent media coverage.Of course, some stories about scientific discoveries are just not easy to tell owing to their complexity. They are “messy, complicated, open to interpretation and ripe for misreporting,” as Fox wrote in a post on her blog On Science and the Media (fionafox.blogspot.com). They do not fit the single-page blog post or the short press release. Some scientific experiments and the peer-reviewed articles and media stories that flow from them are inherently full of caveats, contexts and conflicting results and cannot be communicated in a short format [7].In a 2012 issue of Perspectives on Psychological Science, Marco Bertamini at the University of Liverpool (UK) and Marcus R. 
Munafo at the University of Bristol (UK) suggested that a shift toward “bite-size” publications in areas of science such as psychology might be promoting more single-study models of research, fewer efforts to replicate initial findings, curtailed detailing of previous relevant work and bias toward “false alarm” or false-positive results [7]. The authors pointed out that larger, multi-experiment studies are typically published in longer papers with larger sample sizes and tend to be more accurate. They also suggested that this culture of brief, single-study reports based on small data sets will lead to the contamination of the scientific literature with false-positive findings. Unfortunately, false science far more easily enters the literature than leaves it [8].One famous example is that of Andrew Wakefield, whose 1998 publication in The Lancet claimed to link autism with the combined measles, mumps and rubella (MMR) vaccination. It took years of work by many scientists, and the aid of an exposé by British investigative reporter Brian Deer, to finally force retraction of the paper. However, significant damage had already been done and many parents continue to avoid immunizing their children out of fear. Deer claims that scientific journals were a large part of the problem: “[D]uring the many years in which I investigated the MMR vaccine controversy, the worst and most inexcusable reporting on the subject, apart from the original Wakefield claims in the Lancet, was published in Nature and republished in Scientific American,” he said. “There is an enormous amount of hypocrisy among those who accuse the media of misreporting science.”What factors are promoting this shift to bite-size science? One is certainly the increasing pressure and competition to publish many papers in high-impact journals, which prefer short articles with new, ground-breaking findings.“Bibliometrics is playing a larger role in academia in deciding who gets a job and who gets promoted,” Bertamini said. “In general, if things are measured by citations, there is pressure to publish as much and as often as possible, and also to focus on what is surprising; thus, we can see how this may lead to an inflation in the number of papers but also an increase in publication bias.”Bertamini points to the real possibility that measured effects emerging from a group of small samples can be much larger than the real effect in the total population. “This variability is bad enough, but it is even worse when you consider that what is more likely to be written up and accepted for publication are exactly the larger differences,” he explained.Alongside the endless pressure to publish, the nature of the peer-reviewed publication process itself prioritizes exciting and statistically impressive results. Fluke scientific discoveries and surprising results are often considered newsworthy, even if they end up being false-positives. The bite-size article aggravates this problem in what Bertamini fears is a growing similarity between academic writing and media reporting: “The general media, including blogs and newspapers, will of course focus on what is curious, funny, controversial, and so on. Academic papers must not do the same, and the quality control system is there to prevent that.”The real danger is that, with more than one million scientific papers published every year, journalists can tend to rely on only a few influential journals such as Science and Nature for science news [3]. 
Although the influence and reliability of these prestigious journals is well established, the risk that journalists and other media producers might be propagating the exciting yet preliminary results published in their pages is undeniable.Fox has personal experience of the consequences of hype surrounding surprising but preliminary science. Her sister has chronic fatigue syndrome (CFS), a debilitating medical condition with no known test or cure. When Science published an article in 2009 linking CFS with a viral agent, Fox was naturally both curious and sceptical [9]. “I thought even if I knew that this was an incredibly significant finding, the fact that nobody had ever found a biological link before also meant that it would have to be replicated before patients could get excited,” Fox explained. “And of course what happened was all the UK journalists were desperate to splash it on the front page because it was so surprising and so significant and could completely revolutionize the approach to CFS, the treatment and potential cure.”Fox observed that while some journalists placed the caveats of the study deep within their stories, others left them out completely. “I gather in the USA it was massive, it was front page news and patients were going online to try and find a test for this particular virus. But in the end, nobody could replicate it, literally nobody. A Dutch group tried, Imperial College London, lots of groups, but nobody could replicate it. And in the end, the paper has been withdrawn from Science.”For Fox, the fact that the paper was withdrawn, incidentally due to a finding of contamination in the samples, was less interesting than the way that the paper was reported by journalists. “We would want any journal press officer to literally in the first paragraph be highlighting the fact that this was such a surprising result that it shouldn''t be splashed on the front page,” she said. Of course to the journalist, waiting for the study to be replicated is anathema in a culture that values exciting and new findings. “To the scientific community, the fact that it is surprising and new means that we should calm down and wait until it is proved,” Fox warned.So, the media must also take its share of the blame when it comes to distorting science news. Indeed, research analysing science coverage in the media has shown that stories tend to exaggerate preliminary findings, use sensationalist terms, avoid complex issues, fail to mention financial conflicts of interest, ignore statistical limitations and transform inherent uncertainties into controversy [3,10].One concerning development within journalism is the ‘balanced treatment'' of controversial science, also called ‘false balance'' by many science communicators. This balanced treatment has helped supporters of pseudoscientific notions gain equal ground with scientific experts in media stories on issues such as climate change and biotechnology [11].“Almost every time the issue of creationism or intelligent design comes up, many newspapers and other media feel that they need to present ‘both sides'', even though one is clearly nonsensical, and indeed harmful to public education,” commented Massimo Pigliucci, author of Nonsense on Stilts: How to Tell Science from Bunk [12].Fox also criticizes false balance on issues such as global climate change. “On that one you can''t blame the scientific community, you can''t blame science press officers,” she said. “That is a real clashing of values. 
One of the values that most journalists have bred into them is about balance and impartiality, balancing the views of one person with an opponent when it''s controversial. So on issues like climate change, where there is a big controversy, their instinct as a journalist will be to make sure that if they have a climate scientist on the radio or on TV or quoted in the newspaper, they pick up the phone and make sure that they have a climate skeptic.” However, balanced viewpoints should not threaten years of rigorous scientific research embodied in a peer-reviewed publication. “We are not saying generally that we [scientists] want special treatment from journalists,” Fox said, “but we are saying that this whole principle of balance, which applies quite well in politics, doesn''t cross over to science…”Bertamini believes the situation could be made worse if publication standards are relaxed in favour of promoting a more public and open review process. “If today you were to research the issue of human contribution to global warming you would find a consensus in the scientific literature. Yet you would find no such consensus in the general media. In part this is due to the existence of powerful and well-funded lobbies that fill the media with unfounded skepticism. Now imagine if these lobbies had more access to publish their views in the scientific literature, maybe in the form of post publication feedback. This would be a dangerous consequence of blurring the line that separates scientific writing and the broader media.”In an age in which the way science is presented in the news can have significant impacts for audiences, especially when it comes to health news, what can science communicators and journalists do to keep audiences reading without having to distort, hype, trivialize, dramatize or otherwise misrepresent science?Pigliucci believes that many different sources—press releases, blogs, newspapers and investigative science journalism pieces—can cross-check reported science and challenge its accuracy, if necessary. “There are examples of bloggers pointing out technical problems with published scientific papers,” Pigliucci said. “Unfortunately, as we all know, the game can be played the other way around too, with plenty of bloggers, ‘twitterers'' and others actually obfuscating and muddling things even more.” Pigliucci hopes to see a cultural change take place in science reporting, one that emphasizes “more reflective shouting, less shouting of talking points,” he said.Fox believes that journalists still need to cover scientific developments more responsibly, especially given that scientists are increasingly reaching out to press officers and the public. Journalists can inform, intrigue and entertain whilst maintaining accurate representations of the original science, but need to understand that preliminary results must be replicated and validated before being splashed on the front page. They should also strive to interview experts who do not have financial ties or competing interests in the research, and they should put scientific stories in the context of a broader process of nonlinear discovery. According to Pigliucci, journalists can and should be educating themselves on the research process and the science of logical conclusion-making, giving themselves the tools to provide critical and investigative coverage when needed. 
At the same time, scientists should undertake proper media training so that they are comfortable communicating their work to journalists or press officers. “I don't think there is any fundamental flaw in how we communicate science, but there is a systemic flaw in the sense that we simply do not educate people about logical fallacies and cognitive biases,” Pigliucci said, advising that scientists and communicators alike should be intimately familiar with the subjects of philosophy and psychology. “As for bunk science, it has always been with us, and it probably always will be, because human beings are naturally prone to all sorts of biases and fallacious reasoning. As Carl Sagan once put it, science (and reason) is like a candle in the dark. It needs constant protection and a lot of thankless work to keep it alive.”

7.
8.

Background

New marine invasions have been recorded in increasing numbers along the world's coasts due in part to the warming of the oceans and the ability of many invasive marine species to tolerate a broader thermal range than native species. Several marine invertebrate species have invaded the U.S. southern and mid-Atlantic coast from the Caribbean and this poleward range expansion has been termed ‘Caribbean Creep’. While models have predicted the continued decline of global biodiversity over the next 100 years due to global climate change, few studies have examined the episodic impacts of prolonged cold events that could affect species range expansions.

Methodology/Principal Findings

A pronounced cold spell occurred in January 2010 along the U.S. southern and mid-Atlantic coast and resulted in the mortality of several terrestrial and marine species. To experimentally test whether cold-water temperatures may have caused the disappearance of one species of the ‘Caribbean Creep’, we exposed the non-native crab Petrolisthes armatus to different thermal treatments that mimicked abnormal and severe winter temperatures. Our findings indicate that Petrolisthes armatus cannot tolerate prolonged and extreme cold temperatures (4–6°C) and suggest that aperiodic cold winters may be a critical ‘reset’ mechanism that will limit the range expansion of other ‘Caribbean Creep’ species.

Conclusions/Significance

We suggest that temperature ‘aberrations’ such as ‘cold snaps’ are an important and overlooked part of climate change. These climate fluctuations should be accounted for in future studies and models, particularly with reference to introduced subtropical and tropical species and predictions of both rates of invasion and rates of unidirectional geographic expansion.

9.
10.
Lessons from science studies for the ongoing debate about ‘big'' versus ‘little'' research projectsDuring the past six decades, the importance of scientific research to the developed world and the daily lives of its citizens has led many industrialized countries to rebrand themselves as ‘knowledge-based economies''. The increasing role of science as a main driver of innovation and economic growth has also changed the nature of research itself. Starting with the physical sciences, recent decades have seen academic research increasingly conducted in the form of large, expensive and collaborative ‘big science'' projects that often involve multidisciplinary, multinational teams of scientists, engineers and other experts.Although laboratory biology was late to join the big science trend, there has nevertheless been a remarkable increase in the number, scope and complexity of research collaborations…Although laboratory biology was late to join the big science trend, there has nevertheless been a remarkable increase in the number, scope and complexity of research collaborations and projects involving biologists over the past two decades (Parker et al, 2010). The Human Genome Project (HGP) is arguably the most well known of these and attracted serious scientific, public and government attention to ‘big biology''. Initial exchanges were polarized and often polemic, as proponents of the HGP applauded the advent of big biology and argued that it would produce results unattainable through other means (Hood, 1990). Critics highlighted the negative consequences of massive-scale research, including the industrialization, bureaucratization and politicization of research (Rechsteiner, 1990). They also suggested that it was not suited to generating knowledge at all; Nobel laureate Sydney Brenner joked that sequencing was so boring it should be done by prisoners: “the more heinous the crime, the bigger the chromosome they would have to decipher” (Roberts, 2001).A recent Opinion in EMBO reports summarized the arguments against “the creeping hegemony” of ‘big science'' over ‘little science'' in biomedical research. First, many large research projects are of questionable scientific and practical value. Second, big science transfers the control of research topics and goals to bureaucrats, when decisions about research should be primarily driven by the scientific community (Petsko, 2009). Gregory Petsko makes a valid point in his Opinion about wasteful research projects and raises the important question of how research goals should be set and by whom. Here, we contextualize Petsko''s arguments by drawing on the history and sociology of science to expound the drawbacks and benefits of big science. We then advance an alternative to the current antipodes of ‘big'' and ‘little'' biology, which offers some of the benefits and avoids some of the adverse consequences.Big science is not a recent development. Among the first large, collaborative research projects were the Manhattan Project to develop the atomic bomb, and efforts to decipher German codes during the Second World War. The concept itself was put forward in 1961 by physicist Alvin Weinberg, and further developed by historian of science Derek De Solla Price in his pioneering book, Little Science, Big Science. “The large-scale character of modern science, new and shining and all powerful, is so apparent that the happy term ‘Big Science'' has been coined to describe it” (De Solla Price, 1963). Weinberg noted that science had become ‘big'' in two ways. 
First, through the development of elaborate research instrumentation, the use of which requires large research teams, and second, through the explosive growth of scientific research in general. More recently, big science has come to refer to a diverse but strongly related set of changes in the organization of scientific research. This includes expensive equipment and large research teams, but also the increasing industrialization of research activities, the escalating frequency of interdisciplinary and international collaborations, and the increasing manpower needed to achieve research goals (Galison & Hevly, 1992). Many areas of biological research have shifted in these directions in recent years and have radically altered the methods by which biologists generate scientific knowledge.Despite this long history of collaboration, laboratory biology remained ‘small-scale'' until the rising prominence of molecular biology changed the research landscapeUnderstanding the implications of this change begins with an appreciation of the history of collaborations in the life sciences—biology has long been a collaborative effort. Natural scientists accompanied the great explorers in the grand alliance between science and exploration during the sixteenth and seventeenth centuries (Capshew & Rader, 1992), which not only served to map uncharted territories, but also contributed enormously to knowledge of the fauna and flora discovered. These early expeditions gradually evolved into coordinated, multidisciplinary research programmes, which began with the International Polar Years, intended to concentrate international research efforts at the North and South Poles (1882–1883; 1932–1933). The Polar Years became exemplars of large-scale life science collaboration, begetting the International Geophysical Year (1957–1958) and the International Biological Programme (1968–1974).For Weinberg, the potentially negative consequences associated with big science were “adminstratitis, moneyitis, and journalitis”…Despite this long history of collaboration, laboratory biology remained ‘small-scale'' until the rising prominence of molecular biology changed the research landscape. During the late 1950s and early 1960s, many research organizations encouraged international collaboration in the life sciences, spurring the creation of, among other things, the European Molecular Biology Organization (1964) and the European Molecular Biology Laboratory (1974). In addition, international mapping and sequencing projects were developed around model organisms such as Drosophila and Caenorhabditis elegans, and scientists formed research networks, exchanged research materials and information, and divided labour across laboratories. These new ways of working set the stage for the HGP, which is widely acknowledged as the cornerstone of the current ‘post-genomics era''. As an editorial on ‘post-genomics cultures'' put it in the journal Nature, “Like it or not, big biology is here to stay” (Anon, 2001).Just as big science is not new, neither are concerns about its consequences. As early as 1948, the sociologist Max Weber worried that as equipment was becoming more expensive, scientists were losing autonomy and becoming more dependent on external funding (Weber, 1948). Similarly, although Weinberg and De Solla Price expressed wonder at the scope of the changes they were witnessing, they too offered critical evaluations. 
For Weinberg, the potentially negative consequences associated with big science were “adminstratitis, moneyitis, and journalitis”; meaning the dominance of science administrators over practitioners, the tendency to view funding increases as a panacea for solving scientific problems, and progressively blurry lines between scientific and popular writing in order to woo public support for big research projects (Weinberg, 1961). De Solla Price worried that the bureaucracy associated with big science would fail to entice the intellectual mavericks on which science depends (De Solla Price, 1963). These concerns remain valid and have been voiced time and again.As big science represents a major investment of time, money and manpower, it tends to determine and channel research in particular directions that afford certain possibilities and preclude others (Cook & Brown, 1999). In the worst case, this can result in entire scientific communities following false leads, as was the case in the 1940s and 1950s for Soviet agronomy. Huge investments were made to demonstrate the superiority of Lamarckian over Mendelian theories of heritability, which held back Russian biology for decades (Soyfer, 1994). Such worst-case scenarios are, however, rare. A more likely consequence is that big science can diminish the diversity of research approaches. For instance, plasma fusion scientists are now under pressure to design projects that are relevant to the large-scale International Thermonuclear Experimental Reactor, despite the potential benefits of a wide array of smaller-scale machines and approaches (Hackett et al, 2004). Big science projects can also involve coordination challenges, take substantial time to realize success, and be difficult to evaluate (Neal et al, 2008).Importantly, big science projects allow for the coordination and activation of diverse forms of expertise across disciplinary, national and professional boundariesAnother danger of big science is that researchers will lose the intrinsic satisfaction that arises from having personal control over their work. Dissatisfaction could lower research productivity (Babu & Singh, 1998) and might create the concomitant danger of losing talented young researchers to other, more engaging callings. Moreover, the alienation of scientists from their work as a result of big science enterprises can lead to a loss of personal responsibility for research. In turn, this can increase the likelihood of misconduct, as effective social control is eroded and “the satisfactions of science are overshadowed by organizational demands, economic calculations, and career strategies” (Hackett, 1994).Practicing scientists are aware of these risks. Yet, they remain engaged in large-scale projects because they must, but also because of the real benefits these projects offer. Importantly, big science projects allow for the coordination and activation of diverse forms of expertise across disciplinary, national and professional boundaries to solve otherwise intractable basic and applied problems. Although calling for international and interdisciplinary collaboration is popular, practicing it is notably less popular and much harder (Weingart, 2000). Big science projects can act as a focal point that allows researchers from diverse backgrounds to cooperate, and simultaneously advances different scientific specialties while forging interstitial connections among them. 
Another major benefit of big science is that it facilitates the development of common research standards and metrics, allowing for the rapid development of nascent research frontiers (Fujimura, 1996). Furthermore, the high profile of big science efforts such as the HGP and CERN draw public attention to science, potentially enhancing scientific literacy and the public''s willingness to support research.Rather than arguing for or against big science, molecular biology would best benefit from strategic investments in a diverse portfolio of big, little and ‘mezzo'' research projectsBig science can also ease some of the problems associated with scientific management. In terms of training, graduate students and junior researchers involved in big science projects can gain additional skills in problem-solving, communication and team working (Court & Morris, 1994). The bureaucratic structure and well-defined roles of big science projects also make leadership transitions and researcher attrition easier to manage compared with the informal, refractory organization of most small research projects. Big science projects also provide a visible platform for resource acquisition and the recruitment of new scientific talent. Moreover, through their sheer size, diversity and complexity, they can also increase the frequency of serendipitous social interactions and scientific discoveries (Hackett et al, 2008). Finally, large-scale research projects can influence scientific and public policy. Big science creates organizational structures in which many scientists share responsibility for, and expectations of, a scientific problem (Van Lente, 1993). This shared ownership and these shared futures help coordinate communication and enable researchers to present a united front when advancing the potential benefits of their projects to funding bodies.Given these benefits and pitfalls of big science, how might molecular biology best proceed? Petsko''s response is that, “[s]cientific priorities must, for the most part, be set by the free exchange of ideas in the scientific literature, at meetings and in review panels. They must be set from the bottom up, from the community of scientists, not by the people who control the purse strings.” It is certainly the case, as Petsko also acknowledges, that science has benefited from a combination of generous public support and professional autonomy. However, we are less sanguine about his belief that the scientific community alone has the capacity to ascertain the practical value of particular lines of inquiry, determine the most appropriate scale of research, and bring them to fruition. In fact, current mismatches between the production of scientific knowledge and the information needs of public policy-makers strongly suggest that the opposite is true (Sarewitz & Pielke, 2007).Instead, we maintain that these types of decision should be determined through collective decision-making that involves researchers, governmental funding agencies, science policy experts and the public. In fact, the highly successful HGP involved such collaborations (Lambright, 2002). Taking into account the opinions and attitudes of these stakeholders better links knowledge production to the public good (Cash et al, 2003)—a major justification for supporting big biology. 
We do agree with Petsko, however, that large-scale projects can develop pathological characteristics, and that all programmes should therefore undergo regular assessments to determine their continuing worth.

Rather than arguing for or against big science, molecular biology would best benefit from strategic investments in a diverse portfolio of big, little and ‘mezzo’ research projects. Their size, duration and organizational structure should be determined by the research question, subject matter and intended goals (Westfall, 2003). Parties involved in making these decisions should, in turn, aim at striking a profitable balance between differently sized research projects to garner the benefits of each and allow practitioners the autonomy to choose among them.

This will require new, innovative methods for supporting and coordinating research. An important first step is ensuring that funding is made available for all kinds of research at a range of scales. For this to happen, the current funding model needs to be modified. The practice of allocating separate funds for individual investigator-driven and collective research projects is a step in the right direction, but it does not discriminate between projects of different sizes at a sufficiently fine resolution. Instead, multiple funding pools should be made available for projects of different sizes and scales, allowing for greater accuracy in project planning, funding and evaluation.

Second, science policy should consciously facilitate the ‘scaling up’, ‘scaling down’ and concatenation of research projects when needed. For instance, special funds might be established for supporting small-scale but potentially transformative research with the capacity to be scaled up in the future. Alternatively, small-scale satellite research projects that are more nimble, exploratory and risky could complement big science initiatives or be generated by them. This is also in line with Petsko’s statement that “the best kind of big science is the kind that supports and generates lots of good little science.” Another potentially fruitful strategy we suggest would be to fund independent, small-scale research projects to work on co-relevant research with the later objective of consolidating them into a single project in a kind of building-block assembly. Using these and other mechanisms for organizing research at different scales could help to ameliorate some of the problems associated with big science, while also accruing its most important benefits.

Within the life sciences, the field of ecology perhaps best exemplifies this strategy. Although it encompasses many small-scale laboratory and field studies, ecologists now collaborate in a variety of novel organizations that blend elements of big, little and mezzo science and that are designed to catalyse different forms of research. For example, the US National Center for Ecological Analysis and Synthesis brings together researchers and data from many smaller projects to synthesize their findings. The Long Term Ecological Research Network consists of dozens of mezzo-scale collaborations focused on specific sites, but also leverages big science through cross-site collaborations.
While investments are made in classical big science projects, such as the National Ecological Observatory Network, no one project or approach has dominated—nor should it. In these ways, ecologists have been able to reap the benefits of big science whilst maintaining diverse research approaches and individual autonomy, and still being able to enjoy the intrinsic satisfaction associated with scientific work.

Big biology is here to stay and is neither a curse nor a blessing. It is up to scientists and policy-makers to discern how to benefit from the advantages that ‘bigness’ has to offer, while avoiding the pitfalls inherent in so doing. The challenge confronting molecular biology in the coming years is to decide which kinds of research projects are best suited to getting the job done. Molecular biology itself arose, in part, from the migration of physicists to biology; as physics research projects and collaborations grew and became more dependent on expensive equipment, appreciating the saliency of one’s own work became increasingly difficult, which led some to seek refuge in the comparatively little science of biology (Dev, 1990). The current situation, which Petsko criticizes in his Opinion article, is thus the result of an organizational and intellectual cycle that began more than six decades ago. It would certainly behoove molecular biologists to heed his warnings and consider the best paths forward. (Niki Vermeulen, John N. Parker & Bart Penders)  相似文献

11.
Samuel Caddick 《EMBO reports》2008,9(12):1174-1176
  相似文献   

12.
The temptation to silence dissenters whose non-mainstream views negatively affect public policies is powerful. However, silencing dissent, no matter how scientifically unsound it might be, can cause the public to mistrust science in general.Dissent is crucial for the advancement of science. Disagreement is at the heart of peer review and is important for uncovering unjustified assumptions, flawed methodologies and problematic reasoning. Enabling and encouraging dissent also helps to generate alternative hypotheses, models and explanations. Yet, despite the importance of dissent in science, there is growing concern that dissenting voices have a negative effect on the public perception of science, on policy-making and public health. In some cases, dissenting views are deliberately used to derail certain policies. For example, dissenting positions on climate change, environmental toxins or the hazards of tobacco smoke [1,2] seem to laypeople as equally valid conflicting opinions and thereby create or increase uncertainty. Critics often use legitimate scientific disagreements about narrow claims to reinforce the impression of uncertainty about general and widely accepted truths; for instance, that a given substance is harmful [3,4]. This impression of uncertainty about the evidence is then used to question particular policies [1,2,5,6].The negative effects of dissent on establishing public polices are present in cases in which the disagreements are scientifically well-grounded, but the significance of the dissent is misunderstood or blown out of proportion. A study showing that many factors affect the size of reef islands, to the effect that they will not necessarily be reduced in size as sea levels rise [7], was simplistically interpreted by the media as evidence that climate change will not have a negative impact on reef islands [8].In other instances, dissenting voices affect the public perception of and motivation to follow public-health policies or recommendations. For example, the publication of a now debunked link between the measles, mumps and rubella vaccine and autism [9], as well as the claim that the mercury preservative thimerosal, which was used in childhood vaccines, was a possible risk factor for autism [10,11], created public doubts about the safety of vaccinating children. Although later studies showed no evidence for these claims, doubts led many parents to reject vaccinations for their children, risking the herd immunity for diseases that had been largely eradicated from the industrialized world [12,13,14,15]. Many scientists have therefore come to regard dissent as problematic if it has the potential to affect public behaviour and policy-making. However, we argue that such concerns about dissent as an obstacle to public policy are both dangerous and misguided.Whether dissent is based on genuine scientific evidence or is unfounded, interested parties can use it to sow doubt, thwart public policies, promote problematic alternatives and lead the public to ignore sound advice. In response, scientists have adopted several strategies to limit these negative effects of dissent—masking dissent, silencing dissent and discrediting dissenters. The first strategy aims to present a united front to the public. Scientists mask existing disagreements among themselves by presenting only those claims or pieces of evidence about which they agree [16]. 
Although there is nearly universal agreement among scientists that average global temperatures are increasing, there are also legitimate disagreements about how much warming will occur, how quickly it will occur and the impact it might have [7,17,18,19]. As presenting these disagreements to the public probably creates more doubt and uncertainty than is warranted, scientists react by presenting only general claims [20].

A second strategy is to silence dissenting views that might have negative consequences. This can take the form of self-censorship when scientists are reluctant to publish or publicly discuss research that might—incorrectly—be used to question existing scientific knowledge. For example, there are genuine disagreements about how best to model cloud formation, water vapour feedback and aerosols in general circulation paradigms, all of which have significant effects on the magnitude of global climate change predictions [17,19]. Yet, some scientists are hesitant to make these disagreements public, for fear that they will be accused of being denialists, faulted for confusing the public and policy-makers, censured for abetting climate-change deniers, or criticized for undermining public policy [21,22,23,24].

Another strategy is to discredit dissenters, especially in cases in which the dissent seems to be ideologically motivated. This could involve publicizing the financial or political ties of the dissenters [2,6,25], which would call attention to their probable bias. In other cases, scientists might discredit the expertise of the dissenter. One such example concerns a 2007 study published in the Proceedings of the National Academy of Sciences USA, which claimed that caddis fly larvae consuming Bt maize pollen die at twice the rate of larvae feeding on non-Bt maize pollen [26]. Immediately after publication, both the authors and the study itself became the target of relentless and sometimes scathing attacks from a group of scientists who were concerned that anti-GMO (genetically modified organism) interest groups would seize on the study to advance their agenda [27]. The article was criticized for its methodology and its conclusions, the Proceedings of the National Academy of Sciences USA was criticized for publishing the article, and the US National Science Foundation was criticized for funding the study in the first place.

Public policies, health advice and regulatory decisions should be based on the best available evidence and knowledge. As the public often lack the expertise to assess the quality of dissenting views, disagreements have the potential to cast doubt over the reliability of scientific knowledge and lead the public to question relevant policies. Strategies to block dissent therefore seem reasonable as a means to protect much-needed or effective health policies, advice and regulations. However, even if the public were unable to evaluate the science appropriately, targeting dissent is not the most appropriate strategy to prevent negative side effects, for several reasons. Chiefly, it contributes to the problems that the critics of dissent seek to address, namely increasing the cacophony of dissenting voices that only aim to create doubt. Focusing on dissent as a problematic activity sends the message to policy-makers and the public that any dissent undermines scientific knowledge.
Reinforcing this false assumption further incentivizes those who seek merely to create doubt to thwart particular policies. Not surprisingly, think-tanks, industry and other organizations are willing to manufacture dissent simply to derail policies that they find economically or ideologically undesirable.Another danger of targeting dissent is that it probably stifles legitimate crucial voices that are needed for both advancing science and informing sound policy decisions. Attacking dissent makes scientists reluctant to voice genuine doubts, especially if they believe that doing so might harm their reputations, damage their careers and undermine prevailing theories or policies needed. For instance, a panel of scientists for the US National Academy of Sciences, when presenting a risk assessment of radiation in 1956, omitted wildly different predictions about the potential genetic harm of radiation [16]. They did not include this wide range of predictions in their final report precisely because they thought the differences would undermine confidence in their recommendations. Yet, this information could have been relevant to policy-makers. As such, targeting dissent as an obstacle to public policy might simply reinforce self-censorship and stifle legitimate and scientifically informed debate. If this happens, scientific progress is hindered.Second, even if the public has mistaken beliefs about science or the state of the knowledge of the science in question, focusing on dissent is not an effective way to protect public policy from false claims. It fails to address the presumed cause of the problem—the apparent lack of understanding of the science by the public. A better alternative would be to promote the public''s scientific literacy. If the public were educated to better assess the quality of the dissent and thus disregard instances of ideological, unsupported or unsound dissent, dissenting voices would not have such a negative effect. Of course, one might argue that educating the public would be costly and difficult, and that therefore, the public should simply listen to scientists about which dissent to ignore and which to consider. This is, however, a paternalistic attitude that requires the public to remain ignorant ‘for their own good''; a position that seems unjustified on many levels as there are better alternatives for addressing the problem.Moreover, silencing dissent, rather than promoting scientific literacy, risks undermining public trust in science even if the dissent is invalid. This was exemplified by the 2009 case of hacked e-mails from a computer server at the University of East Anglia''s Climate Research Unit (CRU). After the selective leaking of the e-mails, climate scientists at the CRU came under fire because some of the quotes, which were taken out of context, seemed to suggest that they were fudging data or suppressing dissenting views [28,29,30,31]. The stolen e-mails gave further ammunition to those opposing policies to reduce greenhouse emissions as they could use accusations of data ‘cover up'' as proof that climate scientists were not being honest with the public [29,30,31]. It also allowed critics to present climate scientists as conspirators who were trying to push a political agenda [32]. 
As a result, although there was nothing scientifically inappropriate revealed in the ‘climategate’ e-mails, the episode had the consequence of undermining the public’s trust in climate science [33,34,35,36].

A significant amount of evidence shows that the ‘deficit model’ of public understanding of science, as described above, is too simplistic to account correctly for the public’s reluctance to accept particular policy decisions [37,38,39,40]. It ignores other important factors such as people’s attitudes towards science and technology, their social, political and ethical values, their past experiences and the public’s trust in governmental institutions [41,42,43,44]. The development of sound public policy depends not only on good science, but also on value judgements. One can agree with the scientific evidence for the safety of GMOs, for instance, but still disagree with the widespread use of GMOs because of social justice concerns about the developing world’s dependence on the interests of the global market. Similarly, one need not reject the scientific evidence about the harmful health effects of sugar to reject regulations on sugary drinks. One could rationally challenge such regulations on the grounds that informed citizens ought to be able to make free decisions about what they consume. Whether or not these value judgements are justified is an open question, but the focus on dissent hinders our ability to have that debate.

As such, targeting dissent completely fails to address the real issues. The focus on dissent, and the threat that it seems to pose to public policy, misdiagnoses the problem as one of the public misunderstanding science, its quality and its authority. It assumes that scientific or technological knowledge is the only relevant factor in the development of policy, and it ignores the role of other factors, such as value judgements about social benefits and harms, and institutional trust and reliability [45,46]. The emphasis on dissent, and thus on scientific knowledge, as the only or main factor in public policy decisions does not give due attention to these legitimate considerations.

Furthermore, by misdiagnosing the problem, targeting dissent also impedes more effective solutions and prevents an informed debate about the values that should guide public policy. By framing policy debates solely as debates over scientific facts, the normative aspects of public policy are hidden and neglected. Relevant ethical, social and political values fail to be publicly acknowledged and openly discussed.

Controversies over GMOs and climate policies have called attention to the negative effects of dissent in the scientific community. Based on the assumption that the public’s reluctance to support particular policies is the result of their inability to properly understand scientific evidence, scientists have tried to limit dissenting views that create doubt. However, as outlined above, targeting dissent as an obstacle to public policy probably does more harm than good. It fails to focus on the real problem at stake—that science is not the only relevant factor in sound policy-making. Of course, we do not deny that scientific evidence is important to the development of public policy and behavioural decisions.
Rather, our claim is that this role is misunderstood and often oversimplified in ways that actually contribute to problems in developing sound science-based policies. (Inmaculada de Melo-Martín & Kristen Intemann)  相似文献

13.

Background

Sea ice across the Arctic is declining and altering physical characteristics of marine ecosystems. Polar bears (Ursus maritimus) have been identified as vulnerable to changes in sea ice conditions. We use sea ice projections for the Canadian Arctic Archipelago from 2006 – 2100 to gain insight into the conservation challenges for polar bears with respect to habitat loss using metrics developed from polar bear energetics modeling.

Principal Findings

Shifts away from multiyear ice to annual ice cover throughout the region, as well as lengthening ice-free periods, may become critical for polar bears before the end of the 21st century with projected warming. Each polar bear population in the Archipelago may undergo 2–5 months of ice-free conditions, where no such conditions exist presently. We identify spatially and temporally explicit ice-free periods that extend beyond what polar bears require for nutritional and reproductive demands.

Conclusions/Significance

Under business-as-usual climate projections, polar bears may face starvation and reproductive failure across the entire Archipelago by the year 2100.  相似文献   

14.
《Ecological Complexity》2008,5(4):289-302
We address the three main issues raised by Stirling et al. [Stirling, I., Derocher, A.E., Gough, W.A., Rode, K., in press. Response to Dyck et al. (2007) on polar bears and climate change in western Hudson Bay. Ecol. Complexity]: (1) evidence of the role of climate warming in affecting the western Hudson Bay polar bear population, (2) responses to suggested importance of human–polar bear interactions, and (3) limitations on polar bear adaptation to projected climate change. We assert that our original paper did not provide any “alternative explanations [that] are largely unsupported by the data” or misrepresent the original claims by Stirling et al. [Stirling, I., Lunn, N.J., Iacozza, I., 1999. Long-term trends in the population ecology of polar bears in western Hudson Bay in relation to climate change. Arctic 52, 294–306], Derocher et al. [Derocher, A.E., Lunn, N.J., Stirling, I., 2004. Polar bears in a warming climate. Integr. Comp. Biol. 44, 163–176], and other peer-approved papers authored by Stirling and colleagues. In sharp contrast, we show that the conclusion of Stirling et al. [Stirling, I., Derocher, A.E., Gough, W.A., Rode, K., in press. Response to Dyck et al. (2007) on polar bears and climate change in western Hudson Bay. Ecol. Complexity] – suggesting warming temperatures (and other related climatic changes) are the predominant determinant of polar bear population status, not only in western Hudson (WH) Bay but also for populations elsewhere in the Arctic – is unsupportable by the current scientific evidence.

The commentary by Stirling et al. [Stirling, I., Derocher, A.E., Gough, W.A., Rode, K., in press. Response to Dyck et al. (2007) on polar bears and climate change in western Hudson Bay. Ecol. Complexity] is an example of uni-dimensional, or reductionist, thinking, which is not useful when assessing effects of climate change on complex ecosystems. Polar bears of WH are exposed to a multitude of environmental perturbations, including human interference and factors such as unknown seal population size and possible competition with polar bears from other populations, such that isolation of any single variable as the certain root cause (i.e., climate change in the form of warming spring air temperatures), without recognizing confounding interactions, is imprudent, unjustified and of questionable scientific utility. Dyck et al. [Dyck, M.G., Soon, W., Baydack, R.K., Legates, D.R., Baliunas, S., Ball, T.F., Hancock, L.O., 2007. Polar bears of western Hudson Bay and climate change: Are warming spring air temperatures the “ultimate” survival control factor? Ecol. Complexity, 4, 73–84. doi:10.1016/j.ecocom.2007.03.002] agree that some polar bear populations may be negatively impacted by future environmental changes; but an oversimplification of the complex ecosystem interactions (of which humans are a part) may not be beneficial in studying external effects on polar bears. Science evolves through questioning and proposing hypotheses that can be critically tested, in the absence of which, as Krebs and Berteaux [Krebs, C.J., Berteaux, D., 2006. Problems and pitfalls in relating climate variability to population dynamics. Clim. Res. 32, 143–149] observe, “we will be little more than storytellers.”  相似文献

15.
L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main benefactor, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the endless frontier, and it has become the underlying rationale for public support and funding of science.However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned both the development of measures to assess the quality of scientific research itself, and to determine the societal impact of research. Although the first set of measures have been relatively successful and are widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research. The impact of applied research, such as drug development, IT or engineering, is obvious but the benefits of basic research are less so, harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex''s Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.…the growth of scientific research during the past decades has outpaced the public resources available to fund itThe problem actually begins with defining the ‘societal impact of research''. 
A series of different concepts has been introduced: ‘third-stream activities'' [4], ‘societal benefits'' or ‘societal quality'' [5], ‘usefulness'' [6], ‘public values'' [7], ‘knowledge transfer'' [8] and ‘societal relevance'' [9, 10]. Yet, each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.In this context, ‘societal benefits'' refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits'' are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits'' benefit the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits'' increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to,for example, Thomson Reuters'' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14]. For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse or complex and contingent, and it is not clear what should be attributed to research or to other inputs. The third is the internationality problem that arises as a result of the international nature of R&D and innovation, which makes attribution virtually impossible. 
Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impact.…in many studies, the societal impact of research has been postulated rather than demonstratedIn addition, there are four other problems. First, it is hard to find experts to assess societal impact that is based on peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17]. Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution''s specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that, “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18].Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20]. The recommendation from this consultation is that impact should be measured in a quantifiable way, and expert panels should review narrative evidence in case studies supported by appropriate indicators [19,21].…premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impactMany of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact. 
The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art'' [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19]. ‘Mode 1'' describes research governed by the academic interests of a specific community, whereas ‘Mode 2'' is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is being conducted in the context of application [19].The new REF will also entail changes in budget allocations. The evaluation of a research unit for the purpose of allocations will determine 20% of the societal influence dimension [19]. The final REF guidance contains lists of examples for different types of societal impact [24].Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].Yet, the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science whereby scientists receive credits—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will follow probably—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact. “The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar'', that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. 
It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.…research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishmentsThe involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19, 28, 29].Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance. Rymer has noted that, “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18].Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting the public funding and support of basic researchWhether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support of basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.  相似文献   
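As a point of contrast with the societal-impact measures discussed above, the bibliometric indicators mentioned earlier in the article, such as the h-index, can be computed from citation data alone. The short sketch below illustrates the standard definition of the h-index (the largest h such that h of an author's papers have at least h citations each); the citation counts are invented for the example and nothing in the snippet comes from the article itself.

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)  # most-cited papers first
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical citation counts for one author's papers.
print(h_index([10, 8, 5, 4, 3, 2]))  # -> 4: four papers are cited at least four times
```

The simplicity of this calculation is precisely the contrast the article draws: no comparably compact, agreed-upon procedure yet exists for societal impact.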

16.
The scientific process requires a critical attitude towards existing hypotheses and obvious explanations. Teaching this mindset to students is both important and challenging.People who read about scientific discoveries might get the misleading impression that scientific research produces a few rare breakthroughs—once or twice per century—and a large body of ‘merely incremental'' studies. In reality, however, breakthrough discoveries are reported on a weekly basis, and one can cite many fields just in biology—brain imaging, non-coding RNAs and stem cell biology, to name a few—that have undergone paradigm shifts within the past decade.The truly surprising thing about discovery is not just that it happens at a regular pace, but that most significant discoveries occurred only after the scientific community had already accepted another explanation. It is not merely the accrual of new data that leads to a breakthrough, but a willingness to acknowledge that a problem that is already ‘solved'' might require an entirely different explanation. In the case of breakthroughs or paradigm shifts, this new explanation might seem far-fetched or nonsensical and not even worthy of serious consideration. It is as if new ideas are sitting right in front of everyone, but in their blind spots so that only those who use their peripheral vision can see them.Scientists do not all share any single method or way of working. Yet they tend to share certain prevalent attitudes: they accept ‘facts'' and ‘obvious'' explanations only provisionally, at arm''s length, as it were; they not only imagine alternatives, but—almost as a reflex—ask themselves what alternative explanations are possible.When teaching students, it is a challenge to convey this critical attitude towards seemingly obvious explanations. In the spring semester of 2009, I offered a seminar entitled The Process of Scientific Discovery to Honours undergraduate students at the University of Illinois-Chicago in the USA. I originally planned to cover aspects of discovery such as the impact of funding agencies, the importance of mentoring and hypothesis-driven as opposed to data-driven research. As the semester progressed, however, my sessions moved towards ‘teaching moments'' drawn from everyday life, which forced the students to look at familiar things in unfamiliar ways. These served as metaphors for certain aspects of the process by which scientists discover new paradigms.For the first seven weeks of the spring semester, the class read Everyday Practice of Science by Frederick Grinnell [1]. During the discussion of the first chapter, one of the students noted that Grinnell referred to a scientist generically as ‘she'' rather than ‘he'' or the neutral ‘he or she''. This use is unusual and made her vaguely uneasy: she wondered whether the author was making a sexist point. Before considering her hypothesis, I asked the class to make a list of assumptions that they took for granted when reading the chapter, together with the possible explanations for the use of ‘she'' in the first chapter, no matter how far-fetched or unlikely they might seem.For example, one might assume that Frederick Grinnell or ‘Fred'' is from a culture similar to our own. How would we interpret his behaviour and outlook if we knew that Fred came from an exotic foreign land? Another assumption is that Fred is male; how would we view the remark if we discover that Frederick is short for Fredericka? We have equally assumed that Fred, as with most humans, wants us to like him. 
Instead, perhaps he is being intentionally provocative in order to get our attention or move us out of our comfort zone. Perhaps he planted ‘she'' as a deliberate example for us to discuss, as he does later in the second chapter, in which he deliberately hides a strange item in plain sight within one of the illustrations in order to make a point about observing anomalies. Perhaps the book was written not by Fred but by a ghost writer? Perhaps the ‘she'' was a typo?The truly surprising thing about discovery is […] that most significant discoveries occurred only after the scientific community had already accepted another explanationLooking for patterns throughout the book, and in Fred''s other writing, might persuade us to discard some of the possible explanations: does ‘she'' appear just once? Does Fred use other unusual or provocative turns of phrase? Does Fred discuss gender bias or sexism explicitly? Has anyone written or complained about him? Of course, one could ask Fred directly what he meant, although without knowing him personally, it would be difficult to know how to interpret his answer or whether to take his remarks at face value. Notwithstanding the answer, the exercise is an important lesson about considering and weighing all possible explanations.Arguably, the most prominent term used in science studies is the notion of a ‘paradigm''. I use this term with reluctance, as it is extraordinarily ambiguous. For example, it could simply refer to a specific type of experimental design: a randomized, placebo-controlled clinical trial could be considered a paradigm. In the context of science studies, however, it most often refers to the idea of large-scale leaps in scientific world views, as promoted by Thomas Kuhn in The Structure of Scientific Revolutions [2]. Kuhn''s notion of a paradigm can lead one to believe—erroneously in my opinion—that paradigm shifts are the opposite of practical, everyday scientific problem-solving.A paradigm is recognized by the set of assumptions that an observer might not realize he or she is making…Instead, I propose here a definition of ‘paradigm'' that emphasizes not the nature of the problem, the type of discovery or the scope of its implications, but rather the psychology of the scientist. A scientist viewing a problem or phenomenon resides within a paradigm when he or she does not notice, and cannot imagine, that an alternative way of looking at things needs to be considered seriously. Importantly, a paradigm is not a viewpoint, model, interpretation, hypothesis or conclusion. A paradigm is not the object that is viewed but the lenses through which it is viewed. A paradigm is recognized by the set of assumptions that an observer might not realize he or she is making, but which imply many automatic expectations and simultaneously prevent the observer from seeing the issue in any other fashion.For example, the teacher–student paradigm feels natural and obvious, yet it is merely set up by habit and tradition. It implies lectures, assignments, grades, ways of addressing the professor and so on, all of which could be done differently, if we had merely thought to consider alternatives. What feels most natural in a paradigm is often the most arbitrary. When we have a birthday, we expect to have a cake with candles, yet there is no natural relationship at all between birthdays, cakes and candles. 
In fact, when something is arbitrary or conventional yet feels entirely natural, that is an important clue that a paradigm is present.It is certainly natural for people to colour their observations according to their expectations: “To a man with a hammer, everything looks like a nail,” as Mark Twain put it. However, this is a pitfall that scientists (and doctors) must try hard to avoid. When I was a first-year medical student at Albert Einstein College of Medicine in New York City, we took a class on how to approach patients. As part of this course, we attended a session in which a psychiatrist interviewed a ‘normal, healthy old person'' in order to understand better the lives and perspectives of the elderly.A man came in, and the psychiatrist began to ask him some benign questions. After about 10 minutes, however, the man began to pause before answering; then his answers became terse; then he said he did not feel well, excused himself and abruptly left the room. The psychiatrist continued to lecture to the students for another half-hour, analysing and interpreting the halting responses in terms of the emotional conflicts that the man was experiencing. ‘Repression'', ‘emotional blocks'', and ‘reaction formation'' were some of the terms bandied about.However, unbeknown to the class, the man had collapsed just on the other side of the classroom door. Two cardiologists happened to be walking by and instantly realized the man was having an acute heart attack. They instituted CPR on the spot, but the man died within a few minutes.The psychiatrist had been told that the man was healthy, and thus interpreted everything that he saw in psychological terms. It never entered his mind that the man might have been dying in front of his eyes. The cardiologists saw a man having a heart attack, and it never entered their minds that the man might have had psychological issues.The movie The Sixth Sense [3] resonated particularly well with my students and served as a platform for discussing attitudes that are helpful for scientific investigation, such as “keep an open mind”, “reality is much stranger than you can imagine” and “our conclusions are always provisional at best”. Best of all, The Sixth Sense demonstrates the tension that exists between different scientific paradigms in a clear and beautiful way. When Haley Joel Osment says, “I see dead people,” does he actually see ghosts? Or is he hallucinating?…when scientists reach a conclusion, it is merely a place to pause and rest for a moment, not a final destinationIt is important to emphasize that these are not merely different viewpoints, or different ways of defining terms. If we argued about which mountain is higher, Everest or K2, we might disagree about which kind of evidence is more reliable, but we would fundamentally agree on the notion of measurement. By contrast, in The Sixth Sense, the same evidence used by one paradigm to support its assertion is used with equal strength by the other paradigm as evidence in its favour. In the movie, Bruce Willis plays a psychologist who assumes that Osment must be a troubled youth. However, the fact that he says he sees ghosts is also evidence in favour of the existence of ghosts, if you do not reject out of hand the possibility of their existence. These two explanations are incommensurate. One cannot simply weigh all of the evidence because each side rejects the type of evidence that the other side accepts, and regards the alternative explanation not merely as wrong but as ridiculous or nonsensical. 
It is in this sense that a paradigm represents a failure of imagination—each side cannot imagine that the other explanation could possibly be true, or at least, plausible enough to warrant serious consideration.The failure of imagination means that each side fails to notice or to seek ‘objective'' evidence that would favour one explanation over the other. For example, during the episodes when Osment saw ghosts, the thermostat in the room fell precipitously and he could see his own breath. This certainly would seem to constitute objective evidence to favour the ghost explanation, and the fact that his mother had noticed that the heating in her apartment was erratic suggests that the temperature change was not simply another imagined symptom. But the mother assumed that the problem was in the heating system and did not even conceive that this might be linked to ghosts—so the ‘objective'' evidence certainly was not compelling or even suggestive on its own.Osment did succeed eventually in convincing his mother that he saw ghosts, and he did it in the same way that any scientist would convince his colleagues: namely, he produced evidence that made perfect sense in the context of one, and only one, explanation. First, he told his mother a secret that he said her dead mother had told him. This secret was about an incident that had occurred before he was born, and presumably she had never spoken of it, so there was no obvious way that he could have learned about it. Next, he told her that the grandmother had heard her say “every day” when standing near her grave. Again, the mother had presumably visited the grave alone and had not told anyone about the visit or about what was said. So, the mother was eventually convinced that Osment must have spoken with the dead grandmother after all. No other explanation seemed to fit all the facts.Is this the end of the story? We, the audience, realize that it is possible that Osment had merely guessed about the incidents, heard them second-hand from another relative or (as with professional psychics) might have retold his anecdotes whilst looking for validation from his mother. The evidence seems compelling only because these alternatives seem even less likely. It is in this same sense that when scientists reach a conclusion, it is merely a place to pause and rest for a moment, not a final destination.Near the end of the course, I gave a pop-quiz asking each student to give a ‘yes'' or ‘no'' answer, plus a short one-sentence explanation, to the following question: Donald Trump seems to be a wealthy businessman. He dresses like one, he has a TV show in which he acts like one, he gives seminars on wealth building and so on. Everything we know about him says that he is wealthy as a direct result of his business activities. On the basis of this evidence, are we justified in concluding that he is, in fact, a wealthy businessman?About half the class said that yes, if all the evidence points in one direction, that suffices. About half the class said ‘no'', the stated evidence is circumstantial and we do not know, for example, what his bank balance is or whether he has more debt than equity. All the evidence we know about points in one direction, but we might not know all the facts.Even when looked at carefully, not every anomaly is attractive enough or ‘ripe'' enough to be pursued when first noticedHow do we know whether or not we know all the facts? Again, it is a matter of imagination. Let us review a few possible alternatives. 
Maybe his wealth comes from inheritance rather than business acumen; or from silent partners; or from drug running. Maybe he is dangerously over-extended and living on borrowed money; maybe his wealth is more apparent than real. Maybe Trump Casinos made up the role of Donald Trump as its symbol, the way McDonald''s made up the role of Ronald McDonald?Several students complained that this was a ridiculous question. Yet I had posed this just after Bernard Madoff''s arrest was blanketing the news. Madoff was known as a billionaire investor genius for decades and had even served as the head of the Securities and Exchange Commission. As it turned out, his money was obtained by a massive Ponzi scheme. Why was Madoff able to succeed for so long? Because it was inconceivable that such a famous public figure could be a common con man and the people around him could not imagine the possibility that his livelihood needed to be scrutinized.To this point, I have emphasized the benefits of paying attention to anomalous, strange or unwelcome observations. Yet paradoxically, scientists often make progress by (provisionally) putting aside anomalous or apparently negative findings that seem to invalidate or distract from their hypothesis. When Rita Levi-Montalcini was assaying the neurite-promoting effects of tumour tissue, she had predicted that this was a property of tumours and was devastated to find that normal tissue had the same effects. Only by ‘ignoring'' this apparent failure could she move forward to characterize nerve growth factor and eventually understand its biology [4].Another classic example is Huntington disease—a genetic disorder in which an inherited alteration in the gene that encodes a protein, huntingtin, leads to toxicity within certain types of neuron and causes a progressive movement disorder associated with cognitive decline and psychiatric symptoms. Clinicians observed that the offspring of Huntington disease patients sometimes showed symptoms at an earlier age than their parents, and this phenomenon, called ‘genetic anticipation'', could affect successive generations at earlier and earlier ages of onset. This observation was met with scepticism and sometimes ridicule, as everything that was known about genetics at the time indicated that genes do not change across generations. Ascertainment bias was suggested as a much more probable explanation; in other words, once a patient is diagnosed with Huntington disease, their doctors will look at their offspring much more closely and will thus tend to identify the onset of symptoms at an earlier age. Eventually, once the detailed genetics of the disease were understood at the molecular level, it was shown that the structure of the altered huntingtin gene does change. Genetic anticipation is now an accepted phenomenon.…in fact, schools teach a lot about how to test hypotheses but little about how to find good hypotheses in the first placeWhat does this teach us about discovery? Even when looked at carefully, not every anomaly is attractive enough or ‘ripe'' enough to be pursued when first noticed. The biologists who identified the structure of the abnormal huntingtin gene did eventually explain genetic anticipation, although they set aside the puzzling clinical observations and proceeded pragmatically according to their (wrong) initial best-guess as to the genetics. The important thing is to move forward.Finally, let us consider the case of Grigori Perelman, an outstanding mathematician who solved the Poincaré Conjecture a few years ago. 
He did not tell anyone he was working on the problem, lest their ‘helpful advice'' discourage him; he posted his historic proof online, bypassing peer-reviewed journals altogether; he turned down both the Fields Medal and a million-dollar prize; and he has refused professorial posts at prestigious universities. Having made a deliberate decision to eschew the external incentives associated with science as a career, his choices have been written off as examples of eccentric anti-social behaviour. I suggest, however, that he might have simply recognized that the usual rules for success and the usual reward structure of the scientific community can create roadblocks, which had to be avoided if he was to solve a supposedly unsolvable problem.

If we cannot imagine new paradigms, then how can they ever be perceived, much less tested? It should be clear by now that the ‘process of scientific discovery'' can proceed by many different paths. However, here is one cognitive exercise that can be applied to almost any situation. (i) Notice a phenomenon, even if (especially if) it is familiar and regarded as a solved problem; regard it as if it is new and strange. In particular, look hard for anomalous and strange aspects of the phenomenon that are ignored by scientists in the field. (ii) Look for the hidden assumptions that guide scientists'' thinking about the phenomenon, and ask what kinds of explanation would be possible if the assumptions were false (or reversed). (iii) Make a list of possible alternative explanations, no matter how unlikely they seem to be. (iv) Ask if one of these explanations has particular appeal (for example, if it is the most elegant theoretically; if it can generalize to new domains; and if it would have great practical impact). (v) Ask what kind of evidence would allow one to favour that hypothesis over the others, and carry out experiments to test the hypothesis.

The process just outlined is not something that is taught in graduate school; in fact, schools teach a lot about how to test hypotheses but little about how to find good hypotheses in the first place. Consequently, this cognitive exercise is not often carried out within the brain of an individual scientist. Yet this creative tension happens naturally when investigators from two different fields, who have different assumptions, methods and ways of working, meet to discuss a particular problem. This is one reason why new paradigms so often emerge in the cross-fertilization of different disciplines.

There are of course other, more systematic ways of searching for hypotheses by bringing together seemingly unrelated evidence. The Arrowsmith two-node search strategy [5], for instance, is based on distinct searches of the biomedical literature to retrieve articles on two different areas of science that have not been studied in relation to each other, but that the investigator suspects might be related in some fashion. The software identifies common words or phrases, which might point to meaningful links between them (a minimal sketch of this idea follows this entry). This is but one example of ‘literature-based discovery'' as a heuristic technique [6], and in turn, is part of the larger data-driven approach of ‘text mining'' or ‘data mining'', which looks for unusual, new or unexpected patterns within large amounts of observational data. Regardless of whether one follows hypothesis-driven or data-driven models of investigation, let us teach our students to repeat the mantra: ‘odd is good''!

Neil R Smalheiser  相似文献
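To make the two-node idea concrete, here is a minimal sketch, not the Arrowsmith software itself: two hypothetical sets of one-line 'abstracts', loosely echoing the classic magnesium and migraine example from literature-based discovery, are tokenized, and the informative terms shared by both literatures are ranked as candidate linking concepts. The sample texts, the stopword list and the frequency score are invented for illustration only; the real tool presumably applies far more careful filtering and ranking, but the core operation it describes is this intersection of vocabularies from two otherwise unconnected literatures.

import re
from collections import Counter

# Hypothetical stand-ins for two disjoint literature searches (invented snippets, not real abstracts):
# literature_a on dietary magnesium, literature_b on migraine.
literature_a = [
    "Dietary magnesium deficiency alters vascular tone and platelet aggregation",
    "Magnesium modulates calcium channel activity and vascular smooth muscle tone",
]
literature_b = [
    "Migraine attacks are associated with changes in vascular tone and cortical spreading depression",
    "Platelet aggregation and calcium channel dysfunction have been reported in migraine patients",
]

STOPWORDS = {"and", "are", "in", "the", "with", "have", "been", "on", "of"}  # minimal illustrative stoplist

def term_counts(docs):
    # Lower-case, tokenize and count informative terms across one document set.
    counts = Counter()
    for doc in docs:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token not in STOPWORDS and len(token) > 3:
                counts[token] += 1
    return counts

counts_a = term_counts(literature_a)
counts_b = term_counts(literature_b)

# Candidate linking terms: words that occur in both literatures, ranked by combined frequency.
shared_terms = {t: counts_a[t] + counts_b[t] for t in counts_a.keys() & counts_b.keys()}
for term, score in sorted(shared_terms.items(), key=lambda item: -item[1]):
    print(term, score)

Running this prints shared terms such as 'vascular', 'tone', 'platelet', 'aggregation', 'calcium' and 'channel': the kind of overlap an investigator would then inspect for biologically meaningful links between the two literatures.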

17.
Martinson BC 《EMBO reports》2011,12(8):758-762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic ‘reproduction rates'' that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced or concerns about unintended consequences that may result from such overproduction. Yet recognizing that resources are finite, it seems reasonable to ask at what level competition for resources is productive, and at what level it becomes counter-productive.

Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.

The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA as it is currently configured supports a feedback system of institutional incentives that generate excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training.
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.

The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to ‘play it safe'' and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are likely able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding. In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.

My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.

In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html).
Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.

Early last year, Francis Collins, Director of the NIH, published a PolicyForum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular. He notes the increasing stress in the biomedical research community after the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.”

Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows. He observes that “only a select few will go on to become independent research scientists in academia”, and argues that “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009).

His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work''”.
He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.”

Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce.

Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes. They would also ignore the fact that universities are generally not confronted with the externalities resulting from overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural'' funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on the interconnected assumptions that, as one of the primary ‘outputs'' or ‘products'' of the university, more doctorally trained individuals are always better than fewer, and because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research. However, it has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007). Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).

In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995).
He noted the reliance of AMCs on the relatively low-cost, but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society''s investment in biomedical research and clinical care was continuously and sharply expanding.” From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J. de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963).

In May 2009, echoing some of Korn''s observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control''” (Mervis, 2009).

Although the issue of what I will call the ‘academic birth rate'' is the central concern of this analysis, the ‘academic end-of-life'' also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: “if I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support “the 100 people I''ve trained […] to replace me” (Kaiser, 2008). While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective.
The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.

Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d''être.

The production of knowledge in science, particularly of the ‘revolutionary'' variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task. In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).

Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the “New Innovator Award,” the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science.
Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new ‘innovators'' reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms.

Our studies on research integrity have been mostly oriented towards understanding how the influences within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010). My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness, in terms of scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology with steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal'' science? Would this not reduce the effectiveness of the institution of biomedical research? I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research is one way to insure against it.  相似文献

18.
Rinaldi A 《EMBO reports》2012,13(4):303-307
Scientists and journalists try to engage the public with exciting stories, but who is guilty of overselling research and what are the consequences?

Scientists love to hate the media for distorting science or getting the facts wrong. Even as they do so, they court publicity for their latest findings, which can bring a slew of media attention and public interest. Getting your research into the national press can result in great boons in terms of political and financial support. Conversely, when scientific discoveries turn out to be wrong, or to have been hyped, the negative press can have a damaging effect on careers and, perhaps more importantly, the image of science itself. Walking the line between ‘selling'' a story and ‘hyping'' it far beyond the evidence is no easy task. Professional science communicators work carefully with scientists and journalists to ensure that the messages from research are translated for the public accurately and appropriately. But when things do go wrong, is it always the fault of journalists, or are scientists and those they employ to communicate sometimes equally to blame?

Hyping in science has existed since the dawn of research itself. When scientists relied on the money of wealthy benefactors with little expertise to fund their research, the temptation to claim that they could turn lead into gold, or that they could discover the secret of eternal life, must have been huge. In the modern era, hyping of research tends to make less exuberant claims, but it is no less damaging and no less deceitful, even if sometimes unintentionally so. A few recent cases have brought this problem to the surface again.

The most frenzied of these was the report in Science last year that a newly isolated bacterial strain could replace phosphate with arsenate in cellular constituents such as nucleic acids and proteins [1]. The study, led by NASA astrobiologist Felisa Wolfe-Simon, showed that a new strain of the Halomonadaceae family of halophilic proteobacteria, isolated from the alkaline and hypersaline Mono Lake in California (Fig 1), could not only survive in arsenic-rich conditions, such as those found in its original environment, but even thrive by using arsenic entirely in place of phosphorus. “The definition of life has just expanded. As we pursue our efforts to seek signs of life in the solar system, we have to think more broadly, more diversely and consider life as we do not know it,” commented Ed Weiler, NASA''s associate administrator for the Science Mission Directorate at the agency''s Headquarters in Washington, in the original press release [2].

Figure 1: Sunrise at Mono Lake. Mono Lake, located in eastern California, is bounded to the west by the Sierra Nevada mountains. This ancient alkaline lake is known for unusual tufa (limestone) formations rising from the water''s surface, as well as for its hypersalinity and high concentrations of arsenic. See Wolfe-Simon et al [1]. Credit: Henry Bortman.

The accompanying “search for life beyond Earth” and “alternative biochemistry makeup” hints contained in the same release were lapped up by the media, which covered the breakthrough with headlines such as “Arsenic-loving bacteria may help in hunt for alien life” (BBC News), “Arsenic-based bacteria point to new life forms” (New Scientist), “Arsenic-feeding bacteria find expands traditional notions of life” (CNN).
However, it did not take long for criticism to manifest, with many scientists openly questioning whether background levels of phosphorus could have fuelled the bacteria''s growth in the cultures, whether arsenate compounds are even stable in aqueous solution, and whether the tests the authors used to prove that arsenic atoms were replacing phosphorus ones in key biomolecules were accurate. The backlash was so bitter that Science published the concerns of several research groups commenting on the technical shortcomings of the study and went so far as to change its original press release for reporters, adding a warning note that reads “Clarification: this paper describes a bacterium that substitutes arsenic for a small percentage of its phosphorus, rather than living entirely off arsenic.”

Microbiologists Simon Silver and Le T. Phung, from the University of Illinois, Chicago, USA, were heavily critical of the study, voicing their concern in one of the journals of the Federation of European Microbiological Societies, FEMS Microbiology Letters. “The recent online report in Science […] either (1) wonderfully expands our imaginations as to how living cells might function […] or (2) is just the newest example of how scientist-authors can walk off the plank in their imaginations when interpreting their results, how peer reviewers (if there were any) simply missed their responsibilities and how a press release from the publisher of Science can result in irresponsible publicity in the New York Times and on television. We suggest the latter alternative is the case, and that this report should have been stopped at each of several stages” [3]. Meanwhile, Wolfe-Simon is looking for another chance to prove she was right about the arsenic-loving bug, and Silver and colleagues have completed the bacterium''s genome shotgun sequencing and found 3,400 genes in its 3.5 million bases (www.ncbi.nlm.nih.gov/Traces/wgs/?val=AHBC01).

“I can only comment that it would probably be best if one had avoided a flurry of press conferences and speculative extrapolations. The discovery, if true, would be similarly impressive without any hype in the press releases,” commented John Ioannidis, Professor of Medicine at Stanford University School of Medicine in the USA. “I also think that this is the kind of discovery that can definitely wait for a validation by several independent teams before stirring the world. It is not the type of research finding that one cannot wait to trumpet as if thousands and millions of people were to die if they did not know about it,” he explained. “If validated, it may be material for a Nobel prize, but if not, then the claims would backfire on the credibility of science in the public view.”

Another instructive example of science hyping was sparked by a recent report of fossil teeth, dating to between 200,000 and 400,000 years ago, which were unearthed in the Qesem Cave near Tel Aviv by Israeli and Spanish scientists [4]. Although the teeth cannot yet be conclusively ascribed to Homo sapiens, Homo neanderthalensis, or any other species of hominid, the media coverage and the original press release from Tel Aviv University stretched the relevance of the story—and the evidence—proclaiming that the finding demonstrates humans lived in Israel 400,000 years ago, which should force scientists to rewrite human history.
Were such evidence of modern humans in the Middle East so long ago confirmed, it would indeed clash with the prevailing view of human origin in Africa some 200,000 years ago and the dispersal from the cradle continent that began about 70,000 years ago. But, as freelance science writer Brian Switek has pointed out, “The identity of the Qesem Cave humans cannot be conclusively determined. All the grandiose statements about their relevance to the origin of our species reach beyond what the actual fossil material will allow” [5].

An example of sensationalist coverage? “It has long been believed that modern man emerged from the continent of Africa 200,000 years ago. Now Tel Aviv University archaeologists have uncovered evidence that Homo sapiens roamed the land now called Israel as early as 400,000 years ago—the earliest evidence for the existence of modern man anywhere in the world,” reads a press release from the New York-based organization, American Friends of Tel Aviv University [6].

“The extent of hype depends on how people interpret facts and evidence, and their intent in the claims they are making. Hype in science can range from ‘no hype'', where predictions of scientific futures are 100% fact based, to complete exaggeration based on no facts or evidence,” commented Zubin Master, a researcher in science ethics at the University of Alberta in Edmonton, Canada. “Intention also plays a role in hype and the prediction of scientific futures, as making extravagant claims, for example in an attempt to secure funds, could be tantamount to lying.”

Are scientists more and more often indulging in creative speculation when interpreting their results, just to get extraordinary media coverage of their discoveries? Is science journalism progressively shifting towards hyping stories to attract readers?

“The vast majority of scientific work can wait for some independent validation before its importance is trumpeted to the wider public. Over-interpretation of results is common and as scientists we are continuously under pressure to show that we make big discoveries,” commented Ioannidis. “However, probably our role [as scientists] is more important in making sure that we provide balanced views of evidence and in identifying how we can question more rigorously the validity of our own discoveries.”

Stephanie Suhr, who is involved in the management of the European XFEL—a facility being built in Germany to generate intense X-ray flashes for use in many disciplines—notes in her introduction to a series of essays on the ethics of science journalism that, “Arguably, there may also be an increasing temptation for scientists to hype their research and ‘hit the headlines''” [7]. In her analysis, Suhr quotes at least one instance—the discovery in 2009 of the Darwinius masillae fossil, presented as the missing link in human evolution [8]—in which the release of a ‘breakthrough'' scientific publication seems to have been coordinated with simultaneous documentaries and press releases, resulting in what can be considered a case study in science hyping [7].

Although there is nothing wrong in principle with a broad communication strategy aimed at the rapid dissemination of a scientific discovery, some caveats exist. “[This] strategy […] might be better applied to a scientific subject or body of research.
When applied to a single study, there [is] a far greater likelihood of engaging in unmerited hype with the risk of diminishing public trust or at least numbing the audience to claims of ‘startling new discoveries'',” wrote science communication expert Matthew Nisbet in his Age of Engagement blog (bigthink.com/blogs/age-of-engagement) about how media communication was managed in the Darwinius affair. “[A]ctivating the various channels and audiences was the right strategy but the language and metaphor used strayed into the realm of hype,” Nisbet, who is an Associate Professor in the School of Communication at American University, Washington DC, USA, commented in his post [9]. “We are ethically bound to think carefully about how to go beyond the very small audience that follows traditional science coverage and think systematically about how to reach a wider, more diverse audience via multiple media platforms. But in engaging with these new media platforms and audiences, we are also ethically bound to avoid hype and maintain accuracy and context” [9].

But the blame for science hype cannot be laid solely at the feet of scientists and press officers. Journalists must take their fair share of reproach. “As news online comes faster and faster, there is an enormous temptation for media outlets and journalists to quickly publish topics that will grab the readers'' attention, sometimes at the cost of accuracy,” Suhr wrote [7]. Of course, the media landscape is extremely varied, as science blogger and writer Bora Zivkovic pointed out. “There is no unified thing called ‘Media''. There are wonderful specialized science writers out there, and there are beat reporters who occasionally get assigned a science story as one of several they have to file every day,” he explained. “There are careful reporters, and there are those who tend to hype. There are media outlets that value accuracy above everything else; others that put beauty of language above all else; and there are outlets that value speed, sexy headlines and ad revenue above all.”

One notable example of media-sourced hype comes from J. Craig Venter''s announcement in the spring of 2010 of the first self-replicating bacterial cell controlled by a synthetic genome (Fig 2). A major media buzz ensued, over-emphasizing and somewhat distorting an anyway remarkable scientific achievement. Press coverage ranged from the extremes of announcing ‘artificial life'' to saying that Venter was playing God, adding to cultural and bioethical tension the warning that synthetic organisms could be turned into biological weapons or cause environmental disasters.

Figure 2: Schematic depicting the assembly of a synthetic Mycoplasma mycoides genome in yeast. For details of the construction of the genome, please see the original article. From Gibson et al [13] Science 329, 52–56. Reprinted with permission from AAAS.

“The notion that scientists might some day create life is a fraught meme in Western culture. One mustn''t mess with such things, we are told, because the creation of life is the province of gods, monsters, and practitioners of the dark arts.
Thus, any hint that science may be on the verge of putting the power of creation into the hands of mere mortals elicits a certain discomfort, even if the hint amounts to no more than distorted gossip,” remarked Rob Carlson, who writes on the future role of biology as a human technology, about the public reaction and the media frenzy that arose from the news [10].

Yet the media can also behave responsibly when faced with extravagant claims in press releases. Fiona Fox, Chief Executive of the Science Media Centre in the UK, details such an example in her blog, On Science and the Media (fionafox.blogspot.com). The Science Media Centre''s role is to facilitate communication between scientists and the press, so they often receive calls from journalists asking to be put in touch with an expert. In this case, the journalist asked for an expert to comment on a story about silver being more effective against cancer than chemotherapy. A wild claim; yet, as Fox points out in her blog, the hype came directly from the institution''s press office: “Under the heading ‘A silver bullet to beat cancer?'' the top line of the press release stated that ‘Lab tests have shown that it (silver) is as effective as the leading chemotherapy drug—and may have far fewer side effects.'' Far from including any caveats or cautionary notes up front, the press office even included an introductory note claiming that the study ‘has confirmed the quack claim that silver has cancer-killing properties''” [11]. Fox praises the majority of the UK national press that concluded that this was not a big story to cover, pointing out that, “We''ve now got to the stage where not only do the best science journalists have to fight the perverse news values of their news editors but also to try to read between the lines of overhyped press releases to get to the truth of what a scientific study is really claiming.”

Yet, is hype detrimental to science? In many instances, the concern is that hype inflates public expectations, resulting in a loss of trust in a given technology or research avenue if promises are not kept; however, the premise is not fully proven (Sidebar A). “There is no empirical evidence to suggest that unmet promises due to hype in biotechnology, and possibly other scientific fields, will lead to a loss of public trust and, potentially, a loss of public support for science. Thus, arguments made on hype and public trust must be nuanced to reflect this understanding,” Master pointed out.

Sidebar A | Up and down the hype cycle

Although hype is usually considered a negative and largely unwanted aspect of scientific and technological communication, it cannot be denied that emphasizing, at least initially, the benefits of a given technology can further its development and use. From this point of view, hype can be seen as a normal stage of technological development, within certain limits. The maturity, adoption and application of specific technologies apparently follow a common trend pattern, described by the information technology company, Gartner, Inc., as the ‘hype cycle''. The idea is based on the observation that, after an initial trigger phase, novel technologies pass through a peak of over-excitement (or hype), often followed by a subsequent general disenchantment, before eventually coming under the spotlight again and reaching a stable plateau of productivity. Thus, hype cycles “[h]ighlight overhyped areas against those that are high impact, estimate how long technologies and trends will take to reach maturity, and help organizations decide when to adopt” (www.gartner.com).

“Science is a human endeavour and as such it is inevitably shaped by our subjective responses. Scientists are not immune to these same reactions and it might be valuable to evaluate the visibility of different scientific concepts or technologies using the hype cycle,” commented Pedro Beltrao, a cellular biologist at the University of California San Francisco, USA, who runs the Public Rambling blog (pbeltrao.blogspot.com) about bioinformatics science and technology. The exercise of placing technologies in the context of the hype cycle can help us to distinguish between their real productive value and our subjective level of excitement, Beltrao explained. “As an example, I have tried to place a few concepts and technologies related to systems biology along the cycle''s axis of visibility and maturity [see illustration]. Using this, one could suggest that technologies like gene-expression arrays or mass-spectrometry have reached a stable productivity level, while the potential of concepts like personalized medicine or genome-wide association studies (GWAS) might be currently over-valued.”

Together with bioethicist colleague David Resnik, Master has recently highlighted the need for empirical research that examines the relationships between hype, public trust, and public enthusiasm and/or support [12]. Their argument proposes that studies on the effect of hype on public trust can be undertaken by using both quantitative and qualitative methods: “Research can be designed to measure hype through a variety of sources including websites, blogs, movies, billboards, magazines, scientific publications, and press releases,” the authors write. “Semi-structured interviews with several specific stakeholders including genetics researchers, media representatives, patient advocates, other academic researchers (that is, ethicists, lawyers, and social scientists), physicians, ethics review board members, patients with genetic diseases, government spokespersons, and politicians could be performed. Also, members of the general public would be interviewed” [12].
They also point out that such an approach to estimate hype and its effect on public enthusiasm and support should carefully define the public under study, as different publics might have different expectations of scientific research, and will therefore have different baseline levels of trust.

Ultimately, exaggerating, hyping or outright lying is rarely a good thing. Hyping science is detrimental to various degrees to all science communication stakeholders—scientists, institutions, journalists, writers, newspapers and the public. It is important that scientists take responsibility for their share of the hyping done and do not automatically blame the media for making things up or getting things wrong. Such discipline in science communication is increasingly important as science searches for answers to the challenges of this century. Increased awareness of the underlying risks of over-hyping research should help to balance the scientific facts with speculation on the enticing truths and possibilities they reveal. The real challenge lies in favouring such an evolved approach to science communication in the face of a rolling 24-hour news cycle, tight science budgets and the uncontrolled and uncontrollable world of the Internet.

Figure: The hype cycle for the life sciences. Pedro Beltrao''s view of the excitement–disappointment–maturation cycle of bioscience-related technologies and/or ideas. GWAS: genome-wide association studies. Credit: Pedro Beltrao.  相似文献

19.
Some view social constructivism as a threat to the unique objectivity of science in describing the world. But social constructivism merely observes the process of science and can offer ways for science to regain public esteem.

Political groups, civil organizations, the media and private citizens increasingly question the validity of scientific findings about challenging issues such as global climate change, and actively resist the application of new technologies, such as GM crops. By using new communication technologies, these actors can reach out to many people in real time, which gives them a huge advantage over the traditional, specialist and slow communication of scientific research through peer-reviewed publications. They use emotive stories with a narrow focus, facts and accessible language, making them often, at least in the eyes of the public, more credible than scientific experts. The resulting strength of public opinion means that scientific expertise and validated facts are not always the primary basis for decision-making by policy-makers about issues that affect society and the environment.

The scientific community has decried this situation not only as a crisis of public trust in experts but more so as a loss of trust in scientific objectivity. The reason for this development, some claim, is a postmodernist perception of science as a social construction [1]. This view claims that context—in other words society—determines the acceptance of a scientific theory and the reliability of scientific facts. This is in conflict with the more traditional view held by most scientists, that experimental evidence, analysis and validation by scientific means are the instruments to determine truth. ‘Social constructivism'', as this postmodernist view on science has been called, challenges the ‘traditional'' view of science: that it is an objective, experiment-based approach to collect evidence that results in a linear accumulation of knowledge, leading to reliable, scientifically proven facts and trust in the role of experts.

However, constructivists maintain that society and science have always influenced one another, thereby challenging the notion that science is objective and only interested in uncovering the truth. Moderate social constructivism merely acknowledges a controversy and attempts to provide answers. The extreme interpretation of this approach sustains that all facts and all parties—no matter how absurd or unproven their ‘facts'' and claims—should be treated equally, without any consideration for their interests [2].

The truth might actually be somewhere in the middle, between taking scientific results as absolute truths at one extreme, and requiring that all facts and all actors should be given equal attention and consideration at the other. What is needed, however, is a closer connection and mutual appreciation between science and society, especially when it comes to science policy and making decisions that require scientific expertise. To claim that all perspectives are equally important when there is a lack of absolute facts—leading to an ‘all truths are equal'' approach to decision-making—is surely ridiculous. Nonetheless, societies are highly complex and sufficient facts are often not available when policy-makers and regulatory bodies have to make a decision.
The aim of this essay is to argue that social construction and scientific objectivity can coexist and even benefit from one another.

The question is whether social constructivism really caused a crisis of objectivity and a change in the traditional view of science. A main characteristic of the traditional view is that science progresses in isolation from any societal influences. However, there are historical and contemporary examples of how social mores influence the acceptability of certain areas of research, the direction of scientific research and even the formation of a scientific consensus—or in the words of Thomas Kuhn, of a scientific paradigm.

Arrival at a scientific consensus driven by non-scientific factors will probably happen in a new research field when there is insufficient scientific information or knowledge to make precise claims. As such, societal factors can become determinants in settling disputes, at least until more information emerges. Religious and ethical beliefs have had such an impact on science throughout history. One could argue, for example, that the focus on research into induced pluripotent stem cells and the potency of adult stem cells is driven, at least in part, by religious and ethical objections to using human embryonic stem cells. Similarly, the near universal consensus that scientists should not clone humans is not based on scientific reason, but on social, religious and ethical arguments.

Another example of the influence of non-scientific values on the establishment of a scientific consensus comes from the field of artificial intelligence. In the 1960s, a controversy erupted between the proponents of symbolic processing—led by Marvin Minsky—and the proponents of neural nets—who had been led by the charismatic Frank Rosenblatt. The publication of a book by Minsky and Seymour Papert, which concluded that progress in neural networks faced insurmountable limitations, coincided with the unfortunate death of Rosenblatt and massive funding from the US Department of Defense through the Defense Advanced Research Projects Agency (DARPA) for projects on symbolic processing. DARPA''s decision to ignore neural networks—because they could not foresee any immediate military applications—convinced other funding agencies to avoid the field and blocked research on neural nets for a decade. This has become known as the first artificial intelligence winter [3]. The military, in particular, has often had a major influence on setting the direction of scientific research. The atomic bomb, radar and the first computers are just some examples of how military interests drove scientific progress and its application.

The traditional perception of science also supposes a gradual and linear accumulation of scientific knowledge. Whilst the gradual part remains undisputed, scientific progress is not linear. Theories are proposed, discussed, rejected, accepted, sometimes forgotten, rediscovered and reborn with modifications as part of an ever-changing network of scientific facts and knowledge. Gregor Mendel discovered the laws of inheritance in 1865, but his findings received scant attention until their rediscovery in the early 1900s by Carl Correns and Erich von Tschermak. Ignaz Semmelweis, a Hungarian obstetrician, developed the theory that puerperal fever or childbed fever is mainly transmitted by the poor hygiene of doctors before assisting in births.
He observed that when doctors washed their hands with a chlorine solution before obstetric consultations, deaths in obstetrics wards were drastically reduced. The medical community ridiculed Semmelweis at the time, but the development of Louis Pasteur''s germ theory of disease eventually vindicated him [4].

Another challenge to the traditional view of science is the claim that scientific facts are constructed. This does not necessarily imply that they are false: it acknowledges the process of independently conducted experiments, ‘trial and error'' approaches, collaborations and discussions, to establish a final consensus that then becomes scientific fact. Critics of constructivism claim that viewing scientific discovery this way opens the gate to non-scientific influences and arguments, thereby undermining factuality. However, without consensus on the importance of a discovery, no fact is sufficient to change or establish a scientific theory. In fact, classical peer review treats scientific discoveries as constructions essentially by taking apart the proposed fact, analysing the process of its determination and, based on the evidence, accepting or rejecting it.

Ultimately, then, it seems that social constructivism itself is not the sole or most important factor for changing the traditional view of science. Social, religious and ethical values have always influenced human endeavours, and science is no exception. Yet, there is one aspect of traditional science for which constructivism only has the role of an observer: public trust in scientific experts. Societies can resist the introduction of new technologies owing to their potential risks. Traditionally, the potential victims of such hazards—consumers, affected communities and the environment—had no input into either the risk-assessment process, or the decisions that were made on the basis of the assessment.

The difficulty is that postmodern societies tend to perceive certain risks as greater compared with how they were viewed by modern or premodern societies, ostensibly and partly because of globalization and better communication [5]. As a result, the evaluation of risk increasingly takes into account political considerations. Each stakeholder inevitably defines risks and their acceptability according to their own agenda, and brings their own cadre of experts and evidence to support their claims. As such, the role of unbiased experts is undermined not only because they are similarly accused of having their own agenda, but also because the line between experts and non-experts is redrawn [5]. In addition, the internet and other communication technologies have unprecedentedly empowered non-expert users to broadcast their opinions. The emergence of so-called ‘pseudo-experts'', enabled by “the cult of the amateur” [6], further challenges the position of scientific experts. Trust is no longer a given for anyone, and even when people trust science, it is not lasting, and has to be earned for new information. This erosion of trust cannot be blamed entirely on the “cult of the amateur”. The German sociologist Ulrich Beck argued that when scientists make recommendations to society on how to deal with risks, they inevitably make assumptions that are embedded in cultural values, moving into a social and cultural sphere without assessing the public view of those values.
Scientists thus presuppose a certain set of social and cultural values and judge everything that conflicts with that set as irrational [5].

Regardless of how trust in expertise was eroded and how pseudo-experts have filled the gap, the main issue is how to assess the implications of scientific results and new technologies, and how to manage any risks that they entail. To gain and maintain trust, decision-making must consider stakeholder involvement and public opinion. However, when public participation attempts to accommodate an increasing number of stakeholders, it raises the difficult question of who should be involved, either as part of the administrative process or as producers of knowledge [7,8]. A growing number of participants in decision-making and a growing amount of information can result in conflicting perspectives, different perceptions of facts and even half-truths or half-lies when information is unavailable, incomplete or poorly explained. There is no dominant perspective and all evidence seems subjective. This is the nightmare scenario in which 'all truths are equal'.

It is important to point out that the constructivist perspective on the interactions between science and society is not an attempt to impose a particular world-view; it is merely an attempt to understand the mechanisms of these interactions. It attempts to explain, for example, why anti-GMO activists destroy experimental field trials without any scientific proof of the harm of such experiments. Moreover, constructivism does not attempt to destroy the credibility of science, nor to overemphasize alternative knowledge, but to offer possibilities for wider participation in policy-making, especially in contentious cases when the lines between the public and experts are no longer clear [8]. In this situation, expert knowledge is not meant to be replaced by non-expert knowledge, but to be enriched by it.

Nonetheless, the main question is whether scientific objectivity can prevail when science meets society. The answer should be yes. Even when several seemingly valid perspectives persist, objective facts are and should be the foundation of the decisions taken. Scientific facts do matter, and there are objective frameworks in place to prove or disprove the validity of information. Yet, in settling disputes, the decision must also be accountable in order to prevent loss of trust. By establishing frameworks for inclusive discussion and acknowledging the role of non-expert knowledge, either by indicating areas of public concern or by improving the communication of scientific facts, consent, and thus support for the decision, can be achieved.

Moreover, scientific facts are important, but they are only part of an informational package. In particular, the choice of words and the style of writing can become more important than the factual content of a message. Scientists cannot communicate with the wider public in scientific jargon and then expect unconditional trust; people tend to mistrust what they cannot understand. To be part of a decision-making process, members of the public need access to scientific information presented in an understandable manner. The core issue is communication or, more specifically, translation: explaining facts and findings with the receiver and context in mind, and adapting the message and language accordingly.
Scientists must therefore translate their work. Equally importantly, they must do this proactively to take advantage of social constructivism and its view of science: by understanding how controversies around new scientific discoveries and scientific expertise arise, they can devise better communication strategies.

Some examples show how better interaction between science and society, such as the involvement of more stakeholders and the use of appropriate language in communication, can raise awareness and the acceptability of previously contentious technologies. In Burkina Faso in 1999, Monsanto partnered with Africare to provide farmers with GM cotton to address pest resistance to pesticides and to increase yields. The plan was originally met with suspicion from the public and from public research institutes, but the partners managed to build trust among the different stakeholders by providing transparent and correct information. The project started as a public–private partnership. Because the partners were open about their motives, including profit-making, and acknowledged and discussed potential risks, the project gradually achieved the full support of the main partners [9]. Another challenge was the relationship between scientists and journalists. By using scientific communicators who were both open to dialogue and careful to keep the discussion within scientific boundaries, the relationship with the press improved [10]. In this case, efforts to translate scientific knowledge included transparency of information and contextualizing its delivery, as well as increasingly wide participation of stakeholders in the development and commercialization of GM cotton.

When the Philippines, the first Asian country to adopt a GM food crop, approved Bt maize, environmental NGOs and the Catholic Church opposed the crop with regular protests. These slowly dissipated as farmers gradually adopted Bt maize [11] and the media focused less on sensationalist stories [12]. Between 2000 and 2009, media coverage contributed substantially to a mostly positive (41%) or neutral (38%) public perception of biotechnology in the Philippines [12]. Most newspaper reports focused on the public accountability of biotechnology governance and analysed the validity of scientific information, together with the way in which conflicts in biotechnology research were managed. Science writers translated scientific facts into language that the wider public could understand. In addition, sources in which the public placed trust, either scientists or environmentalists, were cited in the media, which helped to facilitate public discussion [12]. In this case, the efforts of science writers to provide balanced, well-informed coverage, as well as a platform for public discussion, effectively translated the scientific facts and improved public opinion of Bt maize.

Constructivism is not a threat to science. It is a concept that looks at the components and processes through which a scientific theory or fact emerges; it is not an alternative to those processes.
In fact, scientists should consider embracing constructivism, not only to understand what happens to the products of their labour beyond the laboratory, but also to understand the forces that determine the fate of scientific developments. We live in a complex world in which individual actors are empowered by modern communication tools. This might make it more challenging to establish and maintain scientific objectivity, but it does not make objectivity unnecessary: public decision-making requires an objective factual basis for all decisions concerning the use of scientific discoveries in society. If scientists want to prevent their messages from being misunderstood or hijacked for political purposes, they should consider proactively translating their research for a wider audience themselves, in an inclusive and contextualized manner.

Monica Racovita  相似文献

20.