1.
Martinson BC. EMBO reports 2011, 12(8):758–762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic ‘reproduction rates’ that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced, or concerns about the unintended consequences that may result from such overproduction. Yet recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level it becomes counter-productive.

Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.

The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA, as it is currently configured, supports a feedback system of institutional incentives that generates excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training.
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.

The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to ‘play it safe’ and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness into the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in National Institutes of Health (NIH) policy that allows only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers can probably distinguish the top 20% of research applications from the rest, but that distinguishing the top 5% or 10% within that top 20% demands a level of precision from reviewers that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding. In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.

My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.

In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html).
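The RFA itself does not prescribe what such a model should look like. Purely as an illustration of what a ‘systems-based’ approach might mean here—a toy stock-and-flow sketch with invented parameters, not anything drawn from the RFA—one could track how trainees move between career stages:

```python
# Toy stock-and-flow model of a biomedical workforce pipeline.
# All rates and starting values are hypothetical placeholders, not estimates.

def simulate(years=30, phd_intake=1.0):
    grads, postdocs, faculty = 10.0, 8.0, 5.0   # stocks, in arbitrary units
    for _ in range(years):
        finishing = grads / 6.0        # assume a ~6-year average PhD
        exiting = postdocs / 4.0       # assume a ~4-year average postdoc
        retiring = 0.02 * faculty      # assume 2% of faculty retire each year
        # Faculty openings are set by retirements, not by trainee supply,
        # so exiting postdocs who are not hired leave academia.
        hired = min(exiting, retiring)
        grads += phd_intake - finishing
        postdocs += finishing - exiting
        faculty += hired - retiring
    return round(grads, 1), round(postdocs, 1), round(faculty, 1)

print(simulate())                 # baseline intake
print(simulate(phd_intake=1.5))   # 50% higher intake: the postdoc pool swells
```

Even a toy of this kind makes the central feedback visible: because faculty openings are governed by retirements rather than by the supply of trainees, raising PhD intake mostly enlarges the postdoc holding pattern rather than the faculty ranks.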
Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.

Early last year, Francis Collins, Director of the NIH, published a Policy Forum article in Science voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular. He notes the increasing stress in the biomedical research community after the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.”

Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows. He observes that “only a select few will go on to become independent research scientists in academia”, and argues that “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009). His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work’”.
He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.”

Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce.

Labour economists might argue that market forces will always balance the number of individuals with doctorates against the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes. They would also ignore the fact that universities are generally not confronted with the externalities resulting from the overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural’ funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on two interconnected assumptions: that, as one of the primary ‘outputs’ or ‘products’ of the university, more doctorally trained individuals are always better than fewer; and that, because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research.

However, this business model has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007). Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).

In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995).
He noted the reliance of AMCs on the relatively low-cost but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society's investment in biomedical research and clinical care was continuously and sharply expanding.” From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8% after adjustment for inflation; the comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). At 7.8% a year, inflation-adjusted funding doubles roughly every nine years; at 3.4%, doubling takes about 21 years. These observations resonate with the now classic observation by Derek J. de Solla Price, made more than 30 years earlier, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963).

In May 2009, echoing some of Korn's observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control’” (Mervis, 2009).

Although the issue of what I will call the ‘academic birth rate’ is the central concern of this analysis, the ‘academic end-of-life’ also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: “if I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support “the 100 people I've trained […] to replace me” (Kaiser, 2008). While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective.
The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.

Some have looked at the current situation with concern primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d'être.

The production of knowledge in science, particularly of the ‘revolutionary’ variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task. In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).

Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the “New Innovator Award”, the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small proportion of the NIH budget seems unlikely to generate a large amount of more innovative science.
Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new ‘innovators’ reach the end of this specialty funding and join the rank and file of those scrapping for funds through the standard mechanisms.

Our studies on research integrity have been mostly oriented towards understanding how the environments within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010). My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging the selection of characteristics that might not lead to optimal effectiveness in terms of scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology of steep competition that is fraught with high risks of career failure for young scientists—after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree—why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal’ science? Would this not reduce the effectiveness of the institution of biomedical research? I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions, in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research, is one way to insure against it.

2.
Bornmann L. EMBO reports 2012, 13(8):673–676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds in scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the Endless Frontier, and it has become the underlying rationale for public support and funding of science.

However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned the development both of measures to assess the quality of scientific research itself, and of measures to determine its societal impact. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research. The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so: they are harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex's Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality but that has had “no measurable impact on health” [3]. He contrasts this with research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.

The problem actually begins with defining the ‘societal impact of research’.
A series of different concepts has been introduced: ‘third-stream activities’ [4], ‘societal benefits’ or ‘societal quality’ [5], ‘usefulness’ [6], ‘public values’ [7], ‘knowledge transfer’ [8] and ‘societal relevance’ [9,10]. Yet each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.

In this context, ‘societal benefits’ refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits’ are those that add to the cultural capital of a nation, for example by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits’ add to the natural capital of a nation by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits’ increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].

Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or the journal impact factor [13] (a minimal computation of the former is sketched after this passage). There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies the societal impact of research has been postulated rather than demonstrated [14]. For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.

A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem: it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse, complex and contingent, and it is not clear what should be attributed to research and what to other inputs. The third is the internationality problem, which arises from the international nature of R&D and innovation and makes attribution virtually impossible.
Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research yielding only short-term benefits, ignoring potential long-term impact.

In addition, there are four other problems. First, it is hard to find experts for peer evaluation of societal impact. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact from that of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4,17]. Third, societal impact measurement should take into account that there is not just one model of a successful research institution; assessment should therefore be adapted to the institution's specific strengths in teaching and research, the cultural context in which it exists, and national standards. Finally, the societal impact of research is not always desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18].

Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20]. The recommendation from this consultation is that impact should be measured in a quantifiable way, and that expert panels should review narrative evidence in case studies supported by appropriate indicators [19,21].

Many of the studies that have carried out societal impact measurement have chosen to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact.
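Part of what makes societal impact so resistant to measurement stands out by contrast: the scientific-impact indicators mentioned earlier are mechanically computable once citation data exist. The h index [12], for instance, is simply the largest h such that a researcher has h publications with at least h citations each. A minimal sketch (purely illustrative, using made-up citation counts):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank      # the paper at this rank still 'pays for' rank h
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times yield h = 4:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

No comparably simple procedure exists for societal impact, which is precisely the gap that the case-study approach described below attempts to fill.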
The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art’ [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19]. ‘Mode 1’ describes research governed by the academic interests of a specific community, whereas ‘Mode 2’ is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity, and basic research conducted in the context of application [19].

The new REF will also entail changes in budget allocations: the societal-impact dimension will account for 20% of the evaluation of a research unit for allocation purposes [19]. The final REF guidance contains lists of examples of different types of societal impact [24].

Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].

Yet the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science, whereby scientists receive credit—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more engaged with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact. “The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar’, that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders.
It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

The involvement of scientists is also necessary for the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19,28,29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance. Rymer has noted that “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support for basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is, and will remain, near-impossible to predict the impact of certain research projects years or decades down the line.

3.
Lessons from science studies for the ongoing debate about ‘big’ versus ‘little’ research projects

During the past six decades, the importance of scientific research to the developed world and the daily lives of its citizens has led many industrialized countries to rebrand themselves as ‘knowledge-based economies’. The increasing role of science as a main driver of innovation and economic growth has also changed the nature of research itself. Starting with the physical sciences, recent decades have seen academic research increasingly conducted in the form of large, expensive and collaborative ‘big science’ projects that often involve multidisciplinary, multinational teams of scientists, engineers and other experts.

Although laboratory biology was late to join the big science trend, there has nevertheless been a remarkable increase in the number, scope and complexity of research collaborations and projects involving biologists over the past two decades (Parker et al, 2010). The Human Genome Project (HGP) is arguably the best known of these, and it attracted serious scientific, public and government attention to ‘big biology’. Initial exchanges were polarized and often polemic: proponents of the HGP applauded the advent of big biology and argued that it would produce results unattainable through other means (Hood, 1990), whereas critics highlighted the negative consequences of massive-scale research, including the industrialization, bureaucratization and politicization of research (Rechsteiner, 1990). They also suggested that it was not suited to generating knowledge at all; Nobel laureate Sydney Brenner joked that sequencing was so boring it should be done by prisoners: “the more heinous the crime, the bigger the chromosome they would have to decipher” (Roberts, 2001).

A recent Opinion in EMBO reports summarized the arguments against “the creeping hegemony” of ‘big science’ over ‘little science’ in biomedical research. First, many large research projects are of questionable scientific and practical value. Second, big science transfers the control of research topics and goals to bureaucrats, when decisions about research should be primarily driven by the scientific community (Petsko, 2009). Gregory Petsko makes a valid point in his Opinion about wasteful research projects and raises the important question of how research goals should be set and by whom. Here, we contextualize Petsko's arguments by drawing on the history and sociology of science to expound the drawbacks and benefits of big science. We then advance an alternative to the current antipodes of ‘big’ and ‘little’ biology, which offers some of the benefits and avoids some of the adverse consequences of each.

Big science is not a recent development. Among the first large, collaborative research projects were the Manhattan Project to develop the atomic bomb, and efforts to decipher German codes during the Second World War. The concept itself was put forward in 1961 by physicist Alvin Weinberg, and further developed by historian of science Derek de Solla Price in his pioneering book, Little Science, Big Science: “The large-scale character of modern science, new and shining and all powerful, is so apparent that the happy term ‘Big Science’ has been coined to describe it” (de Solla Price, 1963). Weinberg noted that science had become ‘big’ in two ways.
First, through the development of elaborate research instrumentation, the use of which requires large research teams, and second, through the explosive growth of scientific research in general. More recently, big science has come to refer to a diverse but strongly related set of changes in the organization of scientific research: expensive equipment and large research teams, but also the increasing industrialization of research activities, the escalating frequency of interdisciplinary and international collaborations, and the increasing manpower needed to achieve research goals (Galison & Hevly, 1992). Many areas of biological research have shifted in these directions in recent years and have radically altered the methods by which biologists generate scientific knowledge.

Understanding the implications of this change begins with an appreciation of the history of collaborations in the life sciences—biology has long been a collaborative effort. Natural scientists accompanied the great explorers in the grand alliance between science and exploration during the sixteenth and seventeenth centuries (Capshew & Rader, 1992), which not only served to map uncharted territories, but also contributed enormously to knowledge of the fauna and flora discovered. These early expeditions gradually evolved into coordinated, multidisciplinary research programmes, which began with the International Polar Years, intended to concentrate international research efforts at the North and South Poles (1882–1883; 1932–1933). The Polar Years became exemplars of large-scale life science collaboration, begetting the International Geophysical Year (1957–1958) and the International Biological Programme (1968–1974).

Despite this long history of collaboration, laboratory biology remained ‘small-scale’ until the rising prominence of molecular biology changed the research landscape. During the late 1950s and early 1960s, many research organizations encouraged international collaboration in the life sciences, spurring the creation of, among other things, the European Molecular Biology Organization (1964) and the European Molecular Biology Laboratory (1974). In addition, international mapping and sequencing projects were developed around model organisms such as Drosophila and Caenorhabditis elegans, and scientists formed research networks, exchanged research materials and information, and divided labour across laboratories. These new ways of working set the stage for the HGP, which is widely acknowledged as the cornerstone of the current ‘post-genomics era’. As an editorial on ‘post-genomics cultures’ in the journal Nature put it, “Like it or not, big biology is here to stay” (Anon, 2001).

Just as big science is not new, neither are concerns about its consequences. As early as 1948, the sociologist Max Weber worried that as equipment was becoming more expensive, scientists were losing autonomy and becoming more dependent on external funding (Weber, 1948). Similarly, although Weinberg and de Solla Price expressed wonder at the scope of the changes they were witnessing, they too offered critical evaluations.
For Weinberg, the potentially negative consequences associated with big science were “administratitis, moneyitis, and journalitis”: the dominance of science administrators over practitioners, the tendency to view funding increases as a panacea for solving scientific problems, and progressively blurry lines between scientific and popular writing in order to woo public support for big research projects (Weinberg, 1961). De Solla Price worried that the bureaucracy associated with big science would fail to entice the intellectual mavericks on which science depends (de Solla Price, 1963). These concerns remain valid and have been voiced time and again.

As big science represents a major investment of time, money and manpower, it tends to determine and channel research in particular directions that afford certain possibilities and preclude others (Cook & Brown, 1999). In the worst case, this can result in entire scientific communities following false leads, as was the case in the 1940s and 1950s for Soviet agronomy, when huge investments were made to demonstrate the superiority of Lamarckian over Mendelian theories of heritability, holding back Russian biology for decades (Soyfer, 1994). Such worst-case scenarios are, however, rare. A more likely consequence is that big science can diminish the diversity of research approaches. For instance, plasma fusion scientists are now under pressure to design projects that are relevant to the large-scale International Thermonuclear Experimental Reactor, despite the potential benefits of a wide array of smaller-scale machines and approaches (Hackett et al, 2004). Big science projects can also involve coordination challenges, take substantial time to realize success, and be difficult to evaluate (Neal et al, 2008).

Another danger of big science is that researchers will lose the intrinsic satisfaction that arises from having personal control over their work. Dissatisfaction could lower research productivity (Babu & Singh, 1998) and might create the concomitant danger of losing talented young researchers to other, more engaging callings. Moreover, the alienation of scientists from their work as a result of big science enterprises can lead to a loss of personal responsibility for research. In turn, this can increase the likelihood of misconduct, as effective social control is eroded and “the satisfactions of science are overshadowed by organizational demands, economic calculations, and career strategies” (Hackett, 1994).

Practicing scientists are aware of these risks, yet they remain engaged in large-scale projects because they must, but also because of the real benefits these projects offer. Importantly, big science projects allow for the coordination and activation of diverse forms of expertise across disciplinary, national and professional boundaries to solve otherwise intractable basic and applied problems. Although calling for international and interdisciplinary collaboration is popular, practicing it is notably less popular and much harder (Weingart, 2000). Big science projects can act as a focal point that allows researchers from diverse backgrounds to cooperate, simultaneously advancing different scientific specialties while forging interstitial connections among them.
Another major benefit of big science is that it facilitates the development of common research standards and metrics, allowing for the rapid development of nascent research frontiers (Fujimura, 1996). Furthermore, the high profile of big science efforts such as the HGP and CERN draws public attention to science, potentially enhancing scientific literacy and the public's willingness to support research.

Big science can also ease some of the problems associated with scientific management. In terms of training, graduate students and junior researchers involved in big science projects can gain additional skills in problem-solving, communication and team working (Court & Morris, 1994). The bureaucratic structure and well-defined roles of big science projects also make leadership transitions and researcher attrition easier to manage compared with the informal, refractory organization of most small research projects. Big science projects also provide a visible platform for resource acquisition and the recruitment of new scientific talent. Moreover, through their sheer size, diversity and complexity, they can increase the frequency of serendipitous social interactions and scientific discoveries (Hackett et al, 2008). Finally, large-scale research projects can influence scientific and public policy. Big science creates organizational structures in which many scientists share responsibility for, and expectations of, a scientific problem (Van Lente, 1993). This shared ownership, and these shared futures, help coordinate communication and enable researchers to present a united front when advancing the potential benefits of their projects to funding bodies.

Given these benefits and pitfalls of big science, how might molecular biology best proceed? Petsko's response is that “[s]cientific priorities must, for the most part, be set by the free exchange of ideas in the scientific literature, at meetings and in review panels. They must be set from the bottom up, from the community of scientists, not by the people who control the purse strings.” It is certainly the case, as Petsko also acknowledges, that science has benefited from a combination of generous public support and professional autonomy. However, we are less sanguine about his belief that the scientific community alone has the capacity to ascertain the practical value of particular lines of inquiry, determine the most appropriate scale of research, and bring them to fruition. In fact, current mismatches between the production of scientific knowledge and the information needs of public policy-makers strongly suggest the opposite (Sarewitz & Pielke, 2007).

Instead, we maintain that these types of decision should be made through collective decision-making that involves researchers, governmental funding agencies, science policy experts and the public. In fact, the highly successful HGP involved such collaborations (Lambright, 2002). Taking into account the opinions and attitudes of these stakeholders better links knowledge production to the public good (Cash et al, 2003)—a major justification for supporting big biology.
We do agree with Petsko, however, that large-scale projects can develop pathological characteristics, and that all programmes should therefore undergo regular assessments to determine their continuing worth.

Rather than arguing for or against big science, molecular biology would best benefit from strategic investments in a diverse portfolio of big, little and ‘mezzo’ research projects. Their size, duration and organizational structure should be determined by the research question, subject matter and intended goals (Westfall, 2003). Parties involved in making these decisions should, in turn, aim to strike a profitable balance between differently sized research projects, to garner the benefits of each and to allow practitioners the autonomy to choose among them.

This will require new, innovative methods for supporting and coordinating research. An important first step is ensuring that funding is made available for all kinds of research at a range of scales. For this to happen, the current funding model needs to be modified. The practice of allocating separate funds for individual investigator-driven and collective research projects is a step in the right direction, but it does not discriminate between projects of different sizes at a sufficiently fine resolution. Instead, multiple funding pools should be made available for projects of different sizes and scales, allowing for greater accuracy in project planning, funding and evaluation.

Second, science policy should consciously facilitate the ‘scaling up’, ‘scaling down’ and concatenation of research projects when needed. For instance, special funds might be established to support small-scale but potentially transformative research with the capacity to be scaled up in the future. Alternatively, small-scale satellite research projects that are more nimble, exploratory and risky could complement big science initiatives or be generated by them. This is also in line with Petsko's statement that “the best kind of big science is the kind that supports and generates lots of good little science.” Another potentially fruitful strategy would be to fund independent, small-scale research projects to work on co-relevant questions, with the later objective of consolidating them into a single project in a kind of building-block assembly. Using these and other mechanisms to organize research at different scales could help to ameliorate some of the problems associated with big science, while also accruing its most important benefits.

Within the life sciences, the field of ecology perhaps best exemplifies this strategy. Although it encompasses many small-scale laboratory and field studies, ecologists now collaborate in a variety of novel organizations that blend elements of big, little and mezzo science and that are designed to catalyse different forms of research. For example, the US National Center for Ecological Analysis and Synthesis brings together researchers and data from many smaller projects to synthesize their findings. The Long Term Ecological Research Network consists of dozens of mezzo-scale collaborations focused on specific sites, but also leverages big science through cross-site collaborations.
While investments are made in classical big science projects, such as the National Ecological Observatory Network, no one project or approach has dominated—nor should it. In these ways, ecologists have been able to reap the benefits of big science whilst maintaining diverse research approaches and individual autonomy, and still enjoying the intrinsic satisfaction associated with scientific work.

Big biology is here to stay, and it is neither a curse nor a blessing. It is up to scientists and policy-makers to discern how to benefit from the advantages that ‘bigness’ has to offer, while avoiding the pitfalls inherent in doing so. The challenge confronting molecular biology in the coming years is to decide which kinds of research project are best suited to getting the job done. Molecular biology itself arose, in part, from the migration of physicists to biology; as physics research projects and collaborations grew and became more dependent on expensive equipment, appreciating the saliency of one's own work became increasingly difficult, which led some to seek refuge in the comparatively little science of biology (Dev, 1990). The current situation, which Petsko criticizes in his Opinion article, is thus the result of an organizational and intellectual cycle that began more than six decades ago. It would certainly behoove molecular biologists to heed his warnings and consider the best paths forward.

Niki Vermeulen, John N. Parker and Bart Penders

4.
Master Z, Resnik DB. EMBO reports 2011, 12(10):992–995
Stem-cell tourism exploits the hope of patients desperate for therapies and cures. Scientists have both a special responsibility and a unique role to play in addressing this problem.

During the past decade, thousands of patients with a variety of diseases unresponsive to conventional treatment have gone abroad to receive stem-cell therapies. This phenomenon, commonly referred to as ‘stem-cell tourism’, raises significant ethical concerns, because patients often receive treatments that are not only unproven, but also unregulated, potentially dangerous or even fraudulent (Kiatpongsan & Sipp, 2009; Lindvall & Hyun, 2009). Stem-cell clinics have sprung up in recent years to take advantage of desperate patients who have exhausted other alternatives (Ryan et al, 2010). These clinics usually advertise their services directly to consumers through the Internet, make extravagant claims about the benefits, downplay the risks involved and charge hefty fees of US$20,000 or more for treatments (Lau et al, 2008; Regenberg et al, 2009).

With a few exceptions—such as the use of bone-marrow haematopoietic cells to treat leukaemia—novel stem-cell therapies are often unproven in clinical trials (Lindvall & Hyun, 2009). Even well-proven therapies can lead to tumour formation, tissue rejection, autoimmunity, permanent disability and death (Gallagher & Forrest, 2007; Murphy & Blazar, 1999). The risks of unproven and unregulated therapies are potentially much worse (Barclay, 2009).

In this commentary, we argue that stem-cell scientists have a unique and important role to play in addressing the problem of stem-cell tourism. Stem-cell scientists should carefully examine all requests to provide cell lines and other materials, and share them only with responsible investigators or clinicians. They should require recipients of stem cells to sign material transfer agreements (MTAs) that describe how the cells may be used, and to provide documentation of their scientific or medical qualifications.

In discussing these ethical and regulatory issues, it is important to distinguish between stem-cell tourism and other types of travel to receive medical treatment, including stem-cell therapy. Stem-cell tourism is regarded as ethically problematic because patients receive unproven therapies from untrustworthy sources. Other forms of travel usually do not raise troubling ethical issues (Lindvall & Hyun, 2009). Many patients go to other countries to receive proven stem-cell therapies—such as haematopoietic cells to treat leukaemia—from responsible physicians. Other patients obtain unproven stem-cell treatments by participating in scientifically valid, legally sanctioned clinical trials, or by receiving ethically responsible, innovative medical care (Lindvall & Hyun, 2009). In some cases, patients need to travel because a therapy is approved in only some countries; for example, on 1 July, Korea became the first country to approve the clinical use of adult stem cells to treat heart attack victims (Heejung & Yi, 2011).

Any medical innovation is ethically responsible only when it is based on animal studies or other research that provides evidence of safety and clinical efficacy.
Adequate measures must also be taken to protect patients from harm, such as clinical monitoring, follow-up, exclusion of individuals who are likely to be harmed or are unlikely to benefit, use of only clinical-grade stem cells, careful attention to dosing strategies and informed consent (Lindvall & Hyun, 2009).

Many of the articles examining the ethics of stem-cell tourism have focused on the need for more regulatory oversight and education to prevent harm (Lindvall & Hyun, 2009; Caplan & Levine, 2010; Cohen & Cohen, 2010; Zarzeczny & Caulfield, 2010). We agree that additional regulations are needed, as there is little oversight of stem-cell research or therapy at present. Although most countries have regulations for conducting research with human subjects, as well as medical malpractice and licensing laws, these provide general guidance and do not directly address stem-cell therapy.

Regulations have significant limitations, however. First, regulations apply intra-nationally, not internationally. If a country passes laws designed to oversee therapy and research, these laws do not apply in other nations. Physicians and investigators who do not want to adhere to these rules can simply move to a country that has a permissive legal environment. International agreements can help to close this regulatory gap, but there will still be countries that do not accept or abide by these agreements. Second, even when regulations are in place, unscrupulous individuals might still evade these rules (Resnik, 1999).

Educating patients about the risks of unproven therapies can also help to address the problem of stem-cell tourism. However, education too has significant limitations, since many people will remain ignorant of the dangers of unproven therapies, or they will simply ignore warnings and prudent advice. For many years, cancer patients have travelled to foreign countries to receive unconventional and unproven treatments, despite educational campaigns and media reports discussing the dangers of these therapies. Since the 1970s, thousands of patients have travelled to cancer clinics in Mexico to receive medical treatments not available in the USA (Moss, 2005).

Education for physicians on the dangers of unproven stem-cell therapies can be helpful, but this strategy also has limitations, since many will not receive this education or will choose to ignore it. Additionally, responsible physicians might still find it difficult to persuade their patients not to receive an unproven therapy, especially when conventional treatments have failed. The history of cancer treatment offers important lessons here, since many oncologists have tried, unsuccessfully, to convince their patients not to travel to foreign countries to receive questionable treatments (Moss, 2005).

Since regulation and education have significant shortcomings, it is worth considering another strategy for dealing with the problem of stem-cell tourism: one that focuses on the social responsibilities of stem-cell scientists.

Many codes of ethics adopted by scientific associations include provisions relating to social responsibilities (Shamoo & Resnik, 2009). For example, the Code of Ethics of the American Society for Biochemistry and Molecular Biology states that “investigators will promote and follow practices that enhance the public interest or well-being” (American Society for Biochemistry and Molecular Biology, 2011).
Social responsibilities in science include an obligation to avoid causing harm and an obligation to benefit the public (Shamoo & Resnik, 2009).

There are two distinct rationales for social responsibility. First, scientists should be accountable to the public, since the public provides scientists with funding, facilities and staff (Shamoo & Resnik, 2009). Second, stem-cell scientists are uniquely positioned to exercise their social responsibilities and take effective action pertaining to stem-cell tourism. They understand the science behind stem-cell research, including the potential for harm and the likely clinical efficacy. This knowledge can be used to evaluate the scientific validity of the different uses of stem cells, especially clinical uses. Stem-cell scientists also have control over cell lines and other materials that they may or may not choose to share with other researchers or physicians.

Many of the private clinics that offer stem-cell treatments are relatively small and often depend on acquiring resources from scientists working in the field. The materials they might require could include adult, embryonic and fetal stem-cell lines; vectors that can be used to induce pluripotency in isolated adult cells; genes, DNA and RNA sequences; antibodies; purified protein products, such as growth factors; and special cocktails, media or extracellular matrices to culture specific stem-cell types.

One way in which stem-cell scientists can help to address the problem of stem-cell tourism is to refuse to share cell lines or other materials with physicians or investigators whom they believe might be behaving irresponsibly. To decide whether someone who requests materials is a responsible individual, stem-cell scientists should ask recipients to supply documentation, such as a CV, website, a research or clinical protocol, or a clinical trial number, as evidence of their work and expertise in stem cells. This would help to ensure that the stem cells and other materials are going to be used in the course of responsible biomedical research, a legally sanctioned clinical trial, or responsible medical innovation. If the recipients provide insufficient documentation, scientists should refuse to honour their requests for materials.

Stem-cell scientists should also require recipients to sign MTAs that describe what will be done with the material supplied. MTAs are contracts governing the transfer of materials between organizations and typically include a variety of terms and conditions, such as the purposes for which the materials may be used—commercial or academic research, for example—modification of the materials, transfers to third parties, intellectual property rights, and compliance with legal, regulatory and other policies (Rodriguez, 2005).

To help address the problem of stem-cell tourism, MTAs should state whether the materials will be used in humans, and under what conditions.
If the stem cells are not clinical grade, the MTA should state that they will not be transplanted into humans unless the recipients have a well-developed and legally sanctioned procedure—approved by the Food and Drug Administration or other relevant agency—for verifying the quality of the cells and making the changes necessary to render them acceptable for human use. For example, the recipients could test the cells for viral and bacterial infections, mutations, chemical impurities or other factors that would compromise their clinical utility, in an attempt to develop clinical-grade cell lines.

In addition, the MTA could stipulate that scientists must follow the ethical Guidelines for Clinical Translation of Stem Cells set forth by the International Society for Stem Cell Research (Hyun et al, 2008). These guidelines specify various preclinical and clinical conditions for stem-cell interventions. Describing such conditions might help to deter unscrupulous individuals from using stem cells for scientifically and ethically questionable practices. By evaluating a recipient's qualifications and intended uses of stem-cell lines and other reagents, scientists demonstrate social responsibility and uphold public trust when sharing materials.

Since an MTA is a type of contract between institutions, there is legal recourse if it is broken. A plaintiff could sue a defendant that violates an MTA for breach of contract. Also, if the aggrieved party is a funding agency, it could withhold research funding from the offending party. The onus is on the plaintiff—the scientist and scientific organization providing the materials—to file a lawsuit against the defendants for breach of contract, and this requires the scientist or others in the organization to follow up and ensure that the materials transferred are being used in compliance with the conditions set forth in the MTA.

Some might object to our proposal because it violates the principle of scientific openness, which is an integral part of the ethos of science (Shamoo & Resnik, 2009). Scientists have an obligation to share data, reagents, cell lines, methods and other research tools, because sharing is vital to the progress of science. Many granting agencies and journals also have policies that require scientists to make data and materials available to other scientists on request (Shamoo & Resnik, 2009).

Although openness is vital to the ethical practice of science, it can be superseded by other important factors, such as protecting the privacy and confidentiality of human research subjects, safeguarding proprietary or classified research, securing intellectual property or scientific priority, or preventing bioterrorism (Shamoo & Resnik, 2009).
We consider tackling the problem of stem-cell tourism to be a sufficiently important reason for refusing to share research materials in some situations.

Some might also object to our proposal on the grounds that it places unnecessary burdens on already overworked scientists, or that unscrupulous scientists and physicians will find alternative ways to obtain stem cells, even if investigators refuse to share them.

We recognize the need to avoid burdening researchers unnecessarily with administrative work, but we think that verifying the qualifications of a recipient and reviewing a protocol is a reasonable burden. If principal investigators do not wish to shoulder this responsibility, they can ask a postdoctoral fellow or another senior member of the laboratory or faculty to help them. Far from being a waste of time and effort, taking some simple steps to determine whether requests for stem cells come from responsible physicians or investigators can be an important part of the scientific community's response to stem-cell tourism.

A month before his death in 1963, US President John F. Kennedy (1917–1963) delivered an address at the Centennial Convocation of the National Academy of Sciences in which he said: “If scientific discovery has not been an unalloyed blessing, if it has conferred on mankind the power not only to create but also to annihilate, it has at the same time provided humanity with a supreme challenge and a supreme testing.” Stem-cell scientists can rise to this challenge and address the problem of stem-cell tourism by ensuring that the products of their research are controlled responsibly and shared wisely with genuine investigators or clinicians through the use of MTAs. Doing so should help to deter fraudulent scientists or physicians from exploiting patients who travel to foreign countries in their desperate search for cures.

Zubin Master, David B. Resnik

5.
The temptation to silence dissenters whose non-mainstream views negatively affect public policies is powerful. However, silencing dissent, no matter how scientifically unsound it might be, can cause the public to mistrust science in general.

Dissent is crucial for the advancement of science. Disagreement is at the heart of peer review and is important for uncovering unjustified assumptions, flawed methodologies and problematic reasoning. Enabling and encouraging dissent also helps to generate alternative hypotheses, models and explanations. Yet, despite the importance of dissent in science, there is growing concern that dissenting voices have a negative effect on the public perception of science, on policy-making and on public health. In some cases, dissenting views are deliberately used to derail certain policies. For example, dissenting positions on climate change, environmental toxins or the hazards of tobacco smoke [1,2] appear to laypeople to be equally valid conflicting opinions and thereby create or increase uncertainty. Critics often use legitimate scientific disagreements about narrow claims to reinforce the impression of uncertainty about general and widely accepted truths; for instance, that a given substance is harmful [3,4]. This impression of uncertainty about the evidence is then used to question particular policies [1,2,5,6].

The negative effects of dissent on establishing public policies are present even in cases in which the disagreements are scientifically well-grounded, but the significance of the dissent is misunderstood or blown out of proportion. A study showing that many factors affect the size of reef islands, to the effect that they will not necessarily be reduced in size as sea levels rise [7], was simplistically interpreted by the media as evidence that climate change will not have a negative impact on reef islands [8].

In other instances, dissenting voices affect the public perception of, and motivation to follow, public-health policies or recommendations. For example, the publication of a now-debunked link between the measles, mumps and rubella vaccine and autism [9], as well as the claim that the mercury preservative thimerosal, which was used in childhood vaccines, was a possible risk factor for autism [10,11], created public doubts about the safety of vaccinating children. Although later studies showed no evidence for these claims, doubts led many parents to reject vaccinations for their children, jeopardizing herd immunity for diseases that had been largely eradicated from the industrialized world [12,13,14,15]. Many scientists have therefore come to regard dissent as problematic if it has the potential to affect public behaviour and policy-making. However, we argue that such concerns about dissent as an obstacle to public policy are both dangerous and misguided.

Whether dissent is based on genuine scientific evidence or is unfounded, interested parties can use it to sow doubt, thwart public policies, promote problematic alternatives and lead the public to ignore sound advice. In response, scientists have adopted several strategies to limit these negative effects of dissent—masking dissent, silencing dissent and discrediting dissenters. The first strategy aims to present a united front to the public. Scientists mask existing disagreements among themselves by presenting only those claims or pieces of evidence about which they agree [16].
Although there is nearly universal agreement among scientists that average global temperatures are increasing, there are also legitimate disagreements about how much warming will occur, how quickly it will occur and the impact it might have [7,17,18,19]. As presenting these disagreements to the public probably creates more doubt and uncertainty than is warranted, scientists react by presenting only general claims [20].

A second strategy is to silence dissenting views that might have negative consequences. This can take the form of self-censorship, when scientists are reluctant to publish or publicly discuss research that might—incorrectly—be used to question existing scientific knowledge. For example, there are genuine disagreements about how best to model cloud formation, water vapour feedback and aerosols in general circulation models, all of which have significant effects on the magnitude of global climate change predictions [17,19]. Yet, some scientists are hesitant to make these disagreements public, for fear that they will be accused of being denialists, faulted for confusing the public and policy-makers, censured for abetting climate-change deniers, or criticized for undermining public policy [21,22,23,24].

Another strategy is to discredit dissenters, especially in cases in which the dissent seems to be ideologically motivated. This could involve publicizing the financial or political ties of the dissenters [2,6,25], which would call attention to their probable bias. In other cases, scientists might discredit the expertise of the dissenter. One such example concerns a 2007 study published in the Proceedings of the National Academy of Sciences USA, which claimed that caddis-fly larvae consuming Bt maize pollen die at twice the rate of larvae feeding on non-Bt maize pollen [26]. Immediately after publication, both the authors and the study itself became the target of relentless and sometimes scathing attacks from a group of scientists who were concerned that anti-GMO (genetically modified organism) interest groups would seize on the study to advance their agenda [27]. The article was criticized for its methodology and its conclusions, the Proceedings of the National Academy of Sciences USA was criticized for publishing the article, and the US National Science Foundation was criticized for funding the study in the first place.

Public policies, health advice and regulatory decisions should be based on the best available evidence and knowledge. As the public often lack the expertise to assess the quality of dissenting views, disagreements have the potential to cast doubt over the reliability of scientific knowledge and lead the public to question relevant policies. Strategies to block dissent therefore seem reasonable as a means to protect much-needed or effective health policies, advice and regulations. However, even if the public were unable to evaluate the science appropriately, targeting dissent is not the most appropriate strategy to prevent negative side effects, for several reasons. Chiefly, it contributes to the very problem that the critics of dissent seek to address, namely increasing the cacophony of dissenting voices that aim only to create doubt. Focusing on dissent as a problematic activity sends the message to policy-makers and the public that any dissent undermines scientific knowledge.
Reinforcing this false assumption further incentivizes those who seek merely to create doubt in order to thwart particular policies. Not surprisingly, think-tanks, industry and other organizations are willing to manufacture dissent simply to derail policies that they find economically or ideologically undesirable.

Another danger of targeting dissent is that it probably stifles legitimate critical voices that are needed both for advancing science and for informing sound policy decisions. Attacking dissent makes scientists reluctant to voice genuine doubts, especially if they believe that doing so might harm their reputations, damage their careers or undermine prevailing theories or needed policies. For instance, a panel of scientists for the US National Academy of Sciences, when presenting a risk assessment of radiation in 1956, omitted wildly different predictions about the potential genetic harm of radiation [16]. They did not include this wide range of predictions in their final report precisely because they thought the differences would undermine confidence in their recommendations. Yet, this information could have been relevant to policy-makers. As such, targeting dissent as an obstacle to public policy might simply reinforce self-censorship and stifle legitimate and scientifically informed debate. If this happens, scientific progress is hindered.

Second, even if the public has mistaken beliefs about science or the state of knowledge of the science in question, focusing on dissent is not an effective way to protect public policy from false claims. It fails to address the presumed cause of the problem—the apparent lack of understanding of the science by the public. A better alternative would be to promote the public's scientific literacy. If the public were educated to better assess the quality of the dissent, and thus to disregard instances of ideological, unsupported or unsound dissent, dissenting voices would not have such a negative effect. Of course, one might argue that educating the public would be costly and difficult, and that the public should therefore simply listen to scientists about which dissent to ignore and which to consider. This is, however, a paternalistic attitude that requires the public to remain ignorant ‘for their own good'; a position that seems unjustified on many levels, as there are better alternatives for addressing the problem.

Moreover, silencing dissent, rather than promoting scientific literacy, risks undermining public trust in science even if the dissent is invalid. This was exemplified by the 2009 case of hacked e-mails from a computer server at the University of East Anglia's Climatic Research Unit (CRU). After the selective leaking of the e-mails, climate scientists at the CRU came under fire because some of the quotes, which were taken out of context, seemed to suggest that they were fudging data or suppressing dissenting views [28,29,30,31]. The stolen e-mails gave further ammunition to those opposing policies to reduce greenhouse-gas emissions, who could use accusations of a data ‘cover-up' as proof that climate scientists were not being honest with the public [29,30,31]. They also allowed critics to present climate scientists as conspirators who were trying to push a political agenda [32].
As a result, although nothing scientifically inappropriate was revealed in the ‘climategate' e-mails, the affair had the consequence of undermining the public's trust in climate science [33,34,35,36].

A significant amount of evidence shows that the ‘deficit model' of public understanding of science, as described above, is too simplistic to account correctly for the public's reluctance to accept particular policy decisions [37,38,39,40]. It ignores other important factors, such as people's attitudes towards science and technology, their social, political and ethical values, their past experiences and their trust in governmental institutions [41,42,43,44]. The development of sound public policy depends not only on good science, but also on value judgements. One can agree with the scientific evidence for the safety of GMOs, for instance, but still disagree with the widespread use of GMOs because of social-justice concerns about the developing world's dependence on the interests of the global market. Similarly, one need not reject the scientific evidence about the harmful health effects of sugar to reject regulations on sugary drinks. One could rationally challenge such regulations on the grounds that informed citizens ought to be able to make free decisions about what they consume. Whether or not these value judgements are justified is an open question, but the focus on dissent hinders our ability to have that debate.

As such, targeting dissent completely fails to address the real issues. The focus on dissent, and the threat that it seems to pose to public policy, misdiagnoses the problem as one of the public misunderstanding science, its quality and its authority. It assumes that scientific or technological knowledge is the only relevant factor in the development of policy, and it ignores the role of other factors, such as value judgements about social benefits and harms, and institutional trust and reliability [45,46]. The emphasis on dissent, and thus on scientific knowledge, as the only or main factor in public policy decisions does not give due attention to these legitimate considerations.

Furthermore, by misdiagnosing the problem, targeting dissent also impedes more effective solutions and prevents an informed debate about the values that should guide public policy. Framing policy debates solely as debates over scientific facts hides and neglects the normative aspects of public policy. Relevant ethical, social and political values fail to be publicly acknowledged and openly discussed.

Controversies over GMOs and climate policies have called attention to the negative effects of dissent in the scientific community. Based on the assumption that the public's reluctance to support particular policies is the result of their inability to properly understand scientific evidence, scientists have tried to limit dissenting views that create doubt. However, as outlined above, targeting dissent as an obstacle to public policy probably does more harm than good. It fails to focus on the real problem at stake—that science is not the only relevant factor in sound policy-making. Of course, we do not deny that scientific evidence is important to the development of public policy and behavioural decisions.
Rather, our claim is that this role is misunderstood and often oversimplified in ways that actually contribute to problems in developing sound science-based policies.

Inmaculada de Melo-Martín, Kristen Intemann

6.
7.
Zhang JY 《EMBO reports》2011,12(4):302-306
How can grass-roots movements evolve into a national research strategy? The bottom-up emergence of synthetic biology in China could give some pointers.

Given its potential to aid developments in renewable energy, biosensors, sustainable chemical industries, microbial drug factories and biomedical devices, synthetic biology has enormous implications for economic development. Many countries are therefore implementing strategies to promote progress in this field. Most notably, the USA is considered to be the leader in exploring the industrial potential of synthetic biology (Rodemeyer, 2009). Synthetic biology in Europe has benefited from several cross-border studies, such as the ‘New and Emerging Science and Technology' programme (NEST, 2005) and the ‘Towards a European Strategy for Synthetic Biology' project (TESSY; Gaisser et al, 2008). Yet, little is known in the West about Asia's role in this ‘new industrial revolution' (Kitney, 2009). In particular, China is investing heavily in scientific research for future developments, and is therefore likely to have an important role in the development of synthetic biology.

In 2010, as part of a study of the international governance of synthetic biology, the author visited four leading research teams in three Chinese cities (Beijing, Tianjin and Hefei). The main aims of the visits were to understand perspectives in China on synthetic biology, to identify core themes among its scientific community, and to address questions such as ‘how did synthetic biology emerge in China?', ‘what are the current funding conditions?', ‘how is synthetic biology generally perceived?' and ‘how is it regulated?'. Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework; one that is more dynamic and comprises more options than existing national or international research and development (R&D) strategies. Such findings might contribute to Western knowledge of Chinese R&D, but could also expose European and US policy-makers to alternative forms and patterns of research governance that have emerged from the grass-roots level.

A dominant narrative among the scientists interviewed is the prospect of a ‘big-question' strategy to promote synthetic-biology research in China. This framework is at a consultation stage and key questions are still being discussed. Yet, fieldwork indicates that the process of developing a framework is at least as important to research governance as the big question it might eventually address. According to several interviewees, this approach aims to organize dispersed national R&D resources into one grand project that is essential to the technical development of the field, preferably focusing on an industry-related theme that is economically appealing to the Chinese public.

Chinese scientists have a pragmatic vision for research; thinking of science in terms of its ‘instrumentality' has long been regarded as characteristic of modern China (Schneider, 2003).
However, for a country in which the scientific community is sometimes described as an “uncoordinated ‘bunch of loose ends'” (Cyranoski, 2001) “with limited synergies between them” (OECD, 2007), the envisaged big-question approach implies profound structural and organizational changes. Structurally, the approach proposes that the foundational (industry-related) research questions branch out into various streams of supporting research and more specific short-term research topics. Within such a framework, a variety of Chinese universities and research institutions can be recruited and coordinated at different levels towards solving the big question.

It is important to note that although this big-question strategy is at a consultation stage and supervised by the Ministry of Science and Technology (MOST), the idea itself has emerged in a bottom-up manner. One academic who is involved in the ongoing ministerial consultation recounted that, “It [the big-question approach] was initially conversations among we scientists over the past couple of years. We saw this as an alternative way to keep up with international development and possibly lead to some scientific breakthrough. But we are happy to see that the Ministry is excited and wants to support such an idea as well.” As many technicalities remain to be addressed, there is no clear time-frame yet for when the project will be launched. Yet, this nationwide cooperation among scientists, with an emerging commitment from MOST, seems to be largely welcomed by researchers. Some interviewees described the excitement it generated among the Chinese scientific community as comparable with the establishment of “a new ‘moon-landing' project”.

Of greater significance than the time-frame is the development process that led to this proposition. On the one hand, the emergence of synthetic biology in China has a cosmopolitan feel: cross-border initiatives such as international student competitions, transnational funding opportunities and social debates in Western countries—for instance, about biosafety—all have an important role. On the other hand, the development of synthetic biology in China has some national particularities. Factors including geographical proximity, language, collegial familiarity and shared interests in economic development have all attracted Chinese scientists to the national strategy, to keep up with their international peers. Thus, to some extent, the development of synthetic biology in China is an advance not only in the material synthesis of the ‘cosmos'—the physical world—but also in the social synthesis of aligning national R&D resources and actors with the global scientific community.

To comprehend how Chinese scientists have used national particularities and global research trends as mutually constructive influences, and to identify the implications of this for governance, this essay examines the emergence of synthetic biology in China from three perspectives: its initial activities, the evolution of funding opportunities, and the ongoing debates about research governance.

China's involvement in synthetic biology was largely promoted by the participation of students in the International Genetically Engineered Machine (iGEM) competition, an international contest for undergraduates initiated by the Massachusetts Institute of Technology (MIT) in the USA.
Before the iGEM training workshop that was hosted by Tianjin University in the spring of 2007, there were no research records and only two literature reviews on synthetic biology in Chinese scientific databases (Zhao & Wang, 2007). According to Chunting Zhang of Tianjin University—a leading figure in the promotion of synthetic biology in China—it was during these workshops that Chinese research institutions joined their efforts for the first time (Zhang, 2008). From the outset, the organization of the workshop had a national focus, while it engaged with international networks. Synthetic biologists, including Drew Endy from MIT and Christina Smolke from Stanford University, USA, were invited. Later that year, another training camp designed for iGEM tutors was organized in Tianjin and included delegates from Australia and Japan (Zhang, 2008).

Through years of organizing iGEM-related conferences and workshops, Chinese universities have strengthened their presence at this international competition; in 2007, four teams from China participated. During the 2010 competition, 11 teams from nine universities in six provinces/municipalities took part. Meanwhile, recruiting, training and supervising iGEM teams has become an important institutional programme at an increasing number of universities.

It might be easy to interpret the enthusiasm for iGEM as a passion for winning gold medals, as is conventionally the case with other international scientific competitions. This could be one motive for participating. Yet, training for iGEM has grown beyond winning the student awards and has become a key component of exchanges between Chinese researchers and the international community (Ding, 2010). Many of the Chinese scientists interviewed recounted the way in which their initial involvement in synthetic biology overlapped with their tutoring of iGEM teams. One associate professor at Tianjin University, who wrote the first undergraduate textbook on synthetic biology in China, half-jokingly said, “I mainly learnt [synthetic biology] through tutoring new iGEM teams every year.”

Participation in such contests has not only helped to popularize synthetic biology in China, but has also influenced local research culture. One example of this is that the iGEM competition uses standard biological parts (BioBricks), and new BioBricks are submitted to an open registry for future sharing. A corresponding celebration of open source can also be traced within the Chinese synthetic-biology community. In contrast to the conventional perception that the Chinese scientific sector consists of a “very large number of ‘innovative islands'” (OECD, 2007; Zhang, 2010), communication between domestic teams is quite active. In addition to the formally organized national training camps and conferences, students themselves organize a nationwide, student-only workshop at which to test their ideas informally.

More interestingly, when the author asked one team whether there were any plans to set up a ‘national bank' to host designs from Chinese iGEM teams, in order to benefit domestic teams, both the tutor and team members thought the proposal a bit “strange”. The team leader responded, “But why? There is no need. With BioBricks, we can get any parts we want quite easily.
Plus, it directly connects us with all the data produced by iGEM teams around the world, let alone in China. A national bank would just be a small-scale duplicate.”

From the beginning, interest in the development of synthetic biology in China has been focused on collective efforts within and across national borders. In contrast to conventional critiques of the Chinese scientific community's “inclination toward competition and secrecy, rather than openness” (Solo & Pressberg, 2007; OECD, 2007; Zhang, 2010), there seems to be a new outlook emerging from the participation of Chinese universities in the iGEM contest. Of course, that is not to say that the BioBricks model is without problems (Rai & Boyle, 2007), or to exclude inputs from other institutional channels. Yet, continuous grass-roots exchanges, such as the undergraduate-level competition, might be as instrumental as formal protocols in shaping research culture. The indifference of Chinese scientists to a ‘national bank' seems to suggest that the distinction between the ‘national' and ‘international' scientific communities has become blurred, if not insignificant.

However, frequent cross-institutional exchanges and the domestic organization of iGEM workshops seem to have nurtured the development of a national synthetic-biology community in China, in which grass-roots scientists are comfortable relying on institutions with a cosmopolitan character—such as the BioBricks Foundation—to facilitate local research. To some extent, one could argue that in the eyes of Chinese scientists, national and international resources form one accessible global pool. This grass-roots interest in incorporating local and global advantages is not limited to student training and education, but is also exhibited in evolving funding and regulatory debates.

In the development of research funding for synthetic biology, a similar bottom-up consolidation of national and global resources can be observed. As noted earlier, synthetic-biology research in China is in its infancy. A popular view is that China has the potential to lead this field, as it has strong support from related disciplines. In terms of genome sequencing, DNA synthesis, genetic engineering, systems biology and bioinformatics, China is “almost at the same level as developed countries” (Pan, 2008), but synthetic-biology research has been carried out only “sporadically” (Pan, 2008; Huang, 2009). There are few nationally funded projects and there is no discernible industrial involvement (Yang, 2010). Most existing synthetic-biology research is led by universities or institutions that are affiliated with the Chinese Academy of Sciences (CAS). As one CAS academic commented, “there are many Chinese scientists who are keen on conducting synthetic-biology research. But no substantial research has been launched nor has long-term investment been committed.”

The initial undertaking of academic research on synthetic biology in China has therefore benefited from transnational initiatives. The first synthetic-biology project in China, launched in October 2006, was part of the ‘Programmable Bacteria Catalyzing Research' (PROBACTYS) project, funded by the Sixth Framework Programme of the European Union (Yang, 2010).
A year later, another cross-border collaborative effort led to the establishment of the first synthetic-biology centre in China: the Edinburgh University–Tianjin University Joint Research Centre for Systems Biology and Synthetic Biology (Zhang, 2008).

There is also a comparable commitment to national research coordination. A year after China's first participation in iGEM, the 2008 Xiangshan conference focused on domestic progress. From 2007 to 2009, only five projects in China received national funding, all of which came from the National Natural Science Foundation of China (NSFC). This funding totalled ¥1,330,000 (approximately £133,000; www.nsfc.org), which is low in comparison with the £891,000 awarded in the UK for seven Networks in Synthetic Biology in 2007 alone (www.bbsrc.ac.uk).

One of the primary funding challenges identified by the interviewees is that, as an emerging science, synthetic biology is not yet appreciated by Chinese funding agencies. After the Xiangshan conference, the CAS invited scientists to a series of conferences in late 2009. According to the interviewees, one of the main outcomes was the founding of a ‘China Synthetic Biology Coordination Group': an informal association of around 30 conference delegates from various research institutions. This group formulated a ‘regulatory suggestion', submitted to MOST, which stated the necessity and implications of supporting synthetic-biology research. In addition, leading scientists such as Chunting Zhang and Huanming Yang—President of the Beijing Genomics Institute (BGI), who co-chaired the Beijing Institutes of Life Science (BILS) conferences—have been active in communicating with government institutions. The initial results of this can be seen in the MOST 2010 Application Guidelines for the National Basic Research Program, in which synthetic biology was included for the first time among ‘key supporting areas' (MOST, 2010). Meanwhile, in 2010, the NSFC allocated ¥1,500,000 (approximately £150,000) to synthetic-biology research, which is more than the total funding the area had received in the previous three years.

The search for funding further demonstrates the dynamics between national and transnational resources. Chinese R&D initiatives have to deal with the fact that scientific venture capital and non-governmental research charities are underdeveloped in China. In contrast to the EU or the USA, government institutions in China, such as the NSFC and MOST, are the main and sometimes only domestic sources of funding. Yet, transnational funding opportunities facilitate the development of synthetic biology by alleviating local structural and financial constraints, and they further integrate the Chinese scientific community into international research.

This is not a linear ‘going-global' process; it is important for Chinese scientists to secure and promote national and regional support. In addition, this alignment of national funding schemes with global research progress is similar to the iGEM experience, in that it is being initiated through informal bottom-up associations between scientists, rather than through top-down institutional channels.

As more institutions have joined iGEM training camps and participated in related conferences, a shared interest among the Chinese scientific community in developing synthetic biology has become visible.
In late 2009, at the conference that founded the informal ‘coordination group', the proposition of integrating national expertise through a big-question approach emerged. According to one professor in Beijing—who was a key participant in the discussion at the time—this proposition of a nationwide synergy was not so much about ‘national pride' or an aim to develop a ‘Chinese' synthetic biology; it was about research practicality. She explained, “synthetic biology is at the convergence of many disciplines: computer modelling, nano-technology, bioengineering, genomic research etc. Individual researchers like me can only operate on part of the production chain. But I myself would like to see where my findings would fit in a bigger picture as well. It just makes sense for a country the size of China to set up some collective and coordinated framework so as to seek scientific breakthrough.”

From the first participation in the iGEM contest to the later exploration of funding opportunities and collective research plans, scientists have been keen to invite and incorporate domestic and international resources, to keep up with global research. Yet, there are still regulatory challenges to be met.

The reputation of “the ‘wild East' of biology” (Dennis, 2002) is associated with China's previous inattention to ethical concerns about the life sciences, especially in embryonic-stem-cell research. Similarly, synthetic biology raises few social concerns in China. Public debate is minimal and most media coverage has been positive. Synthetic biology is depicted as “a core in the fourth wave of scientific development” (Pan, 2008) or “another scientific revolution” (Huang, 2009). Whilst recognizing its possible risks, mainstream media believe that “more people would be attracted to doing good while making a profit than doing evil” (Fang & He, 2010). In addition, biosecurity and biosafety training in China are at an early stage, with few mandatory courses for students (Barr & Zhang, 2010). The four leading synthetic-biology teams the author visited regarded the general biosafety regulations that apply to microbiology laboratories as sufficient for synthetic biology. In short, with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual' manner.

Yet, fieldwork suggests that, in contrast to this previous insensitivity to global ethical concerns, the synthetic-biology community in China has taken a more proactive approach to engaging with international debates. It is important to note that there are still no synthetic-biology-specific administrative guidelines or professional codes of conduct in China. However, Chinese stakeholders participate in building a ‘mutual inclusiveness' between global and domestic discussions.

One of the most recent examples of this is a national conference on the ethical and biosafety implications of synthetic biology, jointly hosted by the China Association for Science and Technology, the Chinese Society of Biotechnology and the Beijing Institutes of Life Science, CAS, in Suzhou in June 2010. The discussion was open to the mainstream media. The debate was not simply a recapitulation of Western worries, such as playing god, potential dual use or ecological containment.
It also focused on the particular concerns of developing countries about how to avoid further widening the developmental gap with advanced countries (Liu, 2010).

In addition to general discussions, there are also sustained transnational communications. For example, one of the first three projects funded by the NSFC was a three-year collaboration on biosafety and risk-assessment frameworks between the Institute of Botany at CAS and the Austrian Organization for International Dialogue and Conflict Management (IDC).

Chinese scientists are also keen to increase their involvement in the formulation of international regulations. The CAS and the Chinese Academy of Engineering are engaged with their peer institutions in the UK and the USA to “design more robust frameworks for oversight, intellectual property and international cooperation” (Royal Society, 2009). It is too early to tell what influence China will achieve in this field. Yet, the changing image of the country, from an unconcerned ‘wild East' to a partner in lively discussions, signals a new dynamic in the global development of synthetic biology.

From self-organized participation in iGEM to bottom-up funding and governance initiatives, two features are repeatedly exhibited in the emergence of synthetic biology in China: global resources and international perspectives complement national interests; and national and cosmopolitan research strengths are mostly instigated at the grass-roots level. During the process of introducing, developing and reflecting on synthetic biology, many formal or informal, provisional or long-term alliances have been established from the bottom up. Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China.

However, the input of different social actors has not led to the disintegration of the field into an array of individualized pursuits, but has instead been channelled into collective synergies, such as the big-question approach. Underlying the diverse efforts of Chinese scientists is a sense of ‘inclusiveness', or the idea of bringing together previously detached research expertise. Thus, the big-question strategy cannot be interpreted as just another nationally organized agenda in response to global scientific advancements. Instead, it represents a more intricate development path corresponding to how contemporary research evolves on the ground.

In comparison with the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage. Government input—such as the potential stewardship of MOST in directing a big-question approach, or long-term funding—remains important; the scientists who were interviewed expend a great deal of effort to attract governmental participation. Yet, China's experience highlights that the key to comprehending regional scientific capacity lies not so much in what the government can do, but rather in what is taking place in laboratories. It is important to remember that Chinese iGEM victories, collaborative synthetic-biology projects and ethical discussions all took place before the government became involved.
Thus, to appreciate fully the dynamics of an emerging science, it might be necessary to focus on what is formulated from the bottom up.

The experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary research. More specifically, it is a result of the commitment of Chinese scientists to incorporating national and international resources, actors and social concerns. For practical reasons, the national organization of research, such as through the big-question approach, might still have an important role. However, synthetic biology might not only be a mosaic of national agendas, but might also be shaped by transnational activities and scientific resources. What Chinese scientists will collectively achieve remains to be seen. Yet, the emergence of synthetic biology in China might be indicative of a new paradigm for how research practices can be introduced, normalized and regulated.

8.
9.
10.
11.
The public view of life-extension technologies is more nuanced than expected, and researchers must engage in discussions if they hope to promote awareness and acceptance.

There is increasing research and commercial interest in the development of novel interventions that might be able to extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for their portfolio of drugs targeting ‘diseases of ageing'. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be “the cure for all that ails” (Hasty, 2009), or that it is an “anti-aging drug [that] could be used today” (Blagosklonny, 2007).

Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether they would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions have given some rather surprising results, contrary to the expectations of many researchers in the field. They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.

Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now, there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge et al, 2009a, b, 2010; Underwood et al, 2009).

In a community survey of public attitudes towards HLE, we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill and around one in ten were undecided.

Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an “irrational public predisposition” to think that increased lifespans will only lead to an elongation of infirmity.
He has called this “gerontologiphobia”—a shared feeling among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a “public menace” (Miller, 2002).

We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual's healthy lifespan. Among the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illnesses and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).

It would be unwise to label such concerns as irrational, when it might be that advocates of HLE have failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public sees the claims that have been made about HLE as ‘too good to be true'.

Results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as “far-fetched”, or feel no responsibility “to respond” (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public are sceptical of their claims.

Scientists are not always clear about the outcomes of their work, biogerontologists included. Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is “sparse” and urges more research on the topic. In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.

The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused when they discovered that members of the Caloric Restriction Society experienced a loss of libido and a loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010).
If researchers do not discuss these possible effects, then a curious public might draw their own conclusions.

Some HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects. For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is "a non-toxic, well tolerated drug that is suitable for everyday oral administration", with its major "side-effects" being anti-tumour, bone-protecting and caloric-restriction-mimicking effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.

Aubrey de Grey has called on several occasions for scientists to provide more optimistic timescales for HLE. He claims that public opposition to interventions in ageing is based on "extraordinarily transparently flawed opinions" that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and concerns about extending infirmity, injustice or social harms are simply excuses to justify people's belief that ageing is 'not so bad' (de Grey, 2007). He argues that this "pro-ageing trance" can only be broken by persuading the public that HLE technologies are just around the corner.

Contrary to de Grey's expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010).

Juengst et al (2003) have rightly pointed out that any interventions that slow ageing and substantially increase human longevity might generate more social, economic, political, legal, ethical and public health issues than any other technological advance in biomedicine. Our survey supports this idea; the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007).

When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges that arise from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause "profound changes" and a "firestorm of controversy". Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause "mayhem" and "absolute pandemonium". If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as "irrational", "inane" or "breathtakingly stupid" (de Grey, 2004).

The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and the management of its outcomes (Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work.
If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to explain their research to the public better and discuss how these concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies. The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that "listening to public concerns on research and responding appropriately" is a more effective way of fostering support than arrogant dismissal of public concerns (Anon, 2009).

Brad Partridge, Jayne Lucke & Wayne Hall

12.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and the developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem, and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis and several dozen died after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA; the drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially for developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9].

Developing countries are particularly susceptible to substandard and fake medicine. Regulatory authorities do not have the means or the human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and in any case have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). "People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system," explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected, combined with relatively low penalties, has turned falsifying medicine into the "perfect crime" [2].

Figure 1: Women sell smuggled, counterfeit medicine on the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the preceding two years in Ivory Coast. Issouf Sanogo/AFP Photo/Getty Images.

There are two main categories of illegitimate drugs. 'Substandard' medicines might result from poor-quality ingredients, production errors and incorrect storage. 'Falsified' medicine is made with clear criminal intent.
It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses the mixture into pills [10]. Whilst falsified medicines typically do not contain any active ingredient, substandard medicines might contain subtherapeutic amounts. This is particularly problematic when it comes to anti-infective drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13]; increasing multidrug-resistant tuberculosis might also be attributable to substandard medication [11].

Even if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere, where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are all interested in good-quality medicines, the different parties seem to have difficulty agreeing on how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting the quality of medicine [14,16].

The concern that intellectual property (IP) interests threaten public health dates back to the Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement of the World Trade Organization (WTO), adopted in 1994 to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the AIDS epidemic, as it blocked patients in developing countries from access to affordable medicine. Although it includes flexibilities, such as the possibility for governments to grant compulsory licences to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].

In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO's Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multinational pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. "We're left with decisions being taken based on patents and trademarks that should be taken based on health," commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA.
"The health community is shooting themselves in the foot."

This conflation of health-care and IP issues is also reflected in the unclear use of the term 'counterfeit' [2,14]. "Since the 1990s the World Health Organization (WHO) has used the term 'counterfeit' in the sense we now use 'falsified'," explained Hogerzeil. "The confusion started in 1995 with the TRIPS agreement, through which the term 'counterfeit' got the very narrow meaning of trademark infringement." As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as 'counterfeit'—and thus acquire the negative connotation of bad quality. "The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light," Hogerzeil concluded.

The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. "India is not at all interested in any international regulation. And, unfortunately, it wouldn't make much sense to do anything without them," Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to act is tragically ironic, as this stance hampers the growth of its own generic companies, such as Ranbaxy, Cipla and Piramal. "I certainly don't believe that Indian generics would lose market share if there was stronger action on public health," Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because its products are broadly distributed. "The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic," Bate said. Such fakes can damage a company's reputation and have a negative impact on its revenues when customers stop buying the product.

The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine. It took the lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success: concerns were raised over the influence of multinational drug companies and the possibility that issues of medicine quality were being conflated with attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010; for example, it no longer hosts IMPACT's secretariat at its headquarters in Geneva [2].

In 2010, the WHO's member states established a working group to further investigate how to proceed, which led to the establishment of a new "Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products" (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group "still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions" [14]. The paper's authors demand more action and propose a binding legal framework: a treaty.
"Until we have stronger public health law, I don't think that we are going to resolve this problem," said Bate, who is one of the paper's authors.

Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on understanding the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice, such as a "voluntary soft law", that countries can sign to express their will to do better. "At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic," commented Hogerzeil, who is also on the IOM committee. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: "The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don't start negotiating one."

Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempt to safeguard medicines needs to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act; to counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with the criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies to do better and by improving the quality control of drug regulatory authorities.

Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multinational companies benefit from economies of scale to cope with these demands, but smaller companies often struggle and compromise on quality [2,21]. "India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small," Hogerzeil explained. "And only by being strict can you force them to combine and to become bigger industries that can afford good-quality assurance systems." Clamping down on drug quality will therefore lead to a consolidation of the industry, which is an essential step. "If you look at Europe and the US, there were hundreds of drug companies—now there are dozens. And if you look at the situation in India and China today, there are thousands, and that will have to come down to dozens as well," Bate explained.

In addition to consolidating the market by applying stricter rules, the IOM has also suggested measures to support companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards.
Another suggestion is to harmonize the market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.

Once the medicine leaves the manufacturer, controlling the distribution system becomes another major challenge in combatting falsified and substandard medicine. Global drug supply chains have grown increasingly complicated; drugs cross borders, are sold back and forth between wholesalers and distributors, and are often repackaged. Still, there is a major difference between developing and developed countries. In the latter, relatively few companies dominate the market, whereas in poorer nations the distribution system is often fragmented and uncontrolled, with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improving drug quality. "And we can start in the US," Hogerzeil commented.

Distribution could be improved at different levels, starting with the import of medicine. "There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in," Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. "Everybody dealing with medicine has to be licensed," Hogerzeil said. "And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent that falsified medicines enter the legal supply chain." The last level would be a track-and-trace system to identify authentic drugs [2]: every single package of medicine should be identifiable through an individual marker, such as a 3D bar code, and once the package is sold, its marker is ticked off in a central database so that it cannot be reused (a minimal sketch of this check-off logic appears at the end of this article).

According to Hogerzeil, equivalent measures at these different levels should be established in every country. "I don't believe in double standards," he said. "Don't say to Uganda: 'you can't do that'. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation."

Nigeria, for instance, has demonstrated that with enough political will it is possible to reduce the proliferation of falsified and substandard medicine. Nigeria had been a major source of falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili's successor, is committed to continuing her work [10].
Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized event, the former head of China's State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China's fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see the quality of medicine as a priority. But they should, and affluent countries should help: not only because health is a human right, but also for economic reasons. A great deal of time and money is invested in testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients. Falsified and substandard medicines are a financial burden on health systems, and the emergence of drug-resistant pathogens might render invaluable medications useless. Investing in the safety of medicine is therefore both a humane and an economic imperative.
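To make the single-use marker idea described above concrete, here is a minimal sketch of the check-off logic in Python. It is purely illustrative and rests on stated assumptions: the names TrackAndTraceRegistry, register_package and dispense are invented for this example and correspond to no real system, and a production scheme would rely on a secure central database rather than an in-memory dictionary.

    # Illustrative sketch only: a registry of single-use package markers.
    # Each marker is accepted exactly once; unknown or reused markers are rejected.
    class TrackAndTraceRegistry:
        def __init__(self):
            self._status = {}  # marker -> "registered" or "dispensed"

        def register_package(self, marker):
            """Record a package's unique marker before it ships (manufacturer side)."""
            if marker in self._status:
                raise ValueError("duplicate marker: " + marker)
            self._status[marker] = "registered"

        def dispense(self, marker):
            """Scan a marker at the point of sale (pharmacy side).

            Returns True for a genuine, unsold package and ticks it off;
            returns False for an unknown or already-used marker.
            """
            if self._status.get(marker) != "registered":
                return False  # suspect: unregistered or reused code
            self._status[marker] = "dispensed"
            return True

    registry = TrackAndTraceRegistry()
    registry.register_package("PKG-0001")
    assert registry.dispense("PKG-0001") is True    # first sale passes
    assert registry.dispense("PKG-0001") is False   # reuse of the marker is caught
    assert registry.dispense("PKG-9999") is False   # unknown marker is caught

The essential property is that the check-off is centralized and permanent: once a marker has been dispensed anywhere, the same code can never validate a second package.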

14.
Is political interference in science unavoidable? A look at the situation in Italy highlights what can happen if scientists do not defend their independence and their science.

The second half of the twentieth century has seen the relationship between society, politics and science become increasingly complex and controversial. Particularly in democratic countries—where the application of scientific research and the diffusion of knowledge have contributed to a significant increase in the well-being of citizens—scientists have had to face interference from political, religious and ideological interest groups. Even the seemingly powerful scientific community in the USA was affected by an 'epidemic of politics' under the administration of President George W. Bush. This 'infection of science' was characterized by inappropriate political meddling in research, driven by political prejudices and religious arguments, especially in the more controversial research fields. During his tenure, Bush established science and health policies that went against expert advice, and in several cases he made controversial appointments to key positions in scientific and health agencies (Kennedy, 2003; Mooney, 2005). This was all the more shocking because science and scientists in the USA have generally enjoyed a great deal of political independence.

Such 'epidemics of politics' are not exclusive to the USA; political interference in scientific research and its applications is endemic in many countries. Such meddling can take various forms depending on the country in question, the different democratic decision-making processes at work, the relative influences of politics, economics and society on the scientific community and, to some extent, the level of scientific literacy of the public. During the past two decades, science in Italy has been suffering from a particularly severe form of political interference that we believe deserves international consideration, if only to act as a warning for other countries.

Italian science has often found itself entangled in political controversy. After the unification of the country in 1861, during the last two decades of the nineteenth century and the first decade of the twentieth century, Italian scientists actively participated in political debates about how to improve and integrate the fragments of Italian society, culture, economy, health, and so on. But from the beginning, they often confused political battles with their professional status and/or scientific disagreements (Casella et al, 2000). Throughout the fascist era, the scientific community—like the rest of the country—was subjected to the rule of Benito Mussolini's regime (Maiocchi, 2004). After the Second World War, both Catholic and Marxist ideologies prevented the rise of an autonomous scientific community, so Italian scientists had, and still have, little cultural or political influence.

Yet Italians are far from hostile to science; they follow advances in research and technology with keen interest and expectation, as shown by a fairly recent survey (Eurobarometer, 2005a, b).
Politicians, influential intellectuals and lobbyists who oppose research and innovation for various reasons have therefore adopted a strategy of trying to manipulate and censor facts. Rather than confronting the scientific evidence directly, they maintain a high degree of political control over scientific research and its applications. As a result, the validity of scientific evidence has become optional, and its use arbitrary, in public and political discussions.

This situation has been virtually de rigueur since the advent of Silvio Berlusconi in 1994, although it would be unfair to say that the current Italian Prime Minister is the main culprit. Indeed, many factors have acted together to make Italian science prey to political influence, including the predominance of non-transparent and nepotistic approaches to the public funding of research, the chronic cultural and political impotence of Italian scientists and the waning professional quality of the national political and intellectual elites (Corbellini, 2009). The examples provided here should illustrate the weaknesses of the Italian scientific community and how politicians—irrespective of their political colour—have been reluctant to understand and respect the value of scientific procedures and evidence.

In 1997, the Italian media regaled its readers with stories about a new and supposedly effective treatment for cancer, which had been developed by the physician and professor Luigi Di Bella, then at the University of Modena. The media storm was so convincing that a judge in Apulia ordered the local public health authorities to provide patients with the drug cocktail required for the therapy, despite the absence of a scientific basis for the claims or of clinical evidence for the efficacy of the treatment (Remuzzi & Schieppati, 1999). The Di Bella multi-therapy (DBM)—as the treatment was called—soon became a topic of political wrangling between members of the right-wing parties, who supported the treatment, and the more sceptical ruling centre-left party. This continued until the health ministry, backed by prominent Italian oncologists, eventually agreed to sponsor a controversial clinical trial, which exposed the Italian medical community to international scorn (Müllner, 1999) and highlighted the lack of accurate and factual scientific information in the public debate (Passalacqua et al, 1999).

In late 2000 and early 2001, Italian plant biotechnologists were up in arms over a decree proposed by the centre-left government's agricultural ministry that would have banned funding for any plant research involving genetic modification (Frank, 2000). The decree was eventually withdrawn as the result of a political move to prevent the opposition from exploiting the dispute. However, when the centre-right coalition came to power in May 2001, the new Ministry of Agriculture proved equally averse to the use of genetically modified plants. As a result, research in the field of plant genetics in Italy remains virtually devoid of public funding, and a series of byzantine regulations still prevents Italian farmers from using genetically modified crops, despite the lack of scientific evidence that they are dangerous.
In fact, the law does not explicitly ban their use, and they are routinely imported as livestock feed.

Striking examples of the manipulation and censorship of science were seen during the fierce debate that followed the introduction of Law 40—issued in 2004 with the apparent unofficial support of the Catholic Church—which limited the use of in vitro fertilization (IVF) procedures and banned research on human embryos. According to this law, each IVF procedure is allowed to create only three embryos, all of which must be implanted into the recipient mother (Boggio, 2005). This is in contrast to international guidelines on clinical practice (www.eshre.eu). Law 40 also prohibits pre-implantation diagnosis and the cryopreservation of embryos, as well as the generation of embryonic stem-cell lines, even when these are obtained from superfluous embryos that were created before the law was enforced and are destined to be stored frozen indefinitely.

In 2005, patient advocacy groups and left-wing parties called for a referendum to abrogate Law 40. This ignited a fierce dispute with Catholic politicians, backed by a handful of scientists, who called on voters to boycott the referendum and claimed that the law was scientifically sound and improved safety for patients (Vogel, 2005; Boggio & Corbellini, 2009). Interestingly, rather than attempting to justify their position with ethical, legal, scientific or religious arguments, the supporters of Law 40 often adopted the strategy of denigrating scientific research and facts and spreading misleading information (Corbellini, 2006). They claimed, for example, that pre-implantation diagnosis did not work, that the cryopreservation of embryos was not clinically necessary and that research with embryonic stem cells was pointless because adult stem cells had been proven effective for treating dozens of diseases (Corbellini, 2007).

According to the Italian Constitution, the referendum was invalidated because less than 50% of the electorate voted. The proportion of Italian citizens who usually vote in a referendum is about 60%, and analysis shows that most non-voters decided not to participate because they did not understand what was at stake (Corbellini, 2006). Six years later, Law 40 has finally been revised by a series of decisions of Italy's Constitutional Court, and now, in some circumstances, pre-implantation diagnosis and the cryopreservation of embryos are permitted.

The preceding examples have highlighted how Italian politicians and special interest groups have stifled scientific progress and liberty within Italy. The following examples highlight how political meddling and influence are jeopardizing the competitiveness of Italian research on the international stage.

The teaching of evolution came frighteningly close to being scrapped from primary school curricula in Italy under a reform instigated by the centre-right government in 2003. It was reinstated only when the issue led to a political brawl between the Cabinet and the left-wing press (Frazzetto, 2004).

The same right-wing government was also opposed to the creation of the European Research Council (ERC), arguing that the agency would be too independent of political control (ftp://ftp.cordis.europa.eu/pub/italy/docs/positionfp7_it.pdf).
This is not surprising for a country in which the chairs of public research institutions and the scientific directors of research hospitals are appointed by the government (with a few notable exceptions; see Anon, 2008), and where funding is often granted in a top-down manner by governmental decree to specific institutes, without public calls or peer review (Margottini, 2008).

Even when funding is subject to peer review, cases in which money ends up at laboratories that are directly affiliated with members of the evaluating commission are, unfortunately, not the exception (Italian Parliament, 2006), which highlights the widespread conflicts of interest that are tolerated. Italy lacks both an independent agency for research and compulsory, transparent and unbiased selection processes. As such, the guidelines and criteria that determine which research activities receive public funding are often established directly by the respective ministries, thereby increasing the risk of political interference. This was the case in 2007, when peers of Barbara Ensoli—then at the Istituto Superiore di Sanità (ISS) in Rome—felt that she was receiving a disproportionate amount of government funding, without peer review and in spite of the fact that her work on an HIV/AIDS vaccine was, at least to some scientists, unconvincing (Cohen, 2007).

Conversely, in 2009 the Ministry of Health arbitrarily excluded projects involving human embryonic stem-cell lines from a call for proposals on stem-cell research funding—one of the authors of this article, Elena Cattaneo, is now appealing in court against the ministry's decision (Cattaneo et al, 2010). Furthermore, in October 2010 the Italian Ministry of Health decided, motu proprio, to grant €3 million to a private foundation that claimed to have created adult human stem cells that can be tested in patients with neurodegenerative diseases. This happened in spite of the Ministry's declaration a few months earlier that the allocation of public money for research should be subject to peer review.

If Italian scientists want to have a leading role in shaping society and the future, they must demand, reinstate and maintain sound principles of transparency and competitiveness in the allocation of public funding. This means that individual researchers—who enjoy the ephemeral benefits gained by deference to politicians and the exploitation of conflicts of interest—should be highlighted as negative examples to the scientific community, as their behaviour damages not only science, but also the practice of science as a model for public ethics.

We hope that international experts in sociology and science policy find that the censorship of science, the manipulation of facts and the lack of objective peer review and evaluation in Italy deserve their attention, and that they intervene on behalf of Italian science. They would be up against an interesting paradox: such abnormal conduct is often defended in the name of alleged democratic principles.
The introduction of Law 40, for example, was justified publicly under the assumption that most Italian citizens were against the use of embryonic stem cells in research—which is, incidentally, false (Eurobarometer, 2006)—and the Apulia judge's ruling on DBM was made on the grounds of the individual freedom of access to therapy laid down by the Italian constitution.

One could ask whether the situation in Italy is simply a local consequence of a deteriorating relationship between science and society, or between scientists and politicians. In other words, is Italy an exception, or simply a vision of things to come in other countries? Regardless, the predicament of Italian science and scientists should stand as a warning of what happens when the rules of transparency are overridden, the scientific community remains largely silent, scientific facts have marginal political influence and science communication is helpless against ideologically driven propaganda that manipulates facts on a large scale (Corbellini, 2010). The experience of scientists in the USA during the Bush administration shows that for other countries this possibility is not too far-fetched and that, to paraphrase the British statesman Edmund Burke (1729–1797), bad science flourishes when good scientists do nothing.

Elena Cattaneo & Gilberto Corbellini

18.
Ralf Dahm 《EMBO reports》2010,11(3):153-160
Friedrich Miescher's attempts to uncover the function of DNA

It might seem as though the role of DNA as the carrier of genetic information was not realized until the mid-1940s, when Oswald Avery (1877–1955) and colleagues demonstrated that DNA could transform bacteria (Avery et al, 1944). Although these experiments provided direct evidence for the function of DNA, the first ideas that it might have an important role in processes such as cell proliferation, fertilization and the transmission of heritable traits had already been put forward more than half a century earlier. Friedrich Miescher (1844–1895; Fig 1), the Swiss scientist who discovered DNA in 1869 (Miescher, 1869a), developed surprisingly insightful theories to explain its function and how biological molecules could encode information. Although his ideas were incorrect from today's point of view, his work contains concepts that come tantalizingly close to our current understanding. But Miescher's career also holds lessons beyond his scientific insights. It is the story of a brilliant scientist well on his way to making one of the most fundamental discoveries in the history of science, who ultimately fell short of his potential because he clung to established theories and failed to follow through with the interpretation of his findings in a new light.

Figure 1: Friedrich Miescher (1844–1895) and his wife, Maria Anna Rüsch. © Library of the University of Basel, Switzerland.

It is a curious coincidence in the history of genetics that three of the most decisive discoveries in this field occurred within a decade: in 1859, Charles Darwin (1809–1882) published On the Origin of Species by Means of Natural Selection, in which he expounded the mechanism driving the evolution of species; seven years later, Gregor Mendel's (1822–1884) paper describing the basic laws of inheritance appeared; and in early 1869, Miescher discovered DNA. Yet, although the magnitude of Darwin's theory was realized almost immediately, and at least Mendel himself seems to have grasped the importance of his work, Miescher is often viewed as oblivious to the significance of his discovery. It would be another 75 years before Oswald Avery, Colin MacLeod (1909–1972) and Maclyn McCarty (1911–2005) could convincingly show that DNA was the carrier of genetic information, and another decade before James Watson and Francis Crick (1916–2004) unravelled its structure (Watson & Crick, 1953), paving the way to our understanding of how DNA encodes information and how this information is translated into proteins. But Miescher already had astonishing insights into the function of DNA.

Between 1868 and 1869, Miescher worked at the University of Tübingen in Germany (Figs 2, 3), where he tried to understand the chemical basis of life. A crucial difference in his approach compared with earlier attempts was that he worked with isolated cells—leukocytes that he obtained from pus—and later with purified nuclei, rather than with whole organs or tissues.
The innovative protocols he developed allowed him to investigate the chemical composition of an isolated organelle (Dahm, 2005), which significantly reduced the complexity of his starting material and enabled him to analyse its constituents.

Figure 2: Contemporary view of the town of Tübingen at about the time when Miescher worked there. The medieval castle housing Hoppe-Seyler's laboratory can be seen atop the hill at the right. © Stadtarchiv Tübingen, Germany.

Figure 3: The former kitchen of Tübingen castle, which formed part of Hoppe-Seyler's laboratory. It was in this room that Miescher worked during his stay in Tübingen and where he discovered DNA. After his return to Basel, Miescher reminisced that this room, with its shadowy, vaulted ceiling and its small, deep-set windows, appeared to him like the laboratory of a medieval alchemist. Photograph taken by Paul Sinner, Tübingen, in 1879. © University Library Tübingen.

In carefully designed experiments, Miescher discovered DNA—or "Nuclein" as he called it—and showed that it differed from the other classes of biological molecule known at the time (Miescher, 1871a). Most notably, nuclein's elementary composition, with its high phosphorus content, convinced him that he had discovered a substance sui generis, that is, of its own kind; a conclusion subsequently confirmed by Miescher's mentor in Tübingen, the eminent biochemist Felix Hoppe-Seyler (1825–1895; Hoppe-Seyler, 1871; Miescher, 1871a). After his initial analyses, Miescher was convinced that nuclein was an important molecule and suggested in his first publication that it would "merit to be considered equal to the proteins" (Miescher, 1871a).

Moreover, Miescher recognized immediately that nuclein could be used to define the nucleus (Miescher, 1870). This was an important realization because, at the time, the unequivocal identification of nuclei, and hence their study, was often difficult or even impossible to achieve, as their morphology, subcellular localization and staining properties differed between tissues, cell types and states of the cells. Instead, Miescher proposed to base the characterization of nuclei on the presence of this molecule (Miescher, 1870, 1874). Moreover, he held that the nucleus should be defined by properties related to its physiological activity, which he believed to be closely linked to nuclein. Miescher had thus made a significant first step towards defining an organelle in terms of its function rather than its appearance.

Importantly, his findings also showed that the nucleus is chemically distinct from the cytoplasm, at a time when many scientists still assumed that there was nothing unique about this organelle. Miescher thus paved the way for the subsequent realization that cells are subdivided into compartments with distinct molecular compositions and functions. On the basis of his observations that nuclein appeared able to separate itself from the "protoplasm" (cytoplasm), Miescher even went so far as to suggest the "possibility that [nuclein can be] distributed in the protoplasm, which could be the precursor for some of the de novo formations of nuclei" (Miescher, 1874). He seemed to anticipate that the nucleus re-forms around the chromosomes after cell division, but unfortunately he did not elaborate on the conditions under which this might occur.
It is therefore impossible to know with certainty to which circumstances he was referring.

In this context, it is interesting to note that in 1872, Edmund Russow (1841–1897) observed that chromosomes appeared to dissolve in basic solutions. Intriguingly, Miescher had also found that he could precipitate nuclein by using acids and then return it to solution by increasing the pH (Miescher, 1871a). At the time, however, he did not make the link between nuclein and chromatin. This happened around a decade later, in 1881, when Eduard Zacharias (1852–1911) studied the nature of chromosomes by using some of the same methods Miescher had used to characterize nuclein. Zacharias found that chromosomes, like nuclein, were resistant to digestion by pepsin solutions and that the chromatin disappeared when he extracted the pepsin-treated cells with dilute alkaline solutions. This led Walther Flemming (1843–1905) to speculate in 1882 that nuclein and chromatin are identical (Mayr, 1982).

Alas, Miescher was not convinced. His reluctance to accept these developments was at least partly based on a profound scepticism towards the methods—and hence results—of cytologists and histologists, which, according to Miescher, lacked the precision of chemical approaches as he applied them. The fact that DNA was crucially linked to the function of the nucleus was, however, firmly established in Miescher's mind, and in the following years he tried to obtain additional evidence. He later wrote: "Above all, using a range of suitable plant and animal specimens, I want to prove that Nuclein really specifically belongs to the life of the nucleus" (Miescher, 1876).

Although the acidic nature of DNA, its large molecular weight, its elementary composition and its presence in the nucleus are some of its central properties—all first determined by Miescher—they reveal nothing about its function. Having convinced himself that he had discovered a new type of molecule, Miescher rapidly set out to understand its role in different biological contexts. As a first step, he determined that nuclein occurs in a variety of cell types. Unfortunately, he did not elaborate on the types of tissue or the species his samples were derived from. The only hints as to the specimens he worked with come from letters he wrote to his uncle, the Swiss anatomist Wilhelm His (1831–1904), and to his parents; his father, Friedrich Miescher-His (1811–1887), was professor of anatomy in Miescher's native Basel. In his correspondence, Miescher mentioned other cell types that he had studied for the presence of nuclein, including liver, kidney, yeast cells, erythrocytes and chicken eggs, and hinted at having found nuclein in these as well (Miescher, 1869b; His, 1897). Moreover, Miescher had also planned to look for nuclein in plants, especially in their spores (Miescher, 1869c). This is an intriguing choice given his later fascination with vertebrate germ cells and his speculation on the processes of fertilization and heredity (Miescher, 1871b, 1874).

Another clue to the tissues and cell types that Miescher might have examined comes from two papers published by Hoppe-Seyler, who wanted to confirm his student's results—which he initially viewed with scepticism—before their publication.
In the first, another of Hoppe-Seyler's students, Pal Plósz, reported that nuclein is present in the nucleated erythrocytes of snakes and birds but not in the anuclear erythrocytes of cows (Plósz, 1871). In the second paper, Hoppe-Seyler himself confirmed Miescher's findings and reported that he had detected nuclein in yeast cells (Hoppe-Seyler, 1871).

In an addendum to his 1871 paper, published posthumously, Miescher stated that the apparently ubiquitous presence of nuclein meant that "a new factor has been found for the life of the most basic as well as for the most advanced organisms," thus opening up a wide range of questions for physiology in general (Miescher, 1870). To argue that Miescher understood that DNA was an essential component of all forms of life is probably an over-interpretation of his words. His statement does, however, clearly show that he believed DNA to be an important factor in the life of a wide range of species.

In addition, Miescher looked at tissues under different physiological conditions. He quickly noticed that both nuclein and nuclei were significantly more abundant in proliferating tissues; for instance, he noted that in plants, large amounts of phosphorus are found predominantly in regions of growth and that these parts show the highest densities of nuclei and actively proliferating cells (Miescher, 1871a). Miescher had thus taken the first step towards linking the presence of phosphorus—that is, DNA in this context—to cell proliferation. Some years later, while examining the changes in the bodies of salmon as they migrate upstream to their spawning grounds, he noticed that he could, with minimal effort, purify large amounts of pure nuclein from the testes, as these were at the height of cell proliferation in preparation for mating (Miescher, 1874). This provided additional evidence for a link between proliferation and the presence of a high concentration of nuclein.

Miescher's most insightful comments on this issue, however, date from his time in Hoppe-Seyler's laboratory in Tübingen. He was convinced that histochemical analyses would lead to a much better understanding of certain pathological states than would microscopic studies. He also believed that physiological processes, which at the time were seen as similar, might turn out to be very different if the chemistry were better understood. As early as 1869, the year in which he discovered nuclein, he wrote in a letter to His: "Based on the relative amounts of nuclear substances [DNA], proteins and secondary degradation products, it would be possible to assess the physiological significance of changes with greater accuracy than is feasible now" (Miescher, 1869c).

Importantly, Miescher proposed three exemplary processes that might benefit from such analyses: "nutritive progression", characterized by an increase in the cytoplasmic proteins and the enlargement of the cell; "generative progression", defined as an increase in "nuclear substances" (nuclein) and as a preliminary phase of cell division in proliferating cells and possibly in tumours; and "regression", an accumulation of lipids and degenerative products (Miescher, 1869c).

When we consider the first two categories, Miescher seems to have understood that an increase in DNA was not only associated with, but also a prerequisite for, cell proliferation. Subsequently, cells that are no longer proliferating would increase in size through the synthesis of proteins and hence of cytoplasm.
Crucially, he believed that chemical analyses of such different states would enable him to obtain a more fundamental insight into the causes underlying these processes. These are astonishingly prescient insights. Sadly, Miescher never followed up on these ideas and, apart from the thoughts expressed in his letter, never published on the topic.

It is likely, however, that he had preliminary data supporting these views. Miescher was generally careful to base statements on facts rather than speculation. But, being a perfectionist who published only after extensive verification of his results, he presumably never pursued these studies to a point he considered satisfactory. It is possible that his plans were cut short when he left Hoppe-Seyler's laboratory to receive additional training under the supervision of Carl Ludwig (1816–1895) in Leipzig. While there, Miescher turned his attention to matters entirely unrelated to DNA, and he only resumed his work on nuclein after returning to his native Basel in 1871.

Crucially for these subsequent studies of nuclein, Miescher made an important choice: he turned to sperm as his main source of DNA. When analysing the sperm of different species, he noted that spermatozoa, especially those of salmon, have comparatively small tails and thus consist mainly of a nucleus (Miescher, 1874). He immediately grasped that this would greatly facilitate his efforts to isolate DNA at much higher purity (Fig 4). Yet Miescher also saw beyond the possibility of obtaining pure nuclein from salmon sperm: he realized that it also indicated that the nucleus, and the nuclein therein, might play a crucial role in fertilization and the transmission of heritable traits. In a letter to his colleague Rudolf Boehm (1844–1926) in Würzburg, Miescher wrote: "Ultimately, I expect insights of a more fundamental importance than just for the physiology of sperm" (Miescher, 1871c). It was the beginning of a fascination with the phenomena of fertilization and heredity that would occupy Miescher to the end of his days.

Figure 4: A glass vial containing DNA purified by Friedrich Miescher from salmon sperm. © Alfons Renz, University of Tübingen, Germany.

Miescher had entered this field at a critical time. By the middle of the nineteenth century, the old view that cells arise through spontaneous generation had been challenged. Instead, it was widely recognized that cells always arise from other cells (Mayr, 1982). In particular, the development and function of spermatozoa and oocytes, which in the mid-1800s had been shown to be cells, were seen in a new light. Moreover, in 1866, three years before Miescher discovered DNA, Ernst Haeckel (1834–1919) had postulated that the nucleus contained the factors that transmit heritable traits. This proposition from one of the most influential scientists of the time brought the nucleus to the centre of attention for many biologists. Having discovered nuclein as a distinctive molecule present exclusively in this organelle, Miescher realized that he was in an excellent position to make a contribution to this field.
Thus, he set about trying to characterize nuclein better, with the aim of correlating its chemical properties with the morphology and function of cells, especially of sperm cells.

His analyses of the chemical composition of the heads of salmon spermatozoa led Miescher to identify two principal components: in addition to the acidic nuclein, he found an alkaline protein for which he coined the term 'protamin'. The name is still in use today: protamines are small proteins that replace histones during spermatogenesis. He further determined that these two molecules occur in a "salt-like, not an ether-like [that is, covalent] association" (Miescher, 1874). Following his meticulous analyses of the chemical composition of sperm, he concluded that, "aside from the mentioned substances [protamin and nuclein] nothing is present in significant quantity. As this is crucial for the theory of fertilization, I carry this business out as quantitatively as possible right from the beginning" (Miescher, 1872a). His analyses showed him that the DNA and protamines in sperm occur at constant ratios, a fact that Miescher considered "is certainly of special importance", without, however, elaborating on what this importance might be. Today, of course, we know that proteins such as histones and protamines bind to DNA in defined stoichiometric ratios.

Miescher went on to analyse the spermatozoa of carp, frogs (Rana esculenta) and bulls, in which he confirmed the presence of large amounts of nuclein (Miescher, 1874). Importantly, he could show that nuclein is present only in the heads of sperm—the tails being composed largely of lipids and proteins—and that within the head, the nuclein is located in the nucleus (Miescher, 1874; Schmiedeberg & Miescher, 1896). With this discovery, Miescher had not only demonstrated that DNA is a constant component of spermatozoa, but had also directed his attention to the sperm heads. On the basis of the observations of other investigators, such as those of Albert von Kölliker (1817–1905) concerning the morphology of spermatozoa in some myriapods and arachnids, Miescher knew that the spermatozoa of some species are aflagellate, that is, they lack a tail. This confirmed that the sperm head, and thus the nucleus, was the crucial component. But the question remained: what in the sperm cells mediated fertilization and the transmission of hereditary traits from one generation to the next?

On the basis of his chemical analyses of sperm, Miescher speculated on the existence of molecules that play a crucial part in these processes. In a letter to Boehm, Miescher wrote: "If chemicals do play a role in procreation at all, then the decisive factor is now a known substance" (Miescher, 1872b). But Miescher was unsure as to what this substance might be. He did, however, strongly suspect that the combination of nuclein and protamin was the key, and that the oocyte might lack a crucial component needed for it to develop: "If now the decisive difference between the oocyte and an ordinary cell would be that from the roster of factors, which account for an active arrangement, an element has been removed? For otherwise all proper cellular substances are present in the egg," he later wrote (Miescher, 1872b).

Owing to his inability to detect protamin in the oocyte, Miescher initially favoured this molecule as the one responsible for fertilization.
Later, however, when he failed to detect protamin in the sperm of other species, such as bulls, he changed his mind: “The Nuclein by contrast has proved to be constant [that is, present in the sperm cells of all species Miescher analysed] so far; to it and its associations I will direct my interest from now on” (Miescher, 1872b). Unfortunately, although he came tantalizingly close, he never made a clear link between nuclein and heredity.

The final section of his 1874 paper on the occurrence and properties of nuclein in the spermatozoa of different vertebrate species is of particular interest because in it Miescher tried to correlate his chemical findings about nuclein with the physiological role of spermatozoa. He had realized that spermatozoa represented an ideal model system in which to study the role of DNA because, as he would later put it, “[f]or the actual chemical–biological problems, the great advantage of sperm [cells] is that everything is reduced to the really active substances and that they are caught just at the moment when they exert their greatest physiological function” (Miescher, 1893a). He appreciated that his data were still incomplete, yet wanted to make a first attempt to pull his results together and integrate them into a broader picture to explain fertilization.

At the time, Wilhelm Kühne (1837–1900), among others, was putting forward the idea that spermatozoa are the carriers of specific substances that, through their chemical properties, achieve fertilization (Kühne, 1868). Miescher considered his results on the chemical composition of spermatozoa in this context. While critically weighing the possibility of a chemical substance explaining fertilization, he stated: “if we were to assume at all that a single substance, as an enzyme or in any other way, for instance as a chemical impulse, could be the specific cause of fertilization, we would without a doubt first and foremost have to consider Nuclein. Nuclein-bodies were consistently found to be the main components [of spermatozoa]” (Miescher, 1874).

With hindsight, these statements seem to suggest that Miescher had identified nuclein as the molecule that mediates fertilization—a crucial assumption for following up on its role in heredity. Unfortunately, however, Miescher himself was far from convinced that a molecule (or molecules) was responsible. There are several reasons for his reluctance, but the influence of his uncle, Wilhelm His, was presumably a crucial factor: it was His who had been instrumental in fostering the young Miescher's interest in biochemistry, and he remained a strong influence throughout Miescher's life. Indeed, when Miescher came tantalizingly close to uncovering the function of DNA, His's views proved counterproductive, probably preventing him from interpreting his findings in the context of new results from other scientists at the time. Miescher thus failed to take his studies of nuclein and its function in fertilization and heredity to the next level, which might well have resulted in recognizing DNA as the central molecule in both processes.

One specific aspect that diverted Miescher from contemplating the role of nuclein in fertilization was a previous study in which he had erroneously identified the yolk platelets in chicken oocytes as a large number of nuclein-containing granules (Miescher, 1871b).
This led him to conclude that the comparatively minimal quantitative contribution of DNA from a spermatozoon to an oocyte, which already contained so much more of the substance, could not have a significant impact on the latter's physiology. He therefore concluded that, “not in a specific substance can the mystery of fertilization be concealed. […] Not a part, but the whole must act through the cooperation of all its parts” (Miescher, 1874).

It is all the more unfortunate that Miescher had identified the yolk platelets in oocytes as nuclein-containing cells, because he had realized that the presumed nuclein in these granules differed from the nuclein (that is, DNA) he had isolated previously from other sources, notably by its much higher phosphorus content. But, influenced by His's strong view that these structures were genuine cells, Miescher saw his results in this light. Only several years later, based on the results of his contemporaries Flemming and Eduard A. Strasburger (1844–1912) on the morphological properties of nuclei and their behaviour during cell division, and on Albrecht Kossel's (1853–1927) discoveries about the composition of DNA (Portugal & Cohen, 1977), did Miescher revise his initial assumption that chicken oocytes contain a large number of nuclein-containing granules. Instead, he finally conceded that the molecules comprising these granules were different from nuclein (Miescher, 1890).

Another factor that prevented Miescher from concluding that nuclein was the basis for the transmission of hereditary traits was that he could not conceive of how a single substance might explain the multitude of heritable traits. How, he wondered, could a specific molecule be responsible for the differences between species, races and individuals? He granted only that “differences in the chemical constitution of these molecules [different types of nuclein] will occur, but only to a limited extent” (Miescher, 1874).

And thus, instead of looking to molecules, he, like his uncle His, favoured the idea that the physical movement of the sperm cells or an activation of the oocyte, which he likened to the stimulation of a muscle by neuronal impulses, was responsible for the process of fertilization: “Like the muscle during the activation of its nerve, the oocyte will, when it receives appropriate impulses, become a chemically and physically very different entity” (Miescher, 1874). For nuclein itself, Miescher considered that it might be a source material for other molecules, such as lecithin—one of the few other molecules with a high phosphorus content known at the time (Miescher, 1870, 1871a, 1874). Miescher clearly preferred the idea of nuclein as a repository of material for the cell—mainly phosphorus—rather than as a molecule with a role in encoding the information to synthesize such materials. This idea of large molecules being source material for smaller ones was common at the time and was also contemplated for proteins (Miescher, 1870).

The entire section of Miescher's 1874 paper in which he discusses the physiological role of nuclein reads as though he was deliberately trying to assemble evidence against nuclein being the key molecule in fertilization and heredity.
This disparaging approach towards the molecule that he himself had discovered might also be explained, at least to some extent, by his pronounced tendency to view his own results hypercritically; tellingly, he published only about 15 papers and lectures in a career spanning nearly three decades.

The modern understanding that fertilization is achieved by the fusion of two germ cells only became established in the final quarter of the nineteenth century. Before that time, the almost ubiquitous view was that the sperm cell, through mere contact with the egg, in some way stimulated the oocyte to develop—the physicalist viewpoint. Wilhelm His was a key advocate of this view and firmly rejected the idea that a specific substance might mediate heredity. We can only speculate as to how Miescher would have interpreted his results had he worked in a different intellectual environment at the time, or had he been more independent in the interpretation of his results.

Miescher's refusal to accept nuclein as the key to fertilization and heredity is particularly tragic in view of several studies that appeared in the mid-1870s, focusing the attention of scientists on the nucleus. Leopold Auerbach (1828–1897) demonstrated that fertilized eggs contain two nuclei that move towards each other and fuse before the subsequent development of the embryo (Auerbach, 1874). This observation strongly suggested an important role for the nuclei in fertilization. In a subsequent study, Oskar Hertwig (1849–1922) confirmed that the two nuclei—one from the sperm cell and one from the oocyte—fuse before embryogenesis begins. Furthermore, he observed that all nuclei in the embryo derive from this initial nucleus in the zygote (Hertwig, 1876). With this he established that a single sperm fertilizes the oocyte and that there is a continuous lineage of nuclei from the zygote throughout development. In doing so, he delivered the death blow to the physicalist view of fertilization.

By the mid-1880s, Hertwig and Kölliker had already postulated that the crucial component of the nucleus mediating inheritance was nuclein—an idea that was subsequently accepted by several scientists. Sadly, Miescher remained doubtful until his death in 1895 and thus failed to appreciate the true importance of his discovery. This might have been an overreaction to the claims by others that sperm heads are formed from a homogeneous substance, when Miescher had clearly shown that they also contain other molecules, such as proteins. Moreover, Miescher's erroneous assumption that nuclein occurred only in the outer shell of the sperm head prevented him from realizing that stains for chromatin, which stain the centres of the heads, actually label the region where the nuclein resides; only later did he recognize that virtually the entire sperm head is composed of nuclein and associated protein (Miescher, 1892a; Schmiedeberg & Miescher, 1896).

Unfortunately, not only Miescher, but the entire scientific community would soon lose faith in DNA as the molecule mediating heredity. Miescher's work had established DNA as a crucial component of all cells and inspired others to begin exploring its role in heredity, but with the emergence of the tetranucleotide hypothesis at the beginning of the twentieth century, DNA fell from favour and was replaced by proteins as the prime candidates for this function.
The tetranucleotide hypothesis—which assumed that DNA was composed of identical subunits, each containing all four bases—prevailed until the late 1940s, when Edwin Chargaff (1905–2002) discovered that the different bases in DNA are not present in equimolar amounts (Chargaff et al, 1949, 1951). Just a few years before, in 1944, experiments by Avery and colleagues had demonstrated that DNA was sufficient to transform bacteria (Avery et al, 1944). Then, in 1952, Al Hershey (1908–1997) and Martha Chase (1927–2003) confirmed these findings by observing that viral DNA—but not protein—enters bacteria during infection with the T2 bacteriophage, and that this DNA is also present in new viruses produced by infected bacteria (Hershey & Chase, 1952). Finally, in 1953, X-ray images of DNA allowed Watson and Crick to deduce its structure (Watson & Crick, 1953) and thus enabled us to understand how DNA works. Importantly, these experiments were made possible by advances in bacteriology and virology, as well as by the development of new techniques, such as the radioactive labelling of proteins and nucleic acids, and X-ray crystallography—resources that were beyond the reach of Miescher and his contemporaries.

In later years (Fig 5), Miescher's attention shifted progressively from the role of nuclein in fertilization and heredity to physiological questions, such as those concerning the metabolic changes in the bodies of salmon as they produce massive amounts of germ cells at the expense of muscle tissue. Although he made important and seminal contributions to different areas of physiology, he increasingly neglected his most promising line of research, the function of DNA. Only towards the end of his life did he return to this question and begin to reconsider the issue in a new light, but he achieved no further breakthroughs.

Figure 5: Friedrich Miescher in his later years, when he was Professor of Physiology at the University of Basel. In this capacity he also founded the Vesalianum, the University's Institute for Anatomy and Physiology, which was inaugurated in 1885. This photograph is the frontispiece on the inside cover of a collection of Miescher's publications and some of his letters, edited and published by his uncle Wilhelm His and colleagues after Miescher's death. Underneath the picture is Miescher's signature. © Ralf Dahm.

One area, however, in which he did propose intriguing hypotheses—although without experimental data to support them—was the molecular underpinnings of heredity. Inspired by Darwin's work on fertilization in plants, Miescher postulated, for instance, how information might be encoded in biological molecules. He stated that “the key to sexuality for me lies in stereochemistry”, and expounded his belief that the gemmules of Darwin's theory of pangenesis were likely to be “numerous asymmetric carbon atoms [present in] organic substances” (Miescher, 1892b), and that sexual reproduction might function to correct mistakes in their “stereometric architecture”. As such, Miescher proposed that hereditary information might be encoded in macromolecules, and suggested how mistakes in it could be corrected—which sounds uncannily as though he had predicted what is now known as the complementation of haploid deficiencies by wild-type alleles.
It is particularly tempting to assume that Miescher might have thought this was the case, as Mendel had published his laws of the inheritance of recessive characteristics more than 25 years earlier. However, there is no reference to Mendel's work in the papers, talks or letters that Miescher has left to us.

What we do know is that Miescher set out his view of how hereditary information might be stored in macromolecules: “In the enormous protein molecules […] the many asymmetric carbon atoms allow such a colossal number of stereoisomers that all the abundance and diversity of the transmission of hereditary [traits] may find its expression in them, as the words and terms of all languages do in the 24–30 letters of the alphabet. It is therefore completely superfluous to see the sperm cell or oocyte as a repository of countless chemical substances, each of which should be the carrier of a special hereditary trait (de Vries Pangenesis). The protoplasm and the nucleus, that my studies have shown, do not consist of countless chemical substances, but of very few chemical individuals, which, however, perhaps have a very complex chemical structure” (Miescher, 1892b).

This is a remarkable passage in Miescher's writings. The second half of the nineteenth century saw intense speculation about how heritable characteristics are transmitted between the generations. The consensus view assumed the involvement of tiny particles, which were thought both to shape embryonic development and to mediate inheritance (Mayr, 1982). Miescher contradicted this view. Instead of a multitude of individual particles, each of which might be responsible for a specific trait (or traits), his results had shown that, for instance, the heads of sperm cells are composed of only very few compounds, chiefly DNA and associated proteins.

He elaborated further on his theory of how hereditary information might be stored in large molecules: “Continuity does not only lie in the form, it also lies deeper than the chemical molecule. It lies in the constituent groups of atoms. In this sense I am an adherent of a chemical model of inheritance à outrance [to the utmost]” (Miescher, 1893b). With this statement Miescher firmly rejected any idea of preformation or of some morphological continuity transmitted through the germ cells. Instead, he clearly seems to have foreseen what would only become known much later: that the basis of heredity is to be found in the chemical composition of molecules.

To explain how this could be achieved, he proposed a model of how information could be encoded in a macromolecule: “If, as is easily possible, a protein molecule comprises 40 asymmetric carbon atoms, there will be 2⁴⁰, that is, approximately a trillion isomerisms [sic]. And this is only one of the possible types of isomerism [not considering other atoms, such as nitrogen]. To achieve the incalculable diversity demanded by the theory of heredity, my theory is better suited than any other. All manner of transitions are conceivable, from the imperceptible to the largest differences” (Miescher, 1893b).
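Miescher's arithmetic can be made explicit. As a worked restatement of his quotation (the formula is mine, not his, and assumes that each asymmetric carbon centre independently adopts one of two configurations), n such atoms yield

\[
N(n) = 2^{n}, \qquad N(40) = 2^{40} = 1\,099\,511\,627\,776 \approx 1.1 \times 10^{12},
\]

that is, roughly the ‘trillion isomerisms' of his estimate.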
Miescher's ideas about how heritable characteristics might be transmitted and encoded encapsulate several important concepts that have since been proven correct. First, he believed that sexual reproduction served to correct mistakes, or mutations as we call them today. Second, he postulated that the transmission of heritable traits occurs through one or a few macromolecules with complex chemical compositions that encode the information, rather than through numerous individual molecules, each encoding a single trait, as was thought at the time. Third, he foresaw that information is encoded in these molecules through a simple code that yields a staggeringly large number of possible heritable traits and can thus explain the diversity of species and individuals observed.

It is a step too far to suggest that Miescher understood what DNA or other macromolecules do, or how hereditary information is stored. He simply could not have done, given the context of his time. His findings and hypotheses, which today fit nicely together and often seem to anticipate our modern understanding, probably appeared rather disjointed to Miescher and his contemporaries. In his day, too many facts were still in doubt and too many links tenuous. There is always a danger of over-interpreting speculations and hypotheses made long ago in today's light. However, although Miescher himself misinterpreted some of his findings, large parts of his conclusions came astonishingly close to what we now know to be true. Moreover, his work influenced others to pursue their own investigations into DNA and its function (Dahm, 2008). Although DNA research fell out of fashion for several decades after the end of the nineteenth century, the information gleaned by Miescher and his contemporaries formed the foundation for the decisive experiments carried out in the middle of the twentieth century, which unambiguously established the function of DNA.

As such, perhaps the most tragic aspects of Miescher's career were his firm belief, for most of his life, in the physicalist theories of fertilization propounded by His and Ludwig, among others, and his reluctance to combine the results of his rigorous chemical analyses with the ‘softer' data generated by cytologists and histologists. Had he made the link between nuclein and chromosomes and accepted its key role in fertilization and heredity, he might have realized that the molecule he had discovered was the key to some of the greatest mysteries of life. As it was, he died with the feeling of a promising career unfulfilled (His, 1897), when, in actual fact, his contributions were to outshine those of most of his contemporaries.

It is tantalizing to speculate about the path that Miescher's investigations—and biology as a whole—might have taken under slightly different circumstances. What would have happened had he followed up on his preliminary results about the role of DNA in different physiological conditions, such as cell proliferation? How would his theories about fertilization and heredity have changed had he not been misled by the mistaken identification of what appeared to him to be a multitude of small nuclei in the oocyte? And how would he have interpreted his findings concerning nuclein had he not been influenced by the views of his uncle and of the wider scientific establishment?

There is a more general lesson in the life and work of Friedrich Miescher that goes beyond his immediate successes and failures.
His story is that of a brilliant researcher who developed innovative experimental approaches, chose the right biological systems to address his questions and made ground-breaking discoveries, yet who was nonetheless constrained by his intellectual environment and thus prevented from interpreting his findings objectively. It therefore fell to others, who saw his work from a new point of view, to make the crucial inferences and thus establish the function of DNA.

Ralf Dahm

19.
Geoffrey Miller 《EMBO reports》2012,13(10):880-884
Runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals.

Sex and marketing have been coupled for a very long time. At the cultural level, their relationship has been appreciated since the 1960s ‘Mad Men' era, when the sexual revolution coincided with the golden age of advertising and marketers realized that ‘sex sells'. At the biological level, their interplay goes much further back, to the Cambrian explosion around 530 million years ago. During this period of rapid evolutionary expansion, multicellular organisms began to evolve elaborate sexual ornaments to advertise their genetic quality to the most important consumers of all in the great mating market of life: the opposite sex.

Maintaining the genetic quality of one's offspring had already been a problem for billions of years. Ever since life originated around 3.7 billion years ago, RNA and DNA have been under selection to copy themselves as accurately as possible [1]. Yet perfect self-replication is biochemically impossible, and almost all replication errors are harmful rather than helpful [2]. Thus, mutations have been eroding the genomic stability of single-celled organisms for trillions of generations, and countless lineages of asexual organisms have suffered extinction through mutational meltdown—the runaway accumulation of copying errors [3]. Only through wildly profligate self-cloning could such organisms have any hope of leaving at least a few offspring with no new harmful mutations that could survive and reproduce.

Around 1.5 billion years ago, bacteria evolved the most basic form of sex to minimize mutation load: bacterial conjugation [4]. By swapping bits of DNA across the pilus (a tiny intercellular bridge), a bacterium can replace DNA sequences compromised by copying errors with intact sequences from its peers. Bacteria finally had some defence against mutational meltdown, and they thrived and diversified.

Then, with the evolution of genuine sexual reproduction through meiosis, perhaps around 1.2 billion years ago, eukaryotes made a great advance in their ability to purge mutations. By combining their genes with a mate's genes, they could produce progeny with huge genetic variety—and, crucially, with a wider range of mutation loads [5]. The unlucky offspring who happened to inherit an above-average number of harmful mutations from both parents would die young without reproducing, taking many mutations into oblivion with them. The lucky offspring who happened to inherit a below-average number of mutations from both parents would live long, prosper and produce offspring of higher genetic quality. Sexual recombination also made it easier to spread and combine the rare mutations that happened to be useful, opening the way for much faster evolutionary advances [6]. Sex became the foundation of almost all complex life because it was so good at both short-term damage limitation (purging bad mutations) and long-term innovation (spreading good mutations).
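The purging argument is statistical: recombination leaves the average mutation load of offspring unchanged but widens its spread, and selection then culls the loaded tail. A minimal simulation sketch of this idea (my own illustration, with invented parameter values, not taken from the article or its references):

    import random

    # A clone inherits its parent's full mutation load. A sexual offspring
    # inherits each of its two parents' mutations independently with
    # probability 1/2, so its load varies around the same mean.

    PARENT_LOAD = 20      # deleterious mutations per parent (invented value)
    N_OFFSPRING = 100_000

    random.seed(1)

    def sexual_load(load_a, load_b):
        # Each parental mutation is transmitted or lost with probability 1/2.
        return sum(random.random() < 0.5 for _ in range(load_a + load_b))

    clones = [PARENT_LOAD] * N_OFFSPRING
    recombinants = [sexual_load(PARENT_LOAD, PARENT_LOAD) for _ in range(N_OFFSPRING)]

    def mean_var(xs):
        m = sum(xs) / len(xs)
        return m, sum((x - m) ** 2 for x in xs) / len(xs)

    print("clones       (mean, var):", mean_var(clones))        # (20.0, 0.0)
    print("recombinants (mean, var):", mean_var(recombinants))  # roughly (20, 10)

    # If selection now removes the most heavily loaded half of the brood,
    # the surviving recombinants carry fewer mutations on average than any
    # clone ever could: the purging effect of recombination.
    survivors = sorted(recombinants)[: N_OFFSPRING // 2]
    print("surviving recombinants, mean load:", mean_var(survivors)[0])  # below 20

The same mean with a wider spread is exactly what gives selection something to work on, which is the advantage of meiotic sex described above.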
Yet single-celled organisms always had a problem with sex: they were not very good at choosing sexual partners with the best genes, that is, the lowest mutation loads. Given bacterial capabilities for chemical communication, such as quorum-sensing [7], perhaps some prokaryotes and eukaryotes paid attention to short-range chemical cues of genetic quality before swapping genes. However, mating was mainly random before the evolution of longer-range senses and nervous systems.

All of this changed profoundly with the Cambrian explosion, which saw organisms undergoing a genetic revolution that increased the complexity of gene regulatory networks, and a morphological revolution that increased the diversity of multicellular body plans. It was also a neurological and psychological revolution. As organisms became increasingly mobile, they evolved senses such as vision [8] and more complex nervous systems [9] to find food and evade predators. However, these new senses also empowered a sexual revolution, as they gave animals new tools for choosing sexual partners. Rather than hooking up randomly with the nearest mate, animals could now select mates based on visible cues of genetic quality such as body size, energy level, bright coloration and behavioural competence. By choosing the highest-quality mates, they could produce higher-quality offspring with lower mutation loads [10]. Such mate choice imposed selection on all of those quality cues to become larger, brighter and more conspicuous, amplifying them into true sexual ornaments: biological luxury goods, such as the guppy's tail and the peacock's train, that function mainly to impress and attract females [11]. These sexual ornaments evolved to have a complex genetic architecture, to capture a larger share of the genetic variation across individuals and to reveal mutation load more accurately [12].

Ever since the Cambrian, the mating market for sexually reproducing animal species has been transformed to some degree into a consumerist fantasy world of conspicuous quality, status, fashion, beauty and romance. Individuals advertise their genetic quality and phenotypic condition through reliable, hard-to-fake signals or ‘fitness indicators', such as pheromones, songs, ornaments and foreplay. Mates are chosen on the basis of who displays the largest, costliest, most precise, most popular and most salient fitness indicators. Mate choice for fitness indicators is not restricted to females choosing males, but often occurs in both sexes [13], especially in socially monogamous species with mutual mate choice, such as humans [14].

Thus, for 500 million years, animals have had to straddle two worlds in perpetual tension: natural selection and sexual selection. Each type of selection works through different evolutionary principles and dynamics, and each yields different types of adaptation and biodiversity. Neither fully dominates the other, because sexual attractiveness without survival is a short-lived vanity, whereas ecological competence without reproduction is a long-lived sterility. Natural selection shapes species to fit their geographical habitats and ecological niches, and favours efficiency in growth, foraging, parasite resistance, predator evasion and social competition. Sexual selection shapes each sex to fit the needs, desires and whims of the other sex, and favours conspicuous extravagance in all sorts of fitness indicators. Animal life walks a fine line between efficiency and opulence. More than 130,000 plant species also play the sexual ornamentation game, having evolved flowers to attract pollinators [15].

The sexual selection world challenges the popular misconception that evolution is blind and dumb.
In fact, as Darwin emphasized, sexual selection is often perceptive and clever, because animal senses and brains mediate mate choice. This makes sexual selection closer in spirit to artificial selection, which is governed by the senses and brains of human breeders. In so far as sexual selection shaped human bodies, minds and morals, we were also shaped by intelligent designers—who just happened to be romantic hominids rather than fictional gods [16].

Thus, mate choice for genetic quality is analogous in many ways to consumer choice for brand quality [17]. Mate choice and consumer choice are both semi-conscious—partly instinctive, partly learned through trial and error, and partly influenced by observing the choices made by others. Both are partly focused on the objective qualities and useful features of the available options, and partly on their arbitrary, aesthetic and fashionable aspects. Both create the demand that suppliers try to understand and fulfil, with each sex striving to learn the mating preferences of the other, and marketers striving to understand consumer preferences through surveys, focus groups and social-media data mining.

Mate choice and consumer choice can both yield absurdly wasteful outcomes: a huge diversity of useless, superficial variations in the biodiversity of species and in the economic diversity of brands, products and packaging. Most biodiversity seems to be driven by sexual selection favouring whimsical differences across populations in the arbitrary details of fitness indicators, not just by naturally selected adaptation to different ecological niches [18]. The result is that, within each genus, a species can be most easily identified by its distinct mating calls, sexual ornaments, courtship behaviours and genital morphologies [19], not by different foraging tactics or anti-predator defences. Similarly, much of the diversity in consumer products—such as shirts, cars, colleges or mutual funds—is at the level of arbitrary design details, branding, packaging and advertising, not at the level of objective product features and functionality.

These analogies between sex and marketing run deep, because both depend on reliable signals of quality. Until recently, two traditions of signalling theory developed independently in the biological and social sciences. The first landmark in biological signalling theory was Charles Darwin's analysis of mate choice for sexual ornaments as cues of good fitness and fertility in his book The Descent of Man, and Selection in Relation to Sex (1871). Ronald Fisher analysed the evolution of mate preferences for fitness indicators in 1915 [20]. In 1975, Amotz Zahavi proposed the ‘handicap principle', arguing that only costly signals could be reliable, hard-to-fake indicators of genetic quality or phenotypic condition [21]. In 1978, Richard Dawkins and John Krebs applied game theory to analyse the reliability of animal signals and the co-evolution of signallers and receivers [22]. In 1990, Alan Grafen eventually proposed a formal model of the ‘handicap principle' [23], and Richard Michod and Oren Hasson analysed ‘reliable indicators of fitness' [24].
Since then, biological signalling theory has flourished and has informed research on sexual selection, animal communication and social behaviour.

The parallel tradition of signalling theory in the social sciences and philosophy goes back to Aristotle (ca 350–322 BC), who argued that ethical and rational acts are reliable signals of underlying moral and cognitive virtues. Friedrich Nietzsche analysed beauty, creativity, morality and even cognition as expressions of biological vigour by using signalling logic (1872–1888). Thorstein Veblen proposed that conspicuous luxuries, quality workmanship and educational credentials act as reliable signals of wealth, effort and taste in The Theory of the Leisure Class (1899), The Instinct of Workmanship (1914) and The Higher Learning in America (1922). Vance Packard used signalling logic to analyse social class, runaway consumerism and corporate careerism in The Status Seekers (1959), The Waste Makers (1960) and The Pyramid Climbers (1962), and Ernst Gombrich analysed beauty in art as a reliable signal of the artist's skill and effort in Art and Illusion (1977) and A Sense of Order (1979). Michael Spence developed formal models of educational credentials as reliable signals of capability and conscientiousness in Market Signalling (1974). Robert Frank used signalling logic to analyse job titles, emotions, career ambitions and consumer luxuries in Choosing the Right Pond (1985), Passions within Reason (1988), The Winner-Take-All Society (1995) and Luxury Fever (2000).

Evolutionary psychology and evolutionary anthropology have been integrating these two traditions to better understand many puzzles in human evolution that defy explanation in terms of natural selection for survival. For example, signalling theory has illuminated the origins and functions of facial beauty, female breasts and buttocks, body ornamentation, clothing, big-game hunting, hand-axes, art, music, humour, poetry, story-telling, courtship gifts, charity, moral virtues, leadership, status-seeking, risk-taking, sports, religion, political ideologies, personality traits, adaptive self-deception and consumer behaviour [16,17,25,26,27,28,29].

Building on signalling theory and sexual selection theory, the new science of evolutionary consumer psychology [30] has been making big advances in understanding consumer goods as reliable signals—not just signals of monetary wealth and elite taste, but signals of deeper traits such as intelligence, moral virtues, mating strategies and the ‘Big Five' personality traits: openness, conscientiousness, agreeableness, extraversion and emotional stability [17]. These individual traits are deeper than wealth and taste in several ways: they are found in the other great apes, are heritable across generations, are stable across life, are important in all cultures and are naturally salient when interacting with mates, friends and kin [17,27,31]. For example, consumers seek elite university degrees as signals of intelligence; they buy organic fair-trade foods as signals of agreeableness; and they value foreign travel and avant-garde culture as signals of openness [17].
New molecular-genetics research suggests that mutation load accounts for much of the heritable variation in human intelligence [32] and personality [33], so consumerist signals of these traits might be revealing genetic quality indirectly. If so, conspicuous consumption can be seen as just another ‘good-genes indicator' favoured by mate choice.

Indeed, studies suggest that much conspicuous consumption, especially by young single people, functions as a form of mating effort. After men and women think about potential dates with attractive mates, men say they would spend more money on conspicuous luxury goods such as prestige watches, whereas women say they would spend more time doing conspicuous charity activities such as volunteering at a children's hospital [34]. Conspicuous consumption by males reveals that they are pursuing a short-term mating strategy [35], and this activity is most attractive to women at peak fertility near ovulation [36]. Men give much higher tips to lap dancers who are ovulating [37]. Ovulating women choose sexier and more revealing clothes, shoes and fashion accessories [38]. Men living in towns with a scarcity of women compete harder to acquire luxuries and accumulate more consumer debt [39]. Romantic gift-giving is an important tactic in human courtship and mate retention, especially for men, who might be signalling commitment [40]. Green consumerism—preferring eco-friendly products—is an effective form of conspicuous conservation, signalling both status and altruism [41].

Findings such as these challenge traditional assumptions in economics. For example, ever since the Marginal Revolution—the development of economic theory during the 1870s—mainstream economics has made the ‘Rational Man' assumption that consumers maximize their expected utility from their product choices, without reference to what other consumers are doing or desiring. This assumption was convenient both analytically, as it allowed easier mathematical modelling of markets and price equilibria, and ideologically, in legitimizing free markets and luxury goods. However, new research from evolutionary consumer psychology and behavioural economics shows that consumers often desire ‘positional goods', such as prestige-branded luxuries, that signal social position and status through their relative cost, exclusivity and rarity. Positional goods create ‘positional externalities'—the harmful social side effects of runaway status-seeking and consumption arms races [42].

These positional externalities are important because they undermine the most important theoretical justification for free markets—the first fundamental theorem of welfare economics, a formalization of Adam Smith's ‘invisible hand' argument, which says that competitive markets always lead to efficient distributions of resources. In the 1930s, the British Marxist biologists Julian Huxley and J.B.S. Haldane were already wary of such rationales for capitalism, and understood that runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals [16].
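Why positional goods can leave everyone worse off is easiest to see in a stylized two-player status game (my own illustration, with invented payoff numbers, not taken from the article or from [42]): status here depends only on relative spending, so mutual escalation burns resources without changing anyone's rank.

    # Stylized status game: payoff = relative-status benefit minus the
    # cost of conspicuous spending. All numbers are invented.

    COST = 3      # cost of buying the positional luxury
    STATUS = 5    # value of out-ranking, or being out-ranked by, the rival

    def payoff(me_buys, other_buys):
        if me_buys == other_buys:
            rank = 0              # same spending, same rank
        elif me_buys:
            rank = STATUS         # I out-rank the non-buyer
        else:
            rank = -STATUS        # I am out-ranked
        return rank - (COST if me_buys else 0)

    for me in (False, True):
        for other in (False, True):
            print(f"me_buys={me!s:5}  other_buys={other!s:5}  my_payoff={payoff(me, other):+d}")

    # Buying is the dominant strategy (+2 beats 0 against an abstainer,
    # -3 beats -5 against a buyer), yet mutual buying yields (-3, -3),
    # worse for both than mutual restraint (0, 0): a positional externality.

Under these invented numbers, buying is each player's best reply whatever the rival does, so the race settles at the mutually inferior outcome; this is the formal sense in which status arms races waste resources.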
Evidence shows that consumerist status-seeking leads to economic inefficiencies and costs to human welfare [42]. Runaway consumerism might be one predictable result of a human nature shaped by sexual selection, but we can display desirable traits in many other ways: through green consumerism, conspicuous charity, ethical investment and social media such as Facebook [17,43].

Future work in evolutionary consumer psychology should give further insights into the links between sex, mutations, evolution and marketing. These links have been important for at least 500 million years and probably sparked the evolution of human intelligence, language, creativity, beauty, morality and ideology. A better understanding of these links could help us nudge global consumerist capitalism into a more sustainable form that imposes lower costs on the biosphere and yields higher benefits for future generations.

Geoffrey Miller

20.