1.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and the developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem, and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis and several dozen died after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9].

Developing countries are particularly susceptible to substandard and fake medicine.
Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and in any case have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). “People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system,” explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected, combined with relatively low penalties, has turned falsifying medicine into the “perfect crime” [2].

[Figure 1: Women sell smuggled, counterfeit medicine at the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the past two years in Ivory Coast. Issouf Sanogo/AFP Photo/Getty Images.]

There are two main categories of illegitimate drugs. ‘Substandard’ medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified’ medicine is made with clear criminal intent. It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses it into pills [10]. Whilst falsified medicines typically do not contain any active ingredients, substandard medicine might contain subtherapeutic amounts.
This is particularly problematic when it comes to anti-infective drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13]; increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].

Even if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere, where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are all interested in good-quality medicines, the different parties seem to have difficulty agreeing on how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting quality of medicine [14,16].

The concern that intellectual property (IP) interests threaten public health dates back to the ‘Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement’ of the World Trade Organization (WTO), adopted in 1994 to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine.
Although it includes flexibilities, such as the possibility for governments to grant compulsory licences to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].

In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO’s Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multinational pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. “We’re left with decisions being taken based on patents and trademarks that should be taken based on health,” commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA. “The health community is shooting themselves in the foot.”

The conflation of health-care and IP issues is reflected in the unclear use of the term ‘counterfeit’ [2,14]. “Since the 1990s the World Health Organization (WHO) has used the term ‘counterfeit’ in the sense we now use ‘falsified’,” explained Hogerzeil.
“The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit’ got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit’—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.

The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn’t make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to act is tragically ironic, as this stance hampers the growth of its own generic companies such as Ranbaxy, Cipla and Piramal. “I certainly don’t believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because its products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company’s reputation and have a negative impact on its revenues when customers stop buying the product.

The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine.
It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multinational drug companies and the possibility that issues of drug quality were being conflated with attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010; for example, it no longer hosts IMPACT’s secretariat at its headquarters in Geneva [2].

In 2010, the WHO’s member states established a working group to investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper’s authors demand more action and propose a binding legal framework: a treaty. “Until we have stronger public health law, I don’t think that we are going to resolve this problem,” said Bate, who is one of the authors of the paper.

Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice, such as a “voluntary soft law”, that countries can sign to express their will to do better.
“At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic,” commented Hogerzeil, who is also on the IOM committee. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: “The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don’t start negotiating one.”

Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempt to safeguard medicines needs to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act; to counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with the criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies to do better and by improving the quality control of drug regulatory authorities.

Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multinational companies benefit from economies of scale to cope with these demands, but smaller companies often struggle and compromise on quality [2,21]. “India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small,” Hogerzeil explained. “And only by being strict, can you force them to combine and to become bigger industries that can afford good-quality assurance systems.” Clamping down on drug quality would therefore lead to a consolidation of the industry, which is an essential step. “If you look at Europe and the US, there were hundreds of drug companies—now there are dozens.
And if you look at the situation in India and China today, there are thousands and that will have to come down to dozens as well,” Bate explained.

In addition to consolidating the market by applying stricter rules, the IOM has also suggested measures to support companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards. Another suggestion is to harmonize the market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.

Once the medicine leaves the manufacturer, controlling distribution systems becomes another major challenge in combating falsified and substandard medicine. Global drug supply chains have grown increasingly complicated: drugs cross borders, are sold back and forth between wholesalers and distributors, and are often repackaged. Still, there is a main difference between developing and developed countries. In the latter, relatively few companies dominate the market, whereas in poorer nations the distribution system is often fragmented and uncontrolled, with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improving drug quality.
“And we can start in the US,” Hogerzeil commented.

Distribution could be improved at several levels, starting with the import of medicine. “There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in,” Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. “Everybody dealing with medicine has to be licensed,” Hogerzeil said. “And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent that falsified medicines enter the legal supply chain.” The last level would be a track-and-trace system to identify authentic drugs [2]. Every single package of medicine should be identifiable through an individual marker, such as a 3D bar code. Once it is sold, it is ticked off in a central database, so the marker cannot be reused.

According to Hogerzeil, equivalent measures at these different levels should be established in every country. “I don’t believe in double standards,” he said. “Don’t say to Uganda: ‘you can’t do that’. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation.”

Nigeria, for instance, has demonstrated that with enough political will, it is possible to reduce the proliferation of falsified and substandard medicine.
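The track-and-trace idea described above—one unique marker per package, ticked off in a central database at the point of sale so it cannot be reused—can be sketched in a few lines. This is a minimal illustration only; the class, method names and messages are invented for this example and do not correspond to any real serialization standard.

```python
# Minimal sketch of a single-use package-marker registry, assuming a
# central database keyed by a unique per-package code (e.g. printed as
# a 2D/3D bar code). All names here are illustrative.

class MarkerRegistry:
    def __init__(self):
        self._issued = set()    # markers printed on legitimate packages
        self._redeemed = set()  # markers already ticked off at sale

    def issue(self, marker: str) -> None:
        """Register a marker when the manufacturer prints it."""
        self._issued.add(marker)

    def verify_and_redeem(self, marker: str) -> str:
        """Check a package at the point of sale and tick it off."""
        if marker not in self._issued:
            return "unknown marker: possible falsified product"
        if marker in self._redeemed:
            return "marker already used: possible duplicate package"
        self._redeemed.add(marker)  # single use: the marker cannot be reused
        return "authentic"

registry = MarkerRegistry()
registry.issue("PKG-0001")
print(registry.verify_and_redeem("PKG-0001"))  # authentic
print(registry.verify_and_redeem("PKG-0001"))  # marker already used: possible duplicate package
print(registry.verify_and_redeem("PKG-9999"))  # unknown marker: possible falsified product
```

The key property is that verification and redemption happen in one step against a shared database, which is what closes the door on cloned markers: a copied code fails the second time it is scanned.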
Nigeria had been a major source of falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili’s successor, is committed to continuing her work [10]. Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized case, the former head of China’s State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China’s fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see quality of medicine as a priority. But they should, and affluent countries should help—not only because health is a human right, but also for economic reasons. A great deal of time and money is invested in testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients.
Falsified and substandard medicines are a financial burden on health systems, and the emergence of drug-resistant pathogens might render invaluable medications useless. Investing in the safety of medicine is therefore both a humane and an economic imperative.

2.
3.
The public view of life-extension technologies is more nuanced than expected, and researchers must engage in discussions if they hope to promote awareness and acceptance.

There is increasing research and commercial interest in the development of novel interventions that might be able to extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for its portfolio of drugs targeting ‘diseases of ageing’. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be “the cure for all that ails” (Hasty, 2009), or that it is an “anti-aging drug [that] could be used today” (Blagosklonny, 2007).

Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether they would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions have given some rather surprising results, contrary to the expectations of many researchers in the field.
They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.

Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge 2009a, b, 2010; Underwood et al, 2009).

In a community survey of public attitudes towards HLE, we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill, and around one in ten were undecided.

Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an “irrational public predisposition” to think that increased lifespans will only lead to an elongation of infirmity.
He has called this “gerontologiphobia”—a shared feeling among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a “public menace” (Miller, 2002).

We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual’s healthy lifespan. Among the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illness and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).

It would be unwise to label such concerns as irrational when it might be that advocates of HLE have failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public sees the claims that have been made about HLE as ‘too good to be true’.

Results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as “far-fetched”, or feel no responsibility “to respond” (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public is sceptical of their claims.

Scientists are not always clear about the outcomes of their work, and biogerontologists are no exception.
Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is “sparse” and urges more research on the topic (Miller, 2009). In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.

The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused to discover that members of the Caloric Restriction Society experienced a loss of libido and a loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many people, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010). If researchers do not discuss these possible effects, then a curious public might draw its own conclusions.

Some HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects.
For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is “a non-toxic, well tolerated drug that is suitable for everyday oral administration”, with its major “side-effects” being anti-tumour, bone-protecting and caloric-restriction-mimicking effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.

Aubrey de Grey has called on several occasions for scientists to provide more optimistic timescales for HLE. He claims that public opposition to interventions in ageing is based on “extraordinarily transparently flawed opinions” that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and concerns about extending infirmity, injustice or social harms are simply excuses to justify people’s belief that ageing is ‘not so bad’ (de Grey, 2007). He argues that this “pro-ageing trance” can only be broken by persuading the public that HLE technologies are just around the corner.

Contrary to de Grey’s expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010).

Juengst et al (2003) have rightly pointed out that any intervention that slows ageing and substantially increases human longevity might generate more social, economic, political, legal, ethical and public health issues than any other technological advance in biomedicine.
Our survey supports this idea: the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007).

When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges that arise from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause “profound changes” and a “firestorm of controversy”. Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause “mayhem” and “absolute pandemonium”. If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as “irrational”, “inane” or “breathtakingly stupid” (de Grey, 2004).

The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and the management of its outcomes (Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work. If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to explain their research better to the public and discuss how these concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies.
The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that “listening to public concerns on research and responding appropriately” is a more effective way of fostering support than arrogant dismissal of those concerns (Anon, 2009).

Brad Partridge, Jayne Lucke and Wayne Hall

4.
Martinson BC, EMBO Reports (2011) 12(8): 758–762
Universities have been churning out PhD students to reap the financial and other rewards of training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly, because research is regarded as a motor of economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting maximum value for their investments.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic ‘reproduction rates’ that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders of prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced, or concerns about the unintended consequences that may result from such overproduction.
Yet recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level competition becomes counter-productive. Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct. The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA as it is currently configured supports a feedback system of institutional incentives that generate excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training.
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures. The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to ‘play it safe’ and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are likely able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding.
In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs. My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns. In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments. Early last year, Francis Collins, Director of the NIH, published a PolicyForum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular.
He notes the increasing stress in the biomedical research community after the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.” Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows.
He observes that “only a select few will go on to become independent research scientists in academia”, and argues that “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009). His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work’”. He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.” Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce. Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes.
They would also ignore the fact that universities are generally not confronted with the externalities resulting from overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural’ funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on the interconnected assumptions that, as one of the primary ‘outputs’ or ‘products’ of the university, more doctorally trained individuals are always better than fewer, and because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research. However, it has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007).
Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007). In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). He noted the reliance of AMCs on the relatively low-cost, but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society’s investment in biomedical research and clinical care was continuously and sharply expanding.” From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J. de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963).

In May 2009, echoing some of Korn’s observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control’” (Mervis, 2009).

Although the issue of what I will call the ‘academic birth rate’ is the central concern of this analysis, the ‘academic end-of-life’ also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: “if I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support “the 100 people I’ve trained […] to replace me” (Kaiser, 2008).
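The funding growth rates quoted above (7.8% per year in real terms for 1994–2003 versus 3.4% for 2003–2007) make de Solla Price's point concrete. As a back-of-the-envelope illustration of my own (not from any of the cited sources), the implied doubling time of the research budget more than doubled between the two periods:

```python
import math

def doubling_time(annual_rate):
    """Years for a quantity growing at a compound annual rate to double."""
    return math.log(2) / math.log(1 + annual_rate)

# Inflation-adjusted US biomedical research funding growth (Dorsey et al, 2010):
print(round(doubling_time(0.078), 1))  # 1994-2003 rate: doubles in ~9.2 years
print(round(doubling_time(0.034), 1))  # 2003-2007 rate: doubles in ~20.7 years
```

An enterprise whose incentives assume a budget that doubles every nine years cannot sustain itself when doubling takes two decades; that gap is the plateau Price describes.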
While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective. The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died. Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d'être. The production of knowledge in science, particularly of the ‘revolutionary’ variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task.
In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009). Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the “New Innovator Award”, the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science. Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new ‘innovators’ reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms. Our studies on research integrity have been mostly oriented towards understanding how the influences within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010).
My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness, in terms of scientific innovation and productivity (Charlton, 2009). To the extent that we have an ecology with steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal’ science? Would this not reduce the effectiveness of the institution of biomedical research?
I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research is one way to insure against it.

5.
L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the endless frontier, and it has become the underlying rationale for public support and funding of science. However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned both the development of measures to assess the quality of scientific research itself, and to determine the societal impact of research. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research.
The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so: harder to assess, and under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex’s Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact. The problem actually begins with defining the ‘societal impact of research’. A series of different concepts has been introduced: ‘third-stream activities’ [4], ‘societal benefits’ or ‘societal quality’ [5], ‘usefulness’ [6], ‘public values’ [7], ‘knowledge transfer’ [8] and ‘societal relevance’ [9, 10].
Yet, each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas. In this context, ‘societal benefits’ refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits’ are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits’ add to the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits’ increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11]. Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters’ Web of Science, which enables the calculation of bibliometric values such as the h index [12] or the journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14].
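The h index mentioned above is a good illustration of why scientific-impact metrics are comparatively easy: it has a crisp operational definition (the largest h such that h of a researcher's papers each have at least h citations), which societal-impact measures lack. As a minimal sketch of my own (not part of the original article), it can be computed from a citation list in a few lines:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```

Nothing comparably mechanical exists for societal impact, which is precisely the measurement gap the text describes.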
For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact. A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse or complex and contingent, and it is not clear what should be attributed to research or to other inputs. The third is the internationality problem, which arises as a result of the international nature of R&D and innovation, and makes attribution virtually impossible. Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impact. In addition, there are four other problems. First, it is hard to find experts to assess societal impact that is based on peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17].
Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution’s specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that, “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18]. Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20].
The recommendation from this consultation is that impact should be measured in a quantifiable way, and that expert panels should review narrative evidence in case studies supported by appropriate indicators [19, 21]. Many of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact. The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art’ [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19].
‘Mode 1’ describes research governed by the academic interests of a specific community, whereas ‘Mode 2’ is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is being conducted in the context of application [19]. The new REF will also entail changes in budget allocations: the societal-impact dimension will account for 20% of the evaluation of a research unit for the purpose of allocations [19]. The final REF guidance contains lists of examples for different types of societal impact [24]. Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18]. Yet, the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science whereby scientists receive credit—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact.
“The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar', that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

…research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments

The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19,28,29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance.
Rymer has noted that, “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support of basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.

Ralf Dahm, EMBO Reports (2010) 11(3): 153–160
Friedrich Miescher's attempts to uncover the function of DNA

It might seem as though the role of DNA as the carrier of genetic information was not realized until the mid-1940s, when Oswald Avery (1877–1955) and colleagues demonstrated that DNA could transform bacteria (Avery et al, 1944). Although these experiments provided direct evidence for the function of DNA, the first ideas that it might have an important role in processes such as cell proliferation, fertilization and the transmission of heritable traits had already been put forward more than half a century earlier. Friedrich Miescher (1844–1895; Fig 1), the Swiss scientist who discovered DNA in 1869 (Miescher, 1869a), developed surprisingly insightful theories to explain its function and how biological molecules could encode information. Although his ideas were incorrect from today's point of view, his work contains concepts that come tantalizingly close to our current understanding. But Miescher's career also holds lessons beyond his scientific insights. It is the story of a brilliant scientist well on his way to making one of the most fundamental discoveries in the history of science, who ultimately fell short of his potential because he clung to established theories and failed to follow through with the interpretation of his findings in a new light.

Figure 1. Friedrich Miescher (1844–1895) and his wife, Maria Anna Rüsch.
© Library of the University of Basel, Switzerland.

It is a curious coincidence in the history of genetics that three of the most decisive discoveries in this field occurred within a decade: in 1859, Charles Darwin (1809–1882) published On the Origin of Species by Means of Natural Selection, in which he expounded the mechanism driving the evolution of species; seven years later, Gregor Mendel's (1822–1884) paper describing the basic laws of inheritance appeared; and in early 1869, Miescher discovered DNA. Yet, although the magnitude of Darwin's theory was realized almost immediately, and at least Mendel himself seems to have grasped the importance of his work, Miescher is often viewed as oblivious to the significance of his discovery. It would be another 75 years before Oswald Avery, Colin MacLeod (1909–1972) and Maclyn McCarty (1911–2005) could convincingly show that DNA was the carrier of genetic information, and another decade before James Watson and Francis Crick (1916–2004) unravelled its structure (Watson & Crick, 1953), paving the way to our understanding of how DNA encodes information and how this is translated into proteins. But Miescher already had astonishing insights into the function of DNA.

Between 1868 and 1869, Miescher worked at the University of Tübingen in Germany (Figs 2,3), where he tried to understand the chemical basis of life. A crucial difference in his approach compared with earlier attempts was that he worked with isolated cells—leukocytes that he obtained from pus—and later purified nuclei, rather than whole organs or tissues. The innovative protocols he developed allowed him to investigate the chemical composition of an isolated organelle (Dahm, 2005), which significantly reduced the complexity of his starting material and enabled him to analyse its constituents.

Figure 2. Contemporary view of the town of Tübingen at about the time when Miescher worked there.
The medieval castle housing Hoppe-Seyler's laboratory can be seen atop the hill at the right. © Stadtarchiv Tübingen, Germany.

Figure 3. The former kitchen of Tübingen castle, which formed part of Hoppe-Seyler's laboratory. It was in this room that Miescher worked during his stay in Tübingen and where he discovered DNA. After his return to Basel, Miescher reminisced how this room with its shadowy, vaulted ceiling and its small, deep-set windows appeared to him like the laboratory of a medieval alchemist. Photograph taken by Paul Sinner, Tübingen, in 1879. © University Library Tübingen.

In carefully designed experiments, Miescher discovered DNA—or “Nuclein” as he called it—and showed that it differed from the other classes of biological molecule known at that time (Miescher, 1871a). Most notably, nuclein's elementary composition with its high phosphorus content convinced him that he had discovered a substance sui generis, that is, of its own kind; a conclusion subsequently confirmed by Miescher's mentor in Tübingen, the eminent biochemist Felix Hoppe-Seyler (1825–1895; Hoppe-Seyler, 1871; Miescher, 1871a). After his initial analyses, Miescher was convinced that nuclein was an important molecule and suggested in his first publication that it would “merit to be considered equal to the proteins” (Miescher, 1871a).

Moreover, Miescher recognized immediately that nuclein could be used to define the nucleus (Miescher, 1870). This was an important realization, as at the time the unequivocal identification of nuclei, and hence their study, was often difficult or even impossible to achieve because their morphology, subcellular localization and staining properties differed between tissues, cell types and states of the cells. Instead, Miescher proposed to base the characterization of nuclei on the presence of this molecule (Miescher, 1870, 1874).
Moreover, he held that the nucleus should be defined by properties that are related to its physiological activity, which he believed to be closely linked to nuclein. Miescher had thus made a significant first step towards defining an organelle in terms of its function rather than its appearance.

Importantly, his findings also showed that the nucleus is chemically distinct from the cytoplasm at a time when many scientists still assumed that there was nothing unique about this organelle. Miescher thus paved the way for the subsequent realization that cells are subdivided into compartments with distinct molecular composition and functions. On the basis of his observations that nuclein appeared able to separate itself from the “protoplasm” (cytoplasm), Miescher even went so far as to suggest the “possibility that [nuclein can be] distributed in the protoplasm, which could be the precursor for some of the de novo formations of nuclei” (Miescher, 1874). He seemed to anticipate that the nucleus re-forms around the chromosomes after cell division, but unfortunately did not elaborate on the conditions under which this might occur. It is therefore impossible to know with certainty which circumstances he was referring to.

In this context, it is interesting to note that in 1872, Edmund Russow (1841–1897) observed that chromosomes appeared to dissolve in basic solutions. Intriguingly, Miescher had also found that he could precipitate nuclein by using acids and then return it to solution by increasing the pH (Miescher, 1871a). At the time, however, he did not make the link between nuclein and chromatin. This happened around a decade later, in 1881, when Eduard Zacharias (1852–1911) studied the nature of chromosomes by using some of the same methods Miescher had used when characterizing nuclein.
Zacharias found that chromosomes, like nuclein, were resistant to digestion by pepsin solutions and that the chromatin disappeared when he extracted the pepsin-treated cells with dilute alkaline solutions. This led Walther Flemming (1843–1905) to speculate in 1882 that nuclein and chromatin are identical (Mayr, 1982).

Alas, Miescher was not convinced. His reluctance to accept these developments was at least partly based on a profound scepticism towards the methods—and hence results—of cytologists and histologists, which, according to Miescher, lacked the precision of the chemical approaches he applied. The fact that DNA was crucially linked to the function of the nucleus was, however, firmly established in Miescher's mind, and in the following years he tried to obtain additional evidence. He later wrote: “Above all, using a range of suitable plant and animal specimens, I want to prove that Nuclein really specifically belongs to the life of the nucleus” (Miescher, 1876).

Although the acidic nature of DNA, its large molecular weight, elementary composition and presence in the nucleus are some of its central properties—all first determined by Miescher—they reveal nothing about its function. Having convinced himself that he had discovered a new type of molecule, Miescher rapidly set out to understand its role in different biological contexts. As a first step, he determined that nuclein occurs in a variety of cell types. Unfortunately, he did not elaborate on the types of tissue or the species his samples were derived from. The only hints as to the specimens he worked with come from letters he wrote to his uncle, the Swiss anatomist Wilhelm His (1831–1904), and to his parents; his father, Friedrich Miescher-His (1811–1887), was professor of anatomy in Miescher's native Basel.
In his correspondence, Miescher mentioned other cell types that he had studied for the presence of nuclein, including liver, kidney, yeast cells, erythrocytes and chicken eggs, and hinted at having found nuclein in these as well (Miescher, 1869b; His, 1897). Moreover, Miescher had also planned to look for nuclein in plants, especially in their spores (Miescher, 1869c). This is an intriguing choice given his later fascination with vertebrate germ cells and his speculation on the processes of fertilization and heredity (Miescher, 1871b, 1874).

Another clue to the tissues and cell types that Miescher might have examined comes from two papers published by Hoppe-Seyler, who wanted to confirm his student's results, which he initially viewed with scepticism, before their publication. In the first, another of Hoppe-Seyler's students, Pal Plósz, reported that nuclein is present in the nucleated erythrocytes of snakes and birds but not in the anuclear erythrocytes of cows (Plósz, 1871). In the second paper, Hoppe-Seyler himself confirmed Miescher's findings and reported that he had detected nuclein in yeast cells (Hoppe-Seyler, 1871).

In an addendum to his 1871 paper, published posthumously, Miescher stated that the apparently ubiquitous presence of nuclein meant that “a new factor has been found for the life of the most basic as well as for the most advanced organisms,” thus opening up a wide range of questions for physiology in general (Miescher, 1870). To argue that Miescher understood that DNA was an essential component of all forms of life is probably an over-interpretation of his words. His statement does, however, clearly show that he believed DNA to be an important factor in the life of a wide range of species.

In addition, Miescher looked at tissues under different physiological conditions.
He quickly noticed that both nuclein and nuclei were significantly more abundant in proliferating tissues; for instance, he noted that in plants, large amounts of phosphorus are found predominantly in regions of growth and that these parts show the highest densities of nuclei and actively proliferating cells (Miescher, 1871a). Miescher had thus taken the first step towards linking the presence of phosphorus—that is, DNA in this context—to cell proliferation. Some years later, while examining changes in the bodies of salmon as they migrate upstream to their spawning grounds, he noticed that he could, with minimal effort, purify large amounts of pure nuclein from the testes, as they were at the height of cell proliferation in preparation for mating (Miescher, 1874). This provided additional evidence for a link between proliferation and the presence of a high concentration of nuclein.

Miescher's most insightful comments on this issue, however, date from his time in Hoppe-Seyler's laboratory in Tübingen. He was convinced that histochemical analyses would lead to a much better understanding of certain pathological states than would microscopic studies. He also believed that physiological processes, which at the time were seen as similar, might turn out to be very different if the chemistry were better understood.
As early as 1869, the year in which he discovered nuclein, he wrote in a letter to His: “Based on the relative amounts of nuclear substances [DNA], proteins and secondary degradation products, it would be possible to assess the physiological significance of changes with greater accuracy than is feasible now” (Miescher, 1869c).

Importantly, Miescher proposed three exemplary processes that might benefit from such analyses: “nutritive progression”, characterized by an increase in the cytoplasmic proteins and the enlargement of the cell; “generative progression”, defined as an increase in “nuclear substances” (nuclein) and as a preliminary phase of cell division in proliferating cells and possibly in tumours; and “regression”, an accumulation of lipids and degenerative products (Miescher, 1869c).

When we consider the first two categories, Miescher seems to have understood that an increase in DNA was not only associated with, but also a prerequisite for, cell proliferation. Subsequently, cells that are no longer proliferating would increase in size through the synthesis of proteins and hence cytoplasm. Crucially, he believed that chemical analyses of such different states would enable him to obtain a more fundamental insight into the causes underlying these processes. These are astonishingly prescient insights. Sadly, Miescher never followed up on these ideas and, apart from the thoughts expressed in his letter, never published on the topic.

It is likely, however, that he had preliminary data supporting these views. Miescher was generally careful to base statements on facts rather than speculation. But, being a perfectionist who published only after extensive verification of his results, he presumably never pursued these studies to a point he considered satisfactory.
It is possible that his plans were cut short when he left Hoppe-Seyler's laboratory to receive additional training under the supervision of Carl Ludwig (1816–1895) in Leipzig. While there, Miescher turned his attention to matters entirely unrelated to DNA and only resumed his work on nuclein after returning to his native Basel in 1871.

Crucially for these subsequent studies of nuclein, Miescher made an important choice: he turned to sperm as his main source of DNA. When analysing the sperm of different species, he noted that the spermatozoa, especially those of salmon, have comparatively small tails and thus consist mainly of a nucleus (Miescher, 1874). He immediately grasped that this would greatly facilitate his efforts to isolate DNA at much higher purity (Fig 4). Yet Miescher also saw beyond the possibility of obtaining pure nuclein from salmon sperm. He realized that it also indicated that the nucleus and the nuclein therein might play a crucial role in fertilization and the transmission of heritable traits. In a letter to his colleague Rudolf Boehm (1844–1926) in Würzburg, Miescher wrote: “Ultimately, I expect insights of a more fundamental importance than just for the physiology of sperm” (Miescher, 1871c). It was the beginning of a fascination with the phenomena of fertilization and heredity that would occupy Miescher to the end of his days.

Figure 4. A glass vial containing DNA purified by Friedrich Miescher from salmon sperm. © Alfons Renz, University of Tübingen, Germany.

Miescher had entered this field at a critical time. By the middle of the nineteenth century, the old view that cells arise through spontaneous generation had been challenged. Instead, it was widely recognized that cells always arise from other cells (Mayr, 1982). In particular, the development and function of spermatozoa and oocytes, which in the mid-1800s had been shown to be cells, were seen in a new light.
Moreover, in 1866, three years before Miescher discovered DNA, Ernst Haeckel (1834–1919) had postulated that the nucleus contained the factors that transmit heritable traits. This proposition from one of the most influential scientists of the time brought the nucleus to the centre of attention for many biologists. Having discovered nuclein as a distinctive molecule present exclusively in this organelle, Miescher realized that he was in an excellent position to make a contribution to this field. Thus, he set about trying to better characterize nuclein with the aim of correlating its chemical properties with the morphology and function of cells, especially of sperm cells.

His analyses of the chemical composition of the heads of salmon spermatozoa led Miescher to identify two principal components: in addition to the acidic nuclein, he found an alkaline protein for which he coined the term ‘protamin'. The name is still in use today; protamines are small proteins that replace histones during spermatogenesis. He further determined that these two molecules occur in a “salt-like, not an ether-like [that is, covalent] association” (Miescher, 1874). Following his meticulous analyses of the chemical composition of sperm, he concluded that, “aside from the mentioned substances [protamin and nuclein] nothing is present in significant quantity. As this is crucial for the theory of fertilization, I carry this business out as quantitatively as possible right from the beginning” (Miescher, 1872a). His analyses showed him that the DNA and protamines in sperm occur at constant ratios; a fact that Miescher considered “is certainly of special importance,” without, however, elaborating on what this importance might be.
Today, of course, we know that proteins, such as histones and protamines, bind to DNA in defined stoichiometric ratios.

Miescher went on to analyse the spermatozoa of carp, frogs (Rana esculenta) and bulls, in which he confirmed the presence of large amounts of nuclein (Miescher, 1874). Importantly, he could show that nuclein is present only in the heads of sperm—the tails being composed largely of lipids and proteins—and that within the head, the nuclein is located in the nucleus (Miescher, 1874; Schmiedeberg & Miescher, 1896). With this discovery, Miescher had not only demonstrated that DNA is a constant component of spermatozoa, but also directed his attention to the sperm heads. On the basis of the observations of other investigators, such as those of Albert von Kölliker (1817–1905) concerning the morphology of spermatozoa in some myriapods and arachnids, Miescher knew that the spermatozoa of some species are aflagellate, that is, they lack a tail. This confirmed that the sperm head, and thus the nucleus, was the crucial component. But the question remained: what in the sperm cells mediated fertilization and the transmission of hereditary traits from one generation to the next?

On the basis of his chemical analyses of sperm, Miescher speculated on the existence of molecules that have a crucial part in these processes. In a letter to Boehm, Miescher wrote: “If chemicals do play a role in procreation at all, then the decisive factor is now a known substance” (Miescher, 1872b). But Miescher was unsure as to what this substance might be. He did, however, strongly suspect that the combination of nuclein and protamin was the key and that the oocyte might lack a crucial component needed for it to develop: “If now the decisive difference between the oocyte and an ordinary cell would be that from the roster of factors, which account for an active arrangement, an element has been removed?
For otherwise all proper cellular substances are present in the egg,” he later wrote (Miescher, 1872b).

Owing to his inability to detect protamin in the oocyte, Miescher initially favoured this molecule as the one responsible for fertilization. Later, however, when he failed to detect protamin in the sperm of other species, such as bulls, he changed his mind: “The Nuclein by contrast has proved to be constant [that is, present in the sperm cells of all species Miescher analysed] so far; to it and its associations I will direct my interest from now on” (Miescher, 1872b). Unfortunately, however, although he came tantalizingly close, he never made a clear link between nuclein and heredity.

The final section of his 1874 paper on the occurrence and properties of nuclein in the spermatozoa of different vertebrate species is of particular interest because Miescher tried to correlate his chemical findings about nuclein with the physiological role of spermatozoa. He had realized that spermatozoa represented an ideal model system to study the role of DNA because, as he would later put it, “[f]or the actual chemical–biological problems, the great advantage of sperm [cells] is that everything is reduced to the really active substances and that they are caught just at the moment when they exert their greatest physiological function” (Miescher, 1893a). He appreciated that his data were still incomplete, yet wanted to make a first attempt to pull his results together and integrate them into a broader picture to explain fertilization.

At the time, Wilhelm Kühne (1837–1900), among others, was putting forward the idea that spermatozoa are the carriers of specific substances that, through their chemical properties, achieve fertilization (Kühne, 1868). Miescher considered his results on the chemical composition of spermatozoa in this context.
While critically considering the possibility of a chemical substance explaining fertilization, he stated that: “if we were to assume at all that a single substance, as an enzyme or in any other way, for instance as a chemical impulse, could be the specific cause of fertilization, we would without a doubt first and foremost have to consider Nuclein. Nuclein-bodies were consistently found to be the main components [of spermatozoa]” (Miescher, 1874).

With hindsight, these statements seem to suggest that Miescher had identified nuclein as the molecule that mediates fertilization—a crucial assumption to follow up on its role in heredity. Unfortunately, however, Miescher himself was far from convinced that a molecule (or molecules) was responsible for this. There are several reasons for his reluctance, although the influence of his uncle was presumably a crucial factor, as it was he who had been instrumental in fostering the young Miescher's interest in biochemistry and he remained a strong influence throughout Miescher's life. Indeed, when Miescher came tantalizingly close to uncovering the function of DNA, His's views proved counterproductive, probably preventing him from interpreting his findings in the context of new results from other scientists at the time. Miescher thus failed to take his studies of nuclein and its function in fertilization and heredity to the next level, which might well have resulted in recognizing DNA as the central molecule in both processes.

One specific aspect that diverted Miescher from contemplating the role of nuclein in fertilization was a previous study in which he had erroneously identified the yolk platelets in chicken oocytes as a large number of nuclein-containing granules (Miescher, 1871b). This led him to conclude that the comparatively minimal quantitative contribution of DNA from a spermatozoon to an oocyte, which already contained so much more of the substance, could not have a significant impact on the latter's physiology.
He therefore concluded that, “not in a specific substance can the mystery of fertilization be concealed. […] Not a part, but the whole must act through the cooperation of all its parts” (Miescher, 1874).

It is all the more unfortunate that Miescher had identified the yolk platelets in oocytes as nuclein-containing cells, because he had realized that the presumed nuclein in these granules differed from the nuclein (that is, DNA) he had isolated previously from other sources, notably by its much higher phosphorus content. But influenced by His's strong view that these structures were genuine cells, Miescher saw his results in this light. Only several years later—based on results from his contemporaries Flemming and Eduard A. Strasburger (1844–1912) on the morphological properties of nuclei and their behaviour during cell divisions, and on Albrecht Kossel's (1853–1927) discoveries about the composition of DNA (Portugal & Cohen, 1977)—did Miescher revise his initial assumption that chicken oocytes contain a large number of nuclein-containing granules. Instead, he finally conceded that the molecules comprising these granules were different from nuclein (Miescher, 1890).

Another factor that prevented Miescher from concluding that nuclein was the basis for the transmission of hereditary traits was that he could not conceive of how a single substance might explain the multitude of heritable traits. How, he wondered, could a specific molecule be responsible for the differences between species, races and individuals?
He granted only that “differences in the chemical constitution of these molecules [different types of nuclein] will occur, but only to a limited extent” (Miescher, 1874).

And thus, instead of looking to molecules, he—like his uncle His—favoured the idea that the physical movement of the sperm cells or an activation of the oocyte, which he likened to the stimulation of a muscle by neuronal impulses, was responsible for the process of fertilization: “Like the muscle during the activation of its nerve, the oocyte will, when it receives appropriate impulses, become a chemically and physically very different entity” (Miescher, 1874). For nuclein itself, Miescher considered that it might be a source material for other molecules, such as lecithin—one of the few other molecules with a high phosphorus content known at the time (Miescher, 1870, 1871a, 1874). Miescher clearly preferred the idea of nuclein as a repository of material for the cell—mainly phosphorus—rather than as a molecule with a role in encoding the information to synthesize such materials. This idea of large molecules serving as source material for smaller ones was common at the time and was also contemplated for proteins (Miescher, 1870).

The entire section of Miescher's 1874 paper in which he discusses the physiological role of nuclein reads as though he was deliberately trying to assemble evidence against nuclein being the key molecule in fertilization and heredity. This disparaging approach towards the molecule that he himself had discovered might also be explained, at least to some extent, by his pronounced tendency to view his results critically; tellingly, he published only about 15 papers and lectures in a career spanning nearly three decades.

The modern understanding that fertilization is achieved by the fusion of two germ cells only became established in the final quarter of the nineteenth century.
Before that time, the almost ubiquitous view was that the sperm cell, through mere contact with the egg, in some way stimulated the oocyte to develop—the physicalist viewpoint. His was a key advocate of this view and firmly rejected the idea that a specific substance might mediate heredity. We can only speculate as to how Miescher would have interpreted his results had he worked in a different intellectual environment at the time, or had he been more independent in the interpretation of his results.

Miescher's refusal to accept nuclein as the key to fertilization and heredity is particularly tragic in view of several studies that appeared in the mid-1870s, focusing the attention of scientists on the nucleus. Leopold Auerbach (1828–1897) demonstrated that fertilized eggs contain two nuclei that move towards each other and fuse before the subsequent development of the embryo (Auerbach, 1874). This observation strongly suggested an important role for the nuclei in fertilization. In a subsequent study, Oskar Hertwig (1849–1922) confirmed that the two nuclei—one from the sperm cell and one from the oocyte—fuse before embryogenesis begins. Furthermore, he observed that all nuclei in the embryo derive from this initial nucleus in the zygote (Hertwig, 1876). With this he had established that a single sperm fertilizes the oocyte and that there is a continuous lineage of nuclei from the zygote throughout development. In doing so, he delivered the death blow to the physicalist view of fertilization.

By the mid-1880s, Hertwig and Kölliker had already postulated that the crucial component of the nucleus that mediated inheritance was nuclein—an idea that was subsequently accepted by several scientists. Sadly, Miescher remained doubtful until his death in 1895 and thus failed to appreciate the true importance of his discovery.
This might have been an overreaction to the claims by others that sperm heads are formed from a homogeneous substance; Miescher had clearly shown that they also contain other molecules, such as proteins. Moreover, Miescher's erroneous assumption that nuclein occurred only in the outer shell of the sperm head resulted in his failure to realize that stains for chromatin, which stain the centres of the heads, actually label the region where there is nuclein; although he later realized that virtually the entire sperm head is composed of nuclein and associated protein (Miescher, 1892a; Schmiedeberg & Miescher, 1896).

Unfortunately, not only Miescher, but the entire scientific community would soon lose faith in DNA as the molecule mediating heredity. Miescher's work had established DNA as a crucial component of all cells and inspired others to begin exploring its role in heredity, but with the emergence of the tetranucleotide hypothesis at the beginning of the twentieth century, DNA fell from favour and was replaced by proteins as the prime candidates for this function. The tetranucleotide hypothesis—which assumed that DNA was composed of identical subunits, each containing all four bases—prevailed until the late 1940s, when Edwin Chargaff (1905–2002) discovered that the different bases in DNA were not present in equimolar amounts (Chargaff et al, 1949, 1951).

Just a few years before, in 1944, experiments by Avery and colleagues had demonstrated that DNA was sufficient to transform bacteria (Avery et al, 1944). Then in 1952, Al Hershey (1908–1997) and Martha Chase (1927–2003) confirmed these findings by observing that viral DNA—but not protein—enters the bacteria during infection with the T2 bacteriophage, and that this DNA is also present in new viruses produced by infected bacteria (Hershey & Chase, 1952).
Finally, in 1953, X-ray images of DNA allowed Watson and Crick to deduce its structure (Watson & Crick, 1953), enabling us to understand how DNA works. Importantly, these experiments were made possible by advances in bacteriology and virology, as well as the development of new techniques, such as the radioactive labelling of proteins and nucleic acids, and X-ray crystallography—resources that were beyond the reach of Miescher and his contemporaries.

In later years (Fig 5), Miescher's attention shifted progressively from the role of nuclein in fertilization and heredity to physiological questions, such as those concerning the metabolic changes in the bodies of salmon as they produce massive amounts of germ cells at the expense of muscle tissue. Although he made important and seminal contributions to different areas of physiology, he increasingly neglected to explore his most promising line of research, the function of DNA. Only towards the end of his life did he return to this question and begin to reconsider the issue in a new light, but he achieved no further breakthroughs.

Figure 5. Friedrich Miescher in his later years, when he was Professor of Physiology at the University of Basel. In this capacity he also founded the Vesalianum, the University's Institute for Anatomy and Physiology, which was inaugurated in 1885. This photograph is the frontispiece on the inside cover of a collection of Miescher's publications and some of his letters, edited and published by his uncle Wilhelm His and colleagues after Miescher's death. Underneath the picture is Miescher's signature. © Ralf Dahm.

One area, however, where he did propose intriguing hypotheses—although without experimental data to support them—was the molecular underpinnings of heredity. Inspired by Darwin's work on fertilization in plants, Miescher postulated, for instance, how information might be encoded in biological molecules.
He stated that "the key to sexuality for me lies in stereochemistry," and expounded his belief that the gemmules of Darwin's theory of pangenesis were likely to be "numerous asymmetric carbon atoms [present in] organic substances" (Miescher, 1892b), and that sexual reproduction might function to correct mistakes in their "stereometric architecture". As such, Miescher proposed that hereditary information might be encoded in macromolecules and that mistakes in it could be corrected, which sounds uncannily as though he had predicted what is now known as the complementation of haploid deficiencies by wild-type alleles. It is particularly tempting to assume that Miescher might have thought this was the case, as Mendel had published his laws of inheritance of recessive characteristics more than 25 years earlier. However, there is no reference to Mendel's work in the papers, talks or letters that Miescher has left to us.

What we do know is that Miescher set out his view of how hereditary information might be stored in macromolecules: "In the enormous protein molecules […] the many asymmetric carbon atoms allow such a colossal number of stereoisomers that all the abundance and diversity of the transmission of hereditary [traits] may find its expression in them, as the words and terms of all languages do in the 24–30 letters of the alphabet. It is therefore completely superfluous to see the sperm cell or oocyte as a repository of countless chemical substances, each of which should be the carrier of a special hereditary trait (de Vries Pangenesis). The protoplasm and the nucleus, as my studies have shown, do not consist of countless chemical substances, but of very few chemical individuals, which, however, perhaps have a very complex chemical structure" (Miescher, 1892b).

This is a remarkable passage in Miescher's writings.
The second half of the nineteenth century saw intense speculation about how heritable characteristics are transmitted between the generations. The consensus view assumed the involvement of tiny particles, which were thought to both shape embryonic development and mediate inheritance (Mayr, 1982). Miescher contradicted this view. Instead of a multitude of individual particles, each of which might be responsible for a specific trait (or traits), his results had shown that, for instance, the heads of sperm cells are composed of only very few compounds, chiefly DNA and associated proteins.

He elaborated further on his theory of how hereditary information might be stored in large molecules: "Continuity does not only lie in the form, it also lies deeper than the chemical molecule. It lies in the constituent groups of atoms. In this sense I am an adherent of a chemical model of inheritance à outrance [to the utmost]" (Miescher, 1893b). With this statement Miescher firmly rejects any idea of preformation or some morphological continuity transmitted through the germ cells. Instead, he clearly seems to foresee what would only become known much later: the basis of heredity was to be found in the chemical composition of molecules.

To explain how this could be achieved, he proposed a model of how information could be encoded in a macromolecule: "If, as is easily possible, a protein molecule comprises 40 asymmetric carbon atoms, there will be 2^40, that is, approximately a trillion isomerisms [sic]. And this is only one of the possible types of isomerism [not considering other atoms, such as nitrogen]. To achieve the incalculable diversity demanded by the theory of heredity, my theory is better suited than any other.
All manner of transitions are conceivable, from the imperceptible to the largest differences" (Miescher, 1893b).

Miescher's ideas about how heritable characteristics might be transmitted and encoded encapsulate several important concepts that have since been proven correct. First, he believed that sexual reproduction served to correct mistakes, or mutations as we call them today. Second, he postulated that the transmission of heritable traits occurs through one or a few macromolecules with complex chemical compositions that encode the information, rather than through numerous individual molecules each encoding single traits, as was thought at the time. Third, he foresaw that information is encoded in these molecules through a simple code that yields a staggeringly large number of possible heritable traits and thus explains the diversity of species and individuals observed.

It is a step too far to suggest that Miescher understood what DNA or other macromolecules do, or how hereditary information is stored. He simply could not have done, given the context of his time. His findings and hypotheses, which today fit nicely together and often seem to anticipate our modern understanding, probably appeared rather disjointed to Miescher and his contemporaries. In his day, too many facts were still in doubt and too many links tenuous. There is always a danger of over-interpreting speculations and hypotheses made a long time ago in today's light. However, although Miescher himself misinterpreted some of his findings, large parts of his conclusions came astonishingly close to what we now know to be true. Moreover, his work influenced others to pursue their own investigations into DNA and its function (Dahm, 2008).
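Miescher's combinatorial claim in the passage quoted above can be checked directly (a side calculation added here for clarity; it is not part of the original text). With 40 asymmetric carbon atoms, each able to adopt one of two mirror-image configurations, the number of possible stereoisomers is

\[
2^{40} = 1\,099\,511\,627\,776 \approx 1.1 \times 10^{12},
\]

that is, roughly a trillion, just as Miescher states.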
Although DNA research fell out of fashion for several decades after the end of the nineteenth century, the information gleaned by Miescher and his contemporaries formed the foundation for the decisive experiments carried out in the middle of the twentieth century, which unambiguously established the function of DNA.

As such, perhaps the most tragic aspect of Miescher's career was that for most of his life he firmly believed in the physicalist theories of fertilization, as propounded by His and Ludwig among others, and that he was reluctant to combine the results from his rigorous chemical analyses with the 'softer' data generated by cytologists and histologists. Had he made the link between nuclein and chromosomes and accepted its key role in fertilization and heredity, he might have realized that the molecule he had discovered was the key to some of the greatest mysteries of life. As it was, he died with a feeling of a promising career unfulfilled (His, 1897), when, in actual fact, his contributions were to outshine those of most of his contemporaries.

It is tantalizing to speculate about the path that Miescher's investigations—and biology as a whole—might have taken under slightly different circumstances. What would have happened had he followed up on his preliminary results about the role of DNA in different physiological conditions, such as cell proliferation? How would his theories about fertilization and heredity have changed had he not been misled by the mistaken identification of what appeared to him to be a multitude of small nuclei in the oocyte?
Or how would he have interpreted his findings concerning nuclein had he not been influenced by the views of his uncle, as well as those of the wider scientific establishment?

There is a more general lesson in the life and work of Friedrich Miescher that goes beyond his immediate successes and failures. His story is that of a brilliant researcher who developed innovative experimental approaches, chose the right biological systems to address his questions and made ground-breaking discoveries, and who was nonetheless constrained by his intellectual environment and thus prevented from interpreting his findings objectively. It therefore fell to others, who saw his work from a new point of view, to make the crucial inferences and thus establish the function of DNA.

Ralf Dahm

11.
The French government has ambitious goals to make France a leading nation for synthetic biology research, but it still needs to put its money where its mouth is and provide the field with dedicated funding and other support.

Synthetic biology is one of the most rapidly growing fields in the biological sciences and is attracting an increasing amount of public and private funding. France has also seen a slow but steady development of this field: the establishment of a national network of synthetic biologists in 2005, the first participation of a French team at the International Genetically Engineered Machine competition in 2007, the creation of a Master's curriculum, an institute dedicated to synthetic and systems biology at the University of Évry-Val-d'Essonne-CNRS-Genopole in 2009–2010, and an increasing number of conferences and debates. However, scientists have driven the field with little dedicated financial support from the government.

Yet the French government has a strong self-perception of its strengths and has set ambitious goals for synthetic biology. The public are told about a "new generation of products, industries and markets" that will derive from synthetic biology, and that research in the field will result in "a substantial jump for biotechnology" and an "industrial revolution" [1,2]. Indeed, France wants to compete with the USA, the UK, Germany and the rest of Europe, and aims "for a world position of second or third" [1]. However, in contrast with the activities of its competitors, the French government has no specific scheme for funding or otherwise supporting synthetic biology [3].
Although we read that "France disposes of strong competences" and has "all the assets needed" [2], one wonders how France will achieve its ambitious goals without dedicated budgets or detailed roadmaps to set up such institutions.

In fact, France has been a straggler: whereas the UK and the USA have published several reports on synthetic biology since 2007, and have set up dedicated governing networks and research institutions, the governance of synthetic biology in France has only recently become an official matter. The National Research and Innovation Strategy (SNRI) only defined synthetic biology as a "priority" challenge in 2009 and created a working group in 2010 to assess the field's developments, potentialities and challenges; the report was published in 2011 [1].

At the same time, the French Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) began a review of the field "to establish a worldwide state of the art and the position of our country in terms of training, research and technology transfer". Its 2012 report, entitled The Challenges of Synthetic Biology [2], assessed the main ethical, legal, economic and social challenges of the field. It made several recommendations for a "controlled" and "transparent" development of synthetic biology. This is not a surprise, given that the development of genetically modified organisms and nuclear power in France has been heavily criticized for lack of transparency, and that the government prefers to avoid similar future controversies. Indeed, the French government seems more cautious today, making efforts to assess potential dangers and public opinion before actually supporting the science itself.

Both reports stress the necessity of a "real" and "transparent" dialogue between science and society and call for "serene […] peaceful and constructive" public discussion.
The proposed strategy has three aims: to establish an observatory, to create a permanent forum for discussion and to broaden the debate to include citizens [4]. An Observatory for Synthetic Biology was set up in January 2012 to collect information, mobilize actors, follow debates, analyse the various positions and organize a public forum. Let us hope that this observatory—unlike so many other structures—will have a tangible and durable influence on policy-making, public opinion and scientific practice.

Many structural and organizational challenges persist, as neither the National Agency for Research nor the National Centre for Scientific Research has defined the field as a funding priority, and public–private partnerships are rare in France. Moreover, strict boundaries between academic disciplines impede interdisciplinary work, and synthetic biology is often included in larger research programmes rather than supported as a research field in itself. Although both the SNRI and the OPECST reports make recommendations for future developments—including setting up funding policies and platforms—it is not clear whether these will materialize, or when, where and what size of investments will be made.

France has ambitious goals for synthetic biology, but it remains to be seen whether the government is willing to put 'meat to the bones' in terms of financial and institutional support. If not, these goals might come to be seen as unrealistic and be downgraded, or they will be replaced with another vision that sees synthetic biology as something that only needs discussion and deliberation but no further investment. One thing is already certain: the future development of synthetic biology in France is a political issue.

15.
The authors of "The anglerfish deception" respond to the criticism of their article.

EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.70

EMBO reports (2012) 13 2, 100–105; doi: 10.1038/embor.2011.254

Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA's environmental risk assessment (ERA) procedures. In our article for EMBO reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal's false political inflation of science, which denies the normative commitments inevitable in risk assessment (RA). Unfortunately, the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA's central control over RA, they entirely miss the relevant point. This is the unacknowledged policy—normative commitments being made before and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified.

The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading—as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. They seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters of RA policy, which should be established in a broadly deliberative manner "in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent" [1].
This was neither recognized nor permitted in the proposed EC reform—a central point that our respondents fail to address.

In dismissing our criticism that comparative safety assessment appears as a 'first step' in defining ERA according to the new EFSA ERA guidelines—which we correctly referred to in our text but incorrectly referenced in the bibliography [5]—our respondents again ignore this widely accepted 'framing' or 'problem formulation' point for science. The choice of comparator has normative implications, as it immediately commits to a definition of what is normal and, implicitly, acceptable. Therefore the specific form and purpose of the comparison(s) is part of the validity question. Their claim that we are against comparison as a scientific step is incorrect—of course comparison is necessary. This simply acts as a shield behind which to avoid our and others' [6] challenge to their self-appointed discretion to define—or worse, allow applicants to define—what counts in the comparative frame. Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as 'science'. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5]—in both the representational figure (p. 11) and the text: "the outcome of the comparative safety assessment allows the determination of those 'identified' characteristics that need to be assessed [...] and will further structure the ERA" (p. 13). Second, despite their claims to the contrary, 'comparative safety assessment', effectively a resurrection of substantial equivalence, is a concept taken from consumer health RA, controversially applied to the more open-ended processes of ERA, and one that has in fact long been discredited if used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10].
The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similarly unaccountable RA steps introduced into the ERA Guidance, such as judgements of 'biological relevance', 'ecological relevance' or 'familiarity'. We cannot address these here, but our basic point is that such endless 'methodological' elaborations of the kind that our EFSA colleagues perform only obscure the institutional changes needed to properly address the normative questions for policy-engaged science.

Our respondents deny our claim concerning the singular form of science the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by the EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the 'one door, one key' policy framework for science, deriving from the Single Market logic, that forces such singularity. While this might be legitimate policy, it is not scientific. It is political economy.

Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point: that the EC-proposed legislative reform would only exacerbate the problem. Ignoring the normative dimensions of regulatory science and siphoning off scientific debate and its normative issues to a select expert panel—which, despite claiming independence, faces an EU Ombudsman challenge [12] and the European Parliament's refusal to discharge its 2010 budget because of continuing questions over conflicts of interest [13,14]—will not achieve quality science.
What is required instead are effective institutional mechanisms and cultural norms that identify, and deliberatively address, otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA's sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.

16.
Direct-to-consumer genetic tests and population genome research challenge traditional notions of privacy and consent.

The concerns about genetic privacy in the 1990s were largely triggered by the Human Genome Project (HGP) and the establishment of population biobanks in the following decade. Citizens and lawmakers were worried that genetic information on people, or even subpopulations, could be used to discriminate or stigmatize. The ensuing debates led to legislation both in Europe and the USA to protect the privacy of genetic information and prohibit genetic discrimination.

Times have changed. The cost of DNA sequencing has decreased markedly, which means it will soon be possible to sequence individual human genomes for a few thousand dollars. Notions of genetic determinism have also been eroded as population genomics research has discovered a plethora of risk factors that offer only probabilistic value for predicting disease. Nevertheless, there are several increasingly popular internet genetic testing services that do offer predictions to consumers of their health risks on the basis of genetic factors, medical history and lifestyle. Also not to be underestimated is the growing popularity of social networks on the internet, which exposes the decline in traditional notions of the privacy of personal information. It was only a matter of time until all these developments began to challenge the notion of genetic privacy.

For instance, the internet-based Personal Genome Project asks volunteers to make their personal, medical and genetic information publicly available so as "to advance our understanding of genetic and environmental contributions to human traits and to improve our ability to diagnose, treat, and prevent illness" (www.personalgenomes.org).
The Project, which was founded by George Church at Harvard University, has enrolled its first 10 volunteers and plans to expand to 100,000. Its proponents have proclaimed the limitations, if not the death, of privacy (Lunshof et al, 2008) and maintain that, under the principle of veracity, their own personal genomes will be made public. Moreover, they have argued that in a socially networked world there can be no total guarantee of confidentiality. Indeed, total protection of privacy is increasingly unrealistic in an era in which direct-to-consumer (DTC) genetic testing is offered on the internet (Lee & Crawley, 2009) and forensic technologies can potentially 'identify' individuals in aggregated data sets, even if their identity has been anonymized (Homer et al, 2008).

Since the start of the HGP in the 1990s, personal privacy and the confidentiality of genetic information have been important ethical and legal issues. Their 'regulatory' expression in policies and legislation has been influenced by both genetic determinism and exceptionalism. Paradoxically, there has been a concomitant emergence of collaborative and international consortia conducting genomics research on populations. These consortia openly share data, on the premise that it is for public benefit. These developments require a re-examination of an 'ethics of scientific research' that is founded solely on the protection and rights of the individual.

Although personalized medicine empowers consumers and democratizes the sharing of 'information' beyond the data sharing that characterizes population genomics research (Kaye et al, 2009), it also creates new social groups based on beliefs of common genetic susceptibility and risk (Lee & Crawley, 2009).
The increasing allure of DTC genetic tests and the growth of online communities based on these services also challenge research in population genomics to provide the necessary scientific knowledge (Yang et al, 2009). The scientific data from population studies might therefore lend some useful validation to the results from DTC testing, as opposed to the probabilistic, potentially 'harmful' information that is now provided to consumers (Ransohoff & Khoury, 2010; Action Group on Erosion, Technology and Concentration, 2008). Population data clearly erodes the linear, deterministic model of Mendelian inheritance, in addition to providing information on inherited risk factors. The socio-demographic data provided puts personal genetic risk factors in a 'real environmental' context (Knoppers, 2009).

Thus, beginning with a brief overview of the principles of data sharing and privacy under both population and consumer testing, we will see that the notion of identifiability is closely linked to the definition of what constitutes 'personal' information. It is against this background that we need to examine the issue of consumer consent to online offers of genetic tests that promise whole-genome sequencing and analysis. Moreover, we also demonstrate the need to restructure ethical reviews of genetic research that are not part of classical clinical trials and that are non-interventionist, such as population studies.

The HGP heralded a new open-access approach under the Bermuda Principles of 1996: "It was agreed that all human genomic sequence information, generated by centres funded for large-scale human sequencing, should be freely available and in the public domain in order to encourage research and development and to maximise its benefit to society" (HUGO, 1996).
Reaffirmed in 2003 under the Fort Lauderdale Rules, the premise was that "the scientific community will best be served if the results of community resource projects are made immediately available for free and unrestricted use by the scientific community to engage in the full range of opportunities for creative science" (HUGO, 2003). The international Human Genome Organization (HUGO) played an important role in achieving this consensus. Its Ethics Committee considered genomic databases to be "global public goods" (HUGO Ethics Committee, 2003). The value of this information—based on the donation of biological samples and health information—for realizing the benefits of personal genomics is maximized through collaborative, high-quality research. Indeed, it could be argued that "there is an ethical imperative to promote access and exchange of information, provided confidentiality is protected" (European Society of Human Genetics, 2003). This promotion of data sharing culminated in a recent policy on releasing research data, including pre-publication data (Toronto International Data Release Workshop, 2009).

There is room for improvement in both the personal genome and the population genome endeavours. In its 2009 Guidelines for Human Biobanks and Genetic Research Databases, the Organization for Economic Cooperation and Development (OECD) states that the "operators of the HBGRD [Human Biobanks and Genetic Research Databases] should strive to make data and materials widely available to researchers so as to advance knowledge and understanding." More specifically, the Guidelines propose mechanisms to ensure the validity of access procedures and applications for access. In fact, they insist that access to human biological materials and data should be based on "objective and clearly articulated criteria [...] consistent with the participants' informed consent".
Access policies should be fair and transparent, and should not inhibit research (OECD, 2009).

In parallel to such open and public science was the rise of privacy protection, particularly where genetic information is concerned. The United Nations Educational, Scientific and Cultural Organization's (UNESCO) 2003 International Declaration on Human Genetic Data (UNESCO, 2003) epitomizes this approach. Setting genetic information apart from other sensitive medical or personal information, it mandated an "express" consent for each research use of human genetic data or samples in the absence of domestic law, or when such use "corresponds to an important public interest reason". Currently, however, large population genomics infrastructures use a broad consent, as befits both their longitudinal nature and their goal of serving future unspecified scientific research. The risk is that ethics review committees that require such continuous "express" consents will thereby foreclose efficient access to data in such population resources for disease-specific research. It is difficult for researchers to provide proof of such "important public interest[s]" in order to avoid re-consents.

Personal information itself refers to identifying and identifiable information. Logically, a researcher who receives a coded data set but who does not have access to the linking keys would not have access to 'identifiable' information, and so the rules governing access to personal data would not apply (Interagency Advisory Panel on Research Ethics, 2009; OHRP, 2008).
In fact, in the USA, such research is considered to be on 'non-humans' and, in the absence of institutional rules to the contrary, it would theoretically not require research ethics approval (www.vanderbilthealth.com/main/25443).

…the ethics norms that govern clinical research are not suited for the wide range of data privacy and consent issues in today's social networks and bioinformatics systems

Nevertheless, if the samples or data of an individual are accessible in more than one repository or on DTC internet sites, a remote possibility remains that any given individual could be re-identified (Homer et al, 2008). To prevent the fear of re-identifiability from restricting open access to public databases, a more reasonable approach is necessary: "[t]his means that a mere hypothetical possibility to single out the individual is not enough to consider the persons as 'identifiable'" (Data Protection Working Party, 2007). This is a proportionate and important approach because fundamental genomic 'maps' such as the International HapMap Project (www.hapmap.org) and the 1000 Genomes Project (www.1000genomes.org) have stated as their goal "to make data as widely available as possible to further scientific progress" (Kaye et al, 2009). What, then, of the nature of the consent and privacy protections in DTC genetic testing?

The Personal Genome Project makes the genetic and medical data of its volunteers publicly available. Indeed, there is a marked absence of the traditional confidentiality and other protections of the physician–patient relationship across such sites; overall, the degree of privacy protection offered by commercial DTC and other sequencing enterprises varies. The company 23andMe allows consumers to choose whether they wish to disclose personal information, but warns that disclosure of personal information is also possible "through other means not associated with 23andMe, […] to friends and/or family members […] and other individuals".
23andMe also announces that it might enter into commercial or other partnerships for access to its databases (www.23andme.com). deCODEme offers tiered levels of visibility, but does not grant access to third parties in the absence of explicit consumer authorization (www.decodeme.com). GeneEssence will share coded DNA samples with other parties and, under its Privacy Policy, can transfer or sell personal information or samples subject to an opt-out option, though the terms of that policy can be changed at any time (www.geneessence.com). Navigenics is transparent: "If you elect to contribute your genetic information to science through the Navigenics service, you allow us to share Your Genetic Data and Your Phenotype Information with not-for-profit organizations who perform genetic or medical research" (www.navigenics.com). Finally, SeqWright separates the personal information of its clients from their genetic information so as to prevent access to the latter in the case of a security breach (www.seqwright.com).

Much has been said about the lack of clinical utility and validity of DTC genetic testing services (Howard & Borry, 2009), to say nothing of the absence of genetic counsellors or physicians to interpret the resulting probabilistic information (Knoppers & Avard, 2009; Wright & Kroese, 2010). But what are the implications for consent and privacy, considering the seemingly divergent needs of ensuring data sharing in population projects and 'protecting' consumer-citizens in the marketplace?

At first glance, the same accusations of paternalism levelled at ethics review committees who hesitate to respect the broad consent of participants in population databases could be applied to restraining the very same citizens from genetic 'info-voyeurism' on the internet. But it should be remembered that citizen empowerment, which enables their participation both in population projects and in DTC, is expressed within very different contexts.
Population biobanks, by the very fact of their broad consent and long-term nature, have complex security systems and are subject to governance and ongoing ethical monitoring and review. In addition, independent committees evaluate requests for access (Knoppers & Abdul-Rahman, 2010). The same cannot be said for the governance of the DTC companies just presented.

There is room for improvement in both the personal genome and the population genome endeavours. The former require regulatory approaches to ensure the quality, safety, security and utility of their services. The latter require further clarification of their ongoing funding and operations, and more transparency to the public as researchers begin to access these resources for disease-specific studies (Institute of Medicine, 2009). Public genomic databases should be interoperable and grant access to authenticated researchers internationally in order to be of utility and statistical significance (Burton et al, 2009). Moreover, enabling international access to such databases for disease-specific research means that the interests of publicly funded research and privacy protection must be weighed against each other, rather than imposing a requirement that research demonstrate that the public interest substantially outweighs privacy protection (Weisbrot, 2009). Collaboration through interoperability has been one of the goals of the Public Population Project in Genomics (P3G; www.p3g.org) and, more recently, of the Biobanking and Biomolecular Resources Research Infrastructure (www.bbmri.eu).

Even if the tools for harmonization and standardization are built and used, will trans-border data flow still be stymied by privacy concerns?
Several measures would go some way towards maintaining public trust in genomic and genetic research: mutual recognition between countries of equivalent privacy approaches (that is, safe harbour), the limiting of access to approved researchers, and the development of international best practices in privacy, security and transparency through a Code of Conduct, along with a system for penalizing those who fail to respect such norms (P3G Consortium et al, 2009). Finally, consumer protection agencies should monitor DTC sites under a regulatory regime, to ensure that these companies adhere to their own privacy policies.

…genetic information is probabilistic and participating in population or on-line studies may not create the fatalistic and harmful discriminatory scenarios originally perceived or imagined

More importantly, in both contexts the ethics norms that govern clinical research are not suited for the wide range of data privacy and consent issues in today's social networks and bioinformatics systems. One could go further and ask whether the current biomedical ethics review system is inadequate, if not inappropriate, in these 'data-driven research' contexts. Perhaps it is time to create ethics review and oversight systems that are particularly adapted for those citizens who seek either to participate through online services or to contribute to population research resources. Both are contexts of minimal risk and require structural governance reforms rather than the application of traditional ethics consent and privacy review processes that are more suited to clinical research involving drugs or devices. In this information age, genetic information is probabilistic, and participating in population or online studies might not create the fatalistic and harmful discriminatory scenarios originally perceived or imagined. The time is ripe for a change in governance and regulatory approaches, a reform that is consistent with what citizens seem to have already understood and acted on.
Bartha Maria Knoppers

17.
Kyu Rhee 《EMBO reports》2013,14(11):949-950
Two recent studies in PNAS and Nat Chem Biol highlight the power of modern mass-spectrometry techniques for enzyme discovery applied to microbiology. In so doing, they have uncovered new potential targets for the treatment of tuberculosis.

Proc Natl Acad Sci USA (2013) 110(28): 11320–11325. doi: 10.1073/pnas.1221597110

Nat Chem Biol (2013). doi: 10.1038/nchembio.1355. Advance online publication 29 September 2013

Many have come to regard metabolism as a well-understood housekeeping activity of all cells, functionally compartmentalized away from other biological processes. However, growing reports of unexpected links between a diverse range of disease states and specific metabolic enzymes or pathways have begun to challenge this view. In doing so, such discoveries have exposed glaring and neglected deficiencies in our understanding of cellular metabolism, triggering a broad resurgence of interest in the field.

"Metabolomics […] offers a global window into the biochemical state of a cell or organism…"

Metabolomics is the newest of the systems-level disciplines and seeks to reveal the physiological state of a given cell or organism through the global and unbiased study of its small-molecule metabolites [1]. Metabolites are the final products of enzymes and enzyme networks, whose substrates and products often cannot be deduced from genetic information and whose levels reflect the integrated product of the genome, proteome and environment [2]. Metabolomics thus offers a global window into the biochemical state of a cell or organism, made experimentally possible by the unprecedented discriminatory power and sensitivity of modern mass-spectrometry-based technologies (Fig 1).
Two recent reports from the Carvalho and Neyrolles groups, published in Proceedings of the National Academy of Sciences USA and Nature Chemical Biology [3,4], exemplify the rapidly growing impact of metabolomics-based approaches on tuberculosis research.

Figure 1. Modern mass spectrometry illuminates bacterial metabolism. A comparison of activity-based metabolomic profiling with classic metabolic tracing. See the text for details.

Within the field of infectious diseases, the deficiencies in our understanding of microbial metabolism have emerged most prominently in the area of tuberculosis research. Despite the development of the first chemotherapies more than 50 years ago, tuberculosis remains the leading bacterial cause of death worldwide, due in part to a failure to keep pace with the emergence of drug resistance [5]. The causes of this shortfall are multifactorial. However, a key contributing factor is our incomplete understanding of the metabolic properties of Mycobacterium tuberculosis (Mtb), its aetiological agent. Unlike most bacterial pathogens, Mtb infects humans as its only known host and reservoir, within whom it resides largely isolated from other microbes. Mtb has thus evolved its metabolism to serve interdependent physiological and pathogenic roles. Yet, more than a century after Koch's initial discovery of Mtb and 15 years after the first publication of its genome sequence, knowledge of Mtb's metabolic network remains surprisingly incomplete [6,7,8].

"…tuberculosis remains the leading bacterial cause of death worldwide…"

As for almost all sequenced microbial genomes, homology-based in silico approaches have failed to suggest a function for nearly 40% of Mtb genes; these presumably include a significant number of orphan enzyme activities for which no gene has been ascribed [8].
Such approaches have further neglected the impact of evolutionary selection, which can dissociate sequence conservation from biochemical activity and physiological function as it optimizes the fitness of a given organism within its specific niche. For Mtb, such genes and enzymes represent an especially promising, biologically selective, but untapped source of potential drug targets.

In the study from the Carvalho group, successful application of a recently developed metabolomics assay, known as activity-based metabolomic profiling (ABMP), allowed the authors to reassign a putatively annotated nucleotide phosphatase (Rv1692) as a D,L-glycerol 3-phosphate phosphatase [3,9]. ABMP was specifically developed to identify enzymatic activities for genes of unknown function. It leverages the analytical discriminatory power of liquid-chromatography-coupled high-resolution mass spectrometry (LC-MS) to analyse the impact of a recombinant enzyme, and potential co-factors, on a highly concentrated small-molecule extract derived from the homologous organism (Fig 1). By monitoring for the matched, time- and enzyme-dependent depletion and accumulation of putative substrates and products, the assay enables the discovery of catalytic activities, rather than simple binding, using the cellular metabolome as arguably the most physiological chemical library of potential substrates that can be tested, in a label- and synthesis-free manner. Moreover, candidate activities assigned by this method can be confirmed by independent biochemical approaches, such as reconstitution with purified components, and genetic techniques, such as comparison of wild-type strains with genetic knockout, knockdown or overexpression strains.
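The logic of ABMP, matched time- and enzyme-dependent depletion of a candidate substrate and accumulation of a candidate product, can be sketched as a simple filter over LC-MS intensity time courses. This is an illustrative Python sketch only; the metabolite names, intensity values and fold-change threshold below are invented and are not those used by the authors.

```python
def abmp_candidates(time_courses, fold=2.0):
    """Flag candidate substrate/product pairs from LC-MS intensities.

    time_courses maps metabolite -> (with_enzyme, without_enzyme)
    intensity series over the same time points. A substrate is depleted
    and a product accumulates only when the enzyme is present.
    """
    def change(series):
        start, end = series[0], series[-1]
        if end <= start / fold:
            return "depleted"
        if end >= start * fold:
            return "accumulated"
        return "unchanged"

    substrates, products = [], []
    for met, (plus_enzyme, minus_enzyme) in time_courses.items():
        if change(minus_enzyme) != "unchanged":
            continue  # a change without enzyme is not enzyme-dependent
        if change(plus_enzyme) == "depleted":
            substrates.append(met)
        elif change(plus_enzyme) == "accumulated":
            products.append(met)
    return substrates, products

# Invented intensities for three metabolites over four time points
data = {
    "glycerol 3-phosphate": ([100, 60, 25, 10], [100, 98, 101, 99]),
    "glycerol":             ([5, 30, 70, 95],   [5, 6, 5, 6]),
    "AMP":                  ([50, 51, 49, 50],  [50, 52, 49, 51]),
}
subs, prods = abmp_candidates(data)
# glycerol 3-phosphate is depleted and glycerol accumulates only with
# the enzyme present: a candidate substrate/product pair
```

In a real experiment the candidate pair would then be confirmed biochemically and genetically, as described above.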
In reassigning Rv1692 as a glycerol phosphate phosphatase, rather than a nucleotide phosphatase, Carvalho and colleagues demonstrate the potential of ABMP to overcome the biochemical challenge of assigning substrate specificity to a member of a large enzyme superfamily, in this case the haloacid dehalogenase superfamily. Perhaps more significantly, they also direct new biological attention to the largely neglected area of Mtb membrane homeostasis, in which Rv1692 might play an important role in glycerophospholipid recycling and catabolism.

"…knowledge of Mtb's metabolic network remains surprisingly incomplete"

Neyrolles and colleagues used the same metabolomics platform to perform metabolite-tracing studies with stable-isotope-labelled precursors, which led them to reassign a putatively annotated asparagine transporter (AnsP1) as an aspartate transporter. AnsP1 bears 55% sequence identity and 70% similarity to an orthologue in Salmonella that belongs to the amino acid transporter family 2.A.3.1, whereas aspartate transporters are typically members of the dicarboxylate amino acid:cation symporter family 2.A.23 [4]. This study demonstrates the ability of metabolomic platforms not only to characterize the activity of a given protein within its natural physiological milieu, but also to revive classical experimental methods with modern technologies. The availability of stable (non-radioactive) isotopically labelled precursors has made it possible to resolve their specific metabolic fates. In this case, such an approach revealed that Mtb can use aspartate as both a carbon and a nitrogen source after its uptake through AnsP1. Looking beyond the specific biochemical assignment of AnsP1 as an aspartate, rather than asparagine, transporter, this study illustrates the potential impact of such discoveries on downstream paths of investigation.
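The core readout of such stable-isotope tracing is the mass-isotopomer distribution of downstream metabolites: if carbon from a labelled precursor flows into a pathway, heavier isotopomers (M+1, M+2, …) of its intermediates appear. The following is a minimal, hedged Python sketch with invented intensity values; "fumarate after feeding [U-13C]aspartate" is chosen only as a plausible illustration of label transfer into central carbon metabolism, not as data from the study.

```python
def labeled_fraction(isotopomer_intensities):
    """Fraction of a metabolite pool carrying at least one heavy atom.

    isotopomer_intensities holds intensities for the M+0, M+1, ...
    mass isotopomers measured in a stable-isotope (e.g. 13C or 15N)
    tracing experiment. M+0 is the fully unlabelled species.
    """
    total = sum(isotopomer_intensities)
    return 1 - isotopomer_intensities[0] / total

# Invented data: fumarate isotopomers (M+0 .. M+4) after feeding a
# uniformly 13C-labelled carbon source
fumarate = [30.0, 10.0, 25.0, 5.0, 30.0]
frac = labeled_fraction(fumarate)
# A non-zero labelled fraction (here 0.7) indicates that carbon from
# the labelled precursor reached this metabolite pool
```

Real analyses additionally correct for natural isotope abundance before interpreting such fractions; that step is omitted here for brevity.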
One example is the remarkable application of high-resolution dynamic secondary ion mass spectrometry to provide the first direct biochemical images of the nutritional environment of the Mtb-infected phagosome.

New technologies are often developed in the context of specific needs. However, their impact is usually not realized until they are extended beyond such contexts, sometimes resulting in major paradigm shifts. The above examples highlight just two emerging possibilities of how metabolomics technologies can be extended beyond the context of global comparisons to provide unique biological insights. To the extent that the analytical power of these platforms can be adapted to other functional approaches, metabolomics promises to pay handsome biochemical and physiological dividends.

18.
Morris SC 《EMBO reports》2011,12(3):182-182
Being attacked by an octopus gives one the opportunity to marvel at how convergent evolution created similar organs and senses in cephalopod and man.

It is a scene that would do justice to a horror movie: body clamped against the diver's mask, one tentacle deftly turning off the oxygen supply while others tug relentlessly at the connecting hoses. Despairingly, the diver looks at the octopus and, across an immense phylogenetic gulf, camera eye meets camera eye. If the struggling diver is a biologist, he might take some consolation that the glances exchanged depend on a classic example of convergent evolution.

Overwhelmingly, however, the octopus is an encounter with the alien: no hands, but tentacles that can untie surgical silk and clamp with innumerable suckers. Its bulbous body houses an enormous brain, but more than half of the nervous system lies in remote ganglia. Across the body flicker the coruscating patterns of the chromatophores, sometimes freezing the animal into an almost exact replica of the sea-floor, or alternatively transforming it into a facsimile of the banded sea-snake. Science fiction collides with scientific fact. Are the octopus and its relatives not the best thing we have for a proxy alien? Step a little closer.

The octopus and related cephalopods might seem to exemplify the 'other', but when it comes to reinventing the evolutionary wheel they are dab hands. In addition to those camera eyes, some squid have the reverse arrangement, whereby transparent portals in the body pour bioluminescent light into the inky oceans. Other sensory convergences include a lateral line system, a good approximation of the semicircular canals and superb oculomotor reflexes. The independent evolution of giant axons and a blood–brain barrier are complemented by an impressive list of anatomical convergences.
These include cartilage, a closed circulatory system with elastic arteries, a swim-bladder, respiratory proteins (haemocyanin), the famous ink and even a fair stab at a penis.

So, in many ways, cephalopods are honorary fish, but as Andrew Packard (1972) made clear, there are still "limits of convergence". This point is robustly echoed by Ronald O'Dor & Dale Webber (1986), whose paper carries the corresponding subtitle "why squid aren't fish". Quite so, but again, step a little closer. Concealed in the body plan are convergences that point to some far more interesting evolutionary principles. Consider those writhing arms. 'One for all, and eight for all'; in principle all are equipotent, but some are evidently employed for one task and others for another (Byrne et al, 2006). This is exemplified by octopuses that stroll bipedally across the lagoon floor. Yet more remarkable are muscular contractions that move in either direction and collide to define pseudo-joints: a rubbery tentacle is transformed into a limb, complete with 'wrist' and 'elbow'. This led Germán Sumbre and colleagues (2005) not only to identify what is to some an apparently surprising functional convergence, but also to suggest that, in the context of any articulated limb, this could be "the optimal design".

Much is also made of the obvious differences in locomotion: myotomal sinuosity in fish compared with jet propulsion in the squid. In the former, locomotory efficiency depends crucially on the oxidative red muscle and the larger bulk of white muscle. Red muscle is used in routine swimming, whereas the white springs into action in times of urgent need, and then repays the oxygen debt in just the same way as the jogger who collapses on the park bench and gasps "lactic acid, lactic acid". The squid's mantle muscle holds another surprise.
The muscle types are directly analogous to the red and white muscle of fish, with corresponding mitochondrial content and glycolytic activity (Mommsen et al, 1981).

But if squid are honorary fish, somewhere, surely, the convergences must break down. Well, let's consider the cephalopod kidney. Cephalopod kidneys are excretory organs, but they do not resemble those of any vertebrate. They do, however, show something curious: with few exceptions, the kidneys are infested with tiny symbionts from two entirely unrelated groups (Furuya et al, 2004). One group is the dicyemid mesozoans, which earn the trophy for metazoan simplification, being composed of only about 50 cells. They have abandoned all organs including a nervous system, but intriguingly still employ Pax6. The other group is the ciliates, belonging to the otherwise obscure chromidinids. Consider this evolutionary conundrum: the only place on the planet where these dicyemids and chromidinids can be found is in places awash with cephalopod urine. Long dismissed as parasitic, they are probably vital to kidney function, and I suspect this is the cephalopods' smart way of constructing a high-performance kidney.

So specific, so precise, so strange is this convergence that I am forcibly reminded of Ramón y Cajal's (1937) contemplation of the insect eye as "a machine so subtilely devised and so perfectly adapted to an end as the visual apparatus", which provoked him to continue: "I must not conceal the fact that […] I for the first time felt my faith in Darwinism […] weakened, being amazed and confounded by the supreme constructive ingenuity". So too with the cephalopod kidney, haunted as it is by this symbiotic inevitability.

But if you really want to feel the hairs pricking on your neck, consider the brain of the octopus (Young et al, 1963). Lobate and of quite different construction from the vertebrate brain, it nevertheless once again shows similarities, not least between its vertical lobe and our hippocampus.
Within these neural recesses, consciousness has flickered into existence and, by a separate evolutionary route, the Universe is becoming self-aware.

19.
Kuzma J  Kokotovich A 《EMBO reports》2011,12(9):883-888
Targeted genetic modification, which enables scientists to genetically engineer plants more efficiently and precisely, challenges current process-based regulatory frameworks for genetically modified crops.

In 2010, more than 85% of the corn acreage and more than 90% of the soybean acreage in the USA was planted with genetically modified (GM) crops (USDA, 2010). Most of those crops contained transgenes from other species, such as bacteria, that confer resistance to herbicides or tolerance to insect pests, and that were introduced into plant cells using Agrobacterium or other delivery methods. The resulting 'transformed' cells were regenerated into GM plants that were tested for the appropriate expression of the transgenes, as well as for whether the crop posed an unacceptable environmental or health risk, before being approved for commercial use. The scientific advances that enabled the generation of these GM plants took place in the early 1980s and have changed agriculture irrevocably, as evidenced by the widespread adoption of GM technology. They have also triggered intense debates about the potential risks of GM crops for human health and the environment, and about the new forms of regulation needed to mitigate them. There is also continued public resistance to GM crops, particularly in Europe.

Plant genetic engineering is at a technological inflection point. New technologies enable more precise and subtler modification of plant genomes (Weinthal et al, 2010) than the comparably crude methods that were used to create the current stock of GM crops (Fig 1A). These methods allow scientists to insert foreign DNA into the plant genome at precise locations, remove unwanted DNA sequences or introduce subtle modifications, such as single-base substitutions that alter the activity of individual genes.
They also raise serious questions about the regulation of GM crops: how do these methods differ from existing techniques and how will the resulting products be regulated? Owing to the specificity of these methods, will the resulting products fall outside existing definitions of GM crops and, as a result, be regulated similarly to conventional crops? How will the definition and regulation of GM crops be renegotiated and delineated in light of these new methods?

Figure 1. Comparing traditional transgenesis, targeted transgenesis, targeted mutagenesis and gene replacement. (A) In traditional transgenesis, genes introduced into plant cells integrate at random chromosomal positions. This is illustrated here for a bacterial gene that confers herbicide resistance (Herb^r). The plant encodes a gene for the same enzyme; however, due to DNA-sequence differences between the bacterial and plant forms of the gene, the plant gene does not confer herbicide resistance (Herb^s). (B) The bacterial herbicide-resistance gene can be targeted to a specific chromosomal location through the use of engineered nucleases. The nucleases recognize a specific DNA sequence and create a chromosome break. The bacterial gene is flanked by sequences homologous to the target site and recombines with the plant chromosome at the break site, resulting in a targeted insertion. (C) Engineered nucleases can be used to create targeted gene knockouts. In this illustration, a second nuclease recognizes the coding sequence of the Herb^s gene. Cleavage and repair in the absence of a homologous template creates a mutation (orange). (D) A homologous DNA donor can be used to repair targeted breaks in the Herb^s gene. This allows sequence changes to be introduced into the native plant gene that confer herbicide resistance.
Only a single base change is needed in some instances.

Of the new wave of targeted genetic modification (TagMo) techniques, one of the most thoroughly developed uses engineered zinc-finger nucleases or meganucleases to create DNA double-stranded breaks at specific genomic locations (Townsend et al, 2009; Shukla et al, 2009; Gao et al, 2010). This activates DNA repair mechanisms, which genetic engineers can exploit to alter the target gene. If, for instance, a DNA fragment is provided that has sequence similarity with the site at which the chromosome is broken, the repair mechanism will use this fragment as a template for repair through homologous recombination (Fig 1B). In this way, any DNA sequence, for instance a bacterial gene that confers herbicide resistance, can be inserted at the site of the chromosome break. TagMos can also be used without a repair template to make single-nucleotide changes. In this case, the broken chromosomes are rejoined imprecisely, creating small insertions or deletions at the break site (Fig 1C) that can alter or knock out gene function.

The greatest potential of TagMo technology lies in its ability to modify native plant genes in directed and targeted ways. For example, the most widely used herbicide-resistance gene in GM crops comes from bacteria. Plants encode the same enzyme, but it does not confer herbicide resistance because the DNA sequence is different. Yet resistant forms of the plant gene have been identified that differ from native genes by only a few nucleotides. TagMo could therefore be used to transfer these genes from a related species into a crop to replace the existing genes (Fig 1D), or to exchange specific nucleotides until the desired effect is achieved. In either case, the genetic modification would not necessarily involve transfer of DNA from another species.
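The cut-and-repair logic of Fig 1B can be caricatured as string operations: a nuclease recognition site defines where the chromosome breaks, and a donor fragment carrying homology arms that match the sequences flanking the break supplies the DNA inserted by homologous recombination. This is a deliberately simplified Python sketch; the sequences, the recognition site and the arm length are invented and bear no relation to any real nuclease.

```python
def targeted_insertion(chromosome, nuclease_site, donor_insert, arm=6):
    """Simulate HDR-mediated targeted insertion as string surgery.

    The double-strand break is made at the centre of the nuclease
    recognition site; the donor carries homology arms matching the
    sequence flanking the break, so the insert ends up at the cut.
    """
    cut = chromosome.index(nuclease_site) + len(nuclease_site) // 2
    left_arm = chromosome[cut - arm:cut]    # homology to the left flank
    right_arm = chromosome[cut:cut + arm]   # homology to the right flank
    donor = left_arm + donor_insert + right_arm
    # Recombination: the donor replaces the region spanned by its arms
    return chromosome[:cut - arm] + donor + chromosome[cut + arm:]

genome = "ATGCCGTTAGGCTTACCGGATTACA"   # invented toy 'chromosome'
site = "TAGGCTTA"                      # invented recognition sequence
edited = targeted_insertion(genome, site, "AAAGGGTTTCCC")
assert "AAAGGGTTTCCC" in edited
assert len(edited) == len(genome) + 12
```

Repair without a donor template (Fig 1C) would instead be modelled by deleting or inserting a few random bases at `cut`, mimicking the imprecise rejoining that creates knockouts.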
TagMo technology would, therefore, challenge regulatory policies both in the USA and, even more so, in the European Union (EU). TagMo enables more sophisticated modifications of plant genomes that, in some cases, could be achieved by classical breeding or mutagenesis, which are not formally regulated. On the other hand, TagMo might also be used to introduce foreign genes without using traditional recombinant DNA techniques. As a result, TagMo might fall outside of existing US and EU regulatory definitions and scrutiny.

In the USA, federal policies to regulate GM crops could provide a framework in which to consider how TagMo-derived crops might be regulated (Fig 2; Kuzma & Meghani, 2009; Kuzma et al, 2009; Thompson, 2007; McHughen & Smyth, 2008). In 1986, the Office of Science and Technology Policy established the Coordinated Framework for the Regulation of Biotechnology (CFRB) to oversee the environmental release of GM crops and their products (Office of Science and Technology Policy, 1986). The CFRB involves many federal agencies and is still in operation today (Kuzma et al, 2009). It was predicated on the views that regulation should be based on science and that the risks posed by GM crops were the "same in kind" as those of non-GM products; therefore, no new laws were deemed to be required (National Research Council, 2000).

Figure 2. Brief history of the regulation of genetic engineering (Kuzma et al, 2009). EPA, Environmental Protection Agency; FIFRA, Federal Insecticide, Fungicide and Rodenticide Act; FDA, Food and Drug Administration; FPPA, Farmland Protection Policy Act; GMO, genetically modified organism; TOSCA, Toxic Substances Control Act; USDA, United States Department of Agriculture.

Various old and existing statutes were interpreted somewhat loosely in order to oversee the regulation of GM plants. Depending on the nature of the product, one or several federal agencies might be responsible.
GM plants can be regulated by the US Department of Agriculture (USDA) under the Federal Plant Pest Act as 'plant pests' if there is a perceived threat of them becoming 'pests' (similarly to weeds). Alternatively, if they are pest-resistant, they can be interpreted as 'plant pesticides' by the US Environmental Protection Agency (EPA) under the Federal Insecticide, Fungicide, and Rodenticide Act. Each statute requires some kind of pre-market or pre-release biosafety review: evaluation of potential impacts on other organisms in the environment, gene flow between the GM plant and wild relatives, and potential adverse effects on ecosystems. By contrast, the US Food and Drug Administration (FDA) treats GM food crops as equivalent to conventional food products; as such, no special regulations were promulgated under the Federal Food, Drug and Cosmetic Act for GM foods. The agency established a pre-market consultation process for GM and other novel foods that is entirely voluntary.

…TagMo-derived crops come in several categories relevant to regulation…

Finally, and importantly for our discussion, the US oversight system was built mostly around the idea that GM plants should be regulated on the basis of the characteristics of the end-product, not the process used to create them. In reality, however, the process used to create crops is significant, as highlighted by the fact that the USDA uses a process-based regulatory trigger (McHughen & Smyth, 2008). Far from being inconsequential, whether a plant is considered to be a product of genetic modification matters for oversight.

How will crops created by TagMo fit into this regulatory framework? If only subtle changes were made to individual genes, the argument could be made that the products are analogous to mutated conventional crops, which are neither regulated nor subject to pre-market or pre-release biosafety assessments (Breyer et al, 2009).
However, single mutations are not without risks; for example, they can lead to an increase in expressed plant toxins (National Research Council, 1989, 2000, 2002, 2004; Magana-Gomez & de la Barca 2009). Conversely, if new or foreign genes are introduced through TagMo methods, the resulting plants might not differ substantially from existing GM crops. Thus, TagMo-derived crops come in several categories relevant to regulation: TagMo-derived crops with inserted foreign DNA from sexually compatible or incompatible species; TagMo-derived crops with no DNA inserted, for instance those in which parts of the chromosome have been deleted or genes inactivated; and TagMo-derived crops that either replace a gene with a modified version or change its nucleotide sequence (Fig 1).TagMo-derived crops with foreign genetic material inserted are most similar to traditional GM crops, according to the USDA rule on “Importation, Interstate Movement, and Release Into the Environment of Certain Genetically Engineered Organisms”, which defines genetic engineering as “the genetic modification of organisms by recombinant DNA (rDNA) techniques” (USDA, 1997). In contrast to conventional transgenesis, TagMo enables scientists to predefine the sites into which foreign genes are inserted. If the site of foreign DNA insertion has been previously characterized and shown to have no negative consequences for the plant or its products, then perhaps regulatory requirements to characterize the insertion site and its effects on the plant could be streamlined.TagMo might be used to introduce foreign DNA from sexually compatible or incompatible species into a host organism, either by insertion or replacement. For example, foreign DNA from one species of Brassica—mustard family—can be introduced into another species of Brassica. Alternatively, TagMo might be used to introduce foreign DNA from any organism into the host, such as from bacteria or animals into plants. 
Arguments have been put forth advocating less stringent regulation of GM crops with cisgenic DNA sequences that come from sexually compatible species (Schouten et al, 2006). Russell and Sparrow (2008) critically evaluate these arguments and conclude that cisgenic GM crops may still have novel traits in novel settings and thus give rise to novel hazards. Furthermore, if cisgenics are not regulated, it might trigger a public backlash, which could be more costly in the long run (Russell & Sparrow, 2008). TagMo-derived crops with genetic sequences from sexually compatible species should therefore still be considered for regulation. Additional clarity and consistency are needed with respect to how cisgenics are defined in US regulatory policy, regardless of whether they are generated by established methods or by TagMo. The USDA regulatory definition of a GM crop is vague, and the EPA has a broad categorical exemption in its rules for GM crops with sequences from sexually compatible species (EPA, 2001).

The deletion of DNA sequences by TagMo to knock out a target gene is potentially of great agronomic value, as it could remove undesirable traits. For instance, it could eliminate anti-nutrients such as trypsin inhibitors in soybean that prevent the use of soy proteins by animals, or compounds that limit the value of a crop as an industrial material, such as ricin, which contaminates castor oil. Many mutagenesis methods yield products similar to those of TagMo. However, most conventional mutagenesis methods, including DNA alkylating agents or radioactivity, provide no precision in terms of the DNA sequences modified, and probably cause considerable collateral damage to the genome.
It could be argued that TagMo is less likely to cause unpredicted genomic changes; however, additional research is required to better understand off-target effects—that is, unintended modification of other sites—by the various TagMo platforms.

Generating targeted gene knockouts (Fig 1C) does not directly involve transfer of foreign DNA, and such plants might seem to warrant an unregulated status. However, most TagMos use reagents such as engineered nucleases, which are created by rDNA methods. The resulting product might therefore be classified as a GM crop under the existing USDA definition of genetic engineering (USDA, 1997), since most TagMos are created by introducing a target-specific nuclease gene into plant cells. It is also possible to deliver rDNA-derived nucleases to cells as RNA or protein, so that foreign DNA would not need to be introduced into plants to achieve the desired mutagenic outcome. In such cases, the rDNA molecule itself never encounters a plant cell. More direction is required from regulatory agencies to stipulate how rDNA can be used in the process of generating crops before regulated status is triggered.

TagMo-derived crops that introduce foreign transgenes or knock out native genes are similar to traditional GM crops or conventionally mutagenized plants, respectively, but TagMo crops that alter the DNA sequence of the target gene (Fig 1D) are more difficult to classify. For example, a GM plant could have a single nucleotide change that distinguishes it from its parent and that confers a new trait such as herbicide resistance. If such a subtle genetic alteration were attained by traditional mutagenesis or by screening for natural variation, the resulting plants would not be regulated.
As discussed above, if rDNA techniques are used to create the single-nucleotide TagMo, one could argue that it should be regulated. Regulation would then focus on the process rather than the product. If single nucleotide changes were exempt, would there be a threshold in the number of bases that can be modified before concerns are raised or regulatory scrutiny is triggered? Would regulation differ if the gene replacement involves a sexually compatible rather than an incompatible species?

Most of this discussion has focused on the use of engineered nucleases such as meganucleases or zinc-finger nucleases to create TagMos. Oligonucleotide-mediated mutagenesis (OMM), however, is also used to modify plant genes (Breyer et al, 2009). OMM uses chemically synthesized oligonucleotides that are homologous to the target gene, except for the nucleotides to be changed. Breyer et al (2009) argue that OMM “should not be considered as using recombinant nucleic acid molecules” and that “OMM should be considered as a form of mutagenesis, a technique which is excluded from the scope of the EU regulation.” However, they admit that the resulting crops could be considered GM organisms according to EU regulatory definitions for biotechnology. They report that in the USA, OMM plants have been declared non-GM by the USDA, but it is unclear whether the non-GM distinction in the USA has regulatory implications. OMM is already being used to develop crops with herbicide tolerance, so regulatory guidelines need to be clarified before market release.

In turning to address how TagMo-related oversight should proceed, two questions are central: how are decisions made, and who is involved in making them? The analysis above illustrates that many fundamental decisions need to be made concerning the way in which TagMo-derived products will be regulated and, more broadly, what constitutes a GM organism for regulatory purposes.
These decisions are inherently values-based, in that views on how to regulate TagMo products differ according to understandings of and attitudes towards agriculture, risk, nature and technology. Neglecting the values-based assumptions underlying these decisions can lead to poor decision-making, through misunderstanding the issues at hand, and to public and stakeholder backlash resulting from disagreements over values.

Bozeman & Sarewitz (2005) consider this problem in a framework of ‘market failures’ and ‘public failures’. GM crops have exhibited both. Market failures are exemplified by the loss of trade with the EU owing to different regulatory standards and levels of caution (PIFB, 2006). Furthermore, there has been a decline in the number of GM crops approved for interstate movement in the USA since 2001. Public failures result from incongruence between the actions of decision-makers and the values of the public. They are exemplified by the anti-GM sentiment in the labelling of organic foods in the USA and by court challenges to the biosafety review of GM crops by the USDA's Animal and Plant Health Inspection Service (McHughen & Smyth, 2008). These lawsuits have delayed approval of genetically engineered alfalfa and sugar beet, thus blurring the distinction between public and market failures. Public failures will probably ensue if TagMo crops slip into the market under the radar without adequate oversight.

Anticipatory governance is a framework with principles that are well suited to guiding TagMo-related oversight and to helping to avoid public failures.
It challenges an understanding of technology development that downplays the importance of societal factors—such as implications for stakeholders and the environment—and argues that societal factors should inform technology development and governance from the start (Macnaghten et al, 2005).

Anticipatory governance rests on three principles: foresight, integration of natural and social science research, and upstream public engagement (Karinen & Guston, 2010). The first two principles emphasize proactive engagement using interdisciplinary knowledge. Governance processes that use these principles include real-time technology assessment (Guston & Sarewitz, 2002) and upstream oversight assessment (Kuzma et al, 2008b). The third principle, upstream public engagement, involves stakeholders and the public in directing values-based assumptions within technology development and oversight (Wilsdon & Wills, 2004). Justifications for upstream public engagement are substantive (stakeholders and the public can provide information that improves decisions), instrumental (including stakeholders and the public in the decision-making process leads to more trusted decisions) and normative (citizens have a right to influence decisions about issues that affect them).

TagMo crop developers seem to be arguing for a ‘process-based’ exclusion of TagMo crops from regulatory oversight, without public knowledge of their development or ongoing regulatory communication. We propose that the discussion about how to regulate TagMo crops should be open, use public engagement and respect several criteria of oversight (Kuzma et al, 2008a).
These criteria should include not only biosafety, but also broader impacts on human and ecological health and well-being, distribution of health impacts, transparency, treatment of intellectual property and confidential business information, economic costs and benefits, and public confidence and values.

We also propose that the Coordinated Framework for Regulation of Biotechnology (CFRB) should be a starting point for TagMo oversight. The various categories of TagMo require an approach that can discern and address the risks associated with each application, and the CFRB allows for such flexibility. At the same time, the CFRB should improve public engagement and transparency, post-market monitoring and some aspects of technical risk assessment.

As we have argued, TagMo is on the verge of being broadly implemented to create crop varieties with new traits, and this raises many oversight questions. First, the way in which TagMo technologies will be classified and handled within the US regulatory system has yet to be determined. As these decisions are complex, values-based and have far-reaching implications, they should be made in a transparent way that draws on insights from the natural and social sciences, and involves stakeholders and the public. Second, as products derived from TagMo technologies will soon reach the marketplace, it is important to begin predicting and addressing potential regulatory challenges, to ensure that oversight systems are in place. The possibility of public failures with TagMo crops highlights the benefits of an anticipatory governance-based approach, which will help to ensure that the technology meets societal needs.

So far, the EU has emphasized governance approaches and stakeholder involvement in the regulation of new technologies more than the USA has. However, if the USA can agree on a regulatory system for TagMo crops that is the result of open and transparent discussions with the public and stakeholders, it could take the lead and act as a model for similar regulation in the EU and globally.
Before this can happen, a shift in US approaches to regulatory policy would be needed.

Jennifer Kuzma and Adam Kokotovich

Rinaldi A (2012) EMBO Reports 13(4): 303–307
Scientists and journalists try to engage the public with exciting stories, but who is guilty of overselling research, and what are the consequences?

Scientists love to hate the media for distorting science or getting the facts wrong. Even as they do so, they court publicity for their latest findings, which can bring a slew of media attention and public interest. Getting your research into the national press can result in great boons in terms of political and financial support. Conversely, when scientific discoveries turn out to be wrong, or to have been hyped, the negative press can have a damaging effect on careers and, perhaps more importantly, on the image of science itself. Walking the line between ‘selling’ a story and ‘hyping’ it far beyond the evidence is no easy task. Professional science communicators work carefully with scientists and journalists to ensure that the messages from research are translated for the public accurately and appropriately. But when things do go wrong, is it always the fault of journalists, or are scientists and those they employ to communicate sometimes equally to blame?

Hyping in science has existed since the dawn of research itself. When scientists relied on the money of wealthy benefactors with little expertise to fund their research, the temptation to claim that they could turn lead into gold, or that they could discover the secret of eternal life, must have been huge. In the modern era, hyping of research tends to make less exuberant claims, but it is no less damaging and no less deceitful, even if sometimes unintentionally so. A few recent cases have brought this problem to the surface again.

The most frenzied of these was the report in Science last year that a newly isolated bacterial strain could replace phosphate with arsenate in cellular constituents such as nucleic acids and proteins [1].
The study, led by NASA astrobiologist Felisa Wolfe-Simon, showed that a new strain of the Halomonadaceae family of halophilic proteobacteria, isolated from the alkaline and hypersaline Mono Lake in California (Fig 1), could not only survive in arsenic-rich conditions, such as those found in its original environment, but even thrive by using arsenic entirely in place of phosphorus. “The definition of life has just expanded. As we pursue our efforts to seek signs of life in the solar system, we have to think more broadly, more diversely and consider life as we do not know it,” commented Ed Weiler, NASA's associate administrator for the Science Mission Directorate at the agency's headquarters in Washington, in the original press release [2].

Figure 1 | Sunrise at Mono Lake. Mono Lake, located in eastern California, is bounded to the west by the Sierra Nevada mountains. This ancient alkaline lake is known for unusual tufa (limestone) formations rising from the water's surface (shown here), as well as for its hypersalinity and high concentrations of arsenic. See Wolfe-Simon et al [1]. Credit: Henry Bortman.

The accompanying “search for life beyond Earth” and “alternative biochemistry makeup” hints contained in the same release were lapped up by the media, which covered the breakthrough with headlines such as “Arsenic-loving bacteria may help in hunt for alien life” (BBC News), “Arsenic-based bacteria point to new life forms” (New Scientist) and “Arsenic-feeding bacteria find expands traditional notions of life” (CNN). However, it did not take long for criticism to manifest, with many scientists openly questioning whether background levels of phosphorus could have fuelled the bacteria's growth in the cultures, whether arsenate compounds are even stable in aqueous solution, and whether the tests the authors used to prove that arsenic atoms were replacing phosphorus ones in key biomolecules were accurate.
The backlash was so bitter that Science published the concerns of several research groups commenting on the technical shortcomings of the study, and went so far as to change its original press release for reporters, adding a warning note that reads: “Clarification: this paper describes a bacterium that substitutes arsenic for a small percentage of its phosphorus, rather than living entirely off arsenic.”

Microbiologists Simon Silver and Le T. Phung, from the University of Illinois, Chicago, USA, were heavily critical of the study, voicing their concern in one of the journals of the Federation of European Microbiological Societies, FEMS Microbiology Letters. “The recent online report in Science […] either (1) wonderfully expands our imaginations as to how living cells might function […] or (2) is just the newest example of how scientist-authors can walk off the plank in their imaginations when interpreting their results, how peer reviewers (if there were any) simply missed their responsibilities and how a press release from the publisher of Science can result in irresponsible publicity in the New York Times and on television. We suggest the latter alternative is the case, and that this report should have been stopped at each of several stages” [3]. Meanwhile, Wolfe-Simon is looking for another chance to prove she was right about the arsenic-loving bug, and Silver and colleagues have completed the bacterium's genome shotgun sequencing and found 3,400 genes in its 3.5 million bases (www.ncbi.nlm.nih.gov/Traces/wgs/?val=AHBC01).

“I can only comment that it would probably be best if one had avoided a flurry of press conferences and speculative extrapolations. The discovery, if true, would be similarly impressive without any hype in the press releases,” commented John Ioannidis, Professor of Medicine at Stanford University School of Medicine in the USA.
“I also think that this is the kind of discovery that can definitely wait for a validation by several independent teams before stirring the world. It is not the type of research finding that one cannot wait to trumpet as if thousands and millions of people were to die if they did not know about it,” he explained. “If validated, it may be material for a Nobel prize, but if not, then the claims would backfire on the credibility of science in the public view.”

Another instructive example of science hyping was sparked by a recent report of fossil teeth, dating to between 200,000 and 400,000 years ago, which were unearthed in the Qesem Cave near Tel Aviv by Israeli and Spanish scientists [4]. Although the teeth cannot yet be conclusively ascribed to Homo sapiens, Homo neanderthalensis or any other species of hominid, the media coverage and the original press release from Tel Aviv University stretched the relevance of the story—and the evidence—proclaiming that the finding demonstrates humans lived in Israel 400,000 years ago, which should force scientists to rewrite human history. Were such evidence of modern humans in the Middle East so long ago confirmed, it would indeed clash with the prevailing view of human origin in Africa some 200,000 years ago and the dispersal from the cradle continent that began about 70,000 years ago. But, as freelance science writer Brian Switek has pointed out, “The identity of the Qesem Cave humans cannot be conclusively determined. All the grandiose statements about their relevance to the origin of our species reach beyond what the actual fossil material will allow” [5].

An example of sensationalist coverage? “It has long been believed that modern man emerged from the continent of Africa 200,000 years ago.
Now Tel Aviv University archaeologists have uncovered evidence that Homo sapiens roamed the land now called Israel as early as 400,000 years ago—the earliest evidence for the existence of modern man anywhere in the world,” reads a press release from the New York-based organization, American Friends of Tel Aviv University [6].

“The extent of hype depends on how people interpret facts and evidence, and their intent in the claims they are making. Hype in science can range from ‘no hype’, where predictions of scientific futures are 100% fact based, to complete exaggeration based on no facts or evidence,” commented Zubin Master, a researcher in science ethics at the University of Alberta in Edmonton, Canada. “Intention also plays a role in hype and the prediction of scientific futures, as making extravagant claims, for example in an attempt to secure funds, could be tantamount to lying.”

Are scientists more and more often indulging in creative speculation when interpreting their results, just to get extraordinary media coverage of their discoveries? Is science journalism progressively shifting towards hyping stories to attract readers?

“The vast majority of scientific work can wait for some independent validation before its importance is trumpeted to the wider public. Over-interpretation of results is common and as scientists we are continuously under pressure to show that we make big discoveries,” commented Ioannidis.
“However, probably our role [as scientists] is more important in making sure that we provide balanced views of evidence and in identifying how we can question more rigorously the validity of our own discoveries.”

Stephanie Suhr, who is involved in the management of the European XFEL—a facility being built in Germany to generate intense X-ray flashes for use in many disciplines—notes in her introduction to a series of essays on the ethics of science journalism that, “Arguably, there may also be an increasing temptation for scientists to hype their research and ‘hit the headlines’” [7]. In her analysis, Suhr quotes at least one instance—the discovery in 2009 of the Darwinius masillae fossil, presented as the missing link in human evolution [8]—in which the release of a ‘breakthrough’ scientific publication seems to have been coordinated with simultaneous documentaries and press releases, resulting in what can be considered a case study in science hyping [7].

Although there is nothing wrong in principle with a broad communication strategy aimed at the rapid dissemination of a scientific discovery, some caveats exist. “[This] strategy […] might be better applied to a scientific subject or body of research. When applied to a single study, there [is] a far greater likelihood of engaging in unmerited hype with the risk of diminishing public trust or at least numbing the audience to claims of ‘startling new discoveries’,” wrote science communication expert Matthew Nisbet in his Age of Engagement blog (bigthink.com/blogs/age-of-engagement) about how media communication was managed in the Darwinius affair.
“[A]ctivating the various channels and audiences was the right strategy but the language and metaphor used strayed into the realm of hype,” Nisbet, who is an Associate Professor in the School of Communication at American University, Washington DC, USA, commented in his post [9]. “We are ethically bound to think carefully about how to go beyond the very small audience that follows traditional science coverage and think systematically about how to reach a wider, more diverse audience via multiple media platforms. But in engaging with these new media platforms and audiences, we are also ethically bound to avoid hype and maintain accuracy and context” [9].

But the blame for science hype cannot be laid solely at the feet of scientists and press officers. Journalists must take their fair share of reproach. “As news online comes faster and faster, there is an enormous temptation for media outlets and journalists to quickly publish topics that will grab the readers' attention, sometimes at the cost of accuracy,” Suhr wrote [7]. Of course, the media landscape is extremely varied, as science blogger and writer Bora Zivkovic pointed out. “There is no unified thing called ‘Media’. There are wonderful specialized science writers out there, and there are beat reporters who occasionally get assigned a science story as one of several they have to file every day,” he explained. “There are careful reporters, and there are those who tend to hype. There are media outlets that value accuracy above everything else; others that put beauty of language above all else; and there are outlets that value speed, sexy headlines and ad revenue above all.”

One notable example of media-sourced hype comes from J. Craig Venter's announcement in the spring of 2010 of the first self-replicating bacterial cell controlled by a synthetic genome (Fig 2).
A major media buzz ensued, over-emphasizing and somewhat distorting an otherwise remarkable scientific achievement. Press coverage ranged from announcing ‘artificial life’ to claiming that Venter was playing God, adding to the cultural and bioethical tension the warning that synthetic organisms could be turned into biological weapons or cause environmental disasters.

Figure 2 | Schematic depicting the assembly of a synthetic Mycoplasma mycoides genome in yeast. For details of the construction of the genome, please see the original article. From Gibson et al [13] Science 329, 52–56. Reprinted with permission from AAAS.

“The notion that scientists might some day create life is a fraught meme in Western culture. One mustn't mess with such things, we are told, because the creation of life is the province of gods, monsters, and practitioners of the dark arts. Thus, any hint that science may be on the verge of putting the power of creation into the hands of mere mortals elicits a certain discomfort, even if the hint amounts to no more than distorted gossip,” remarked Rob Carlson, who writes on the future role of biology as a human technology, about the public reaction and the media frenzy that arose from the news [10].

Yet the media can also behave responsibly when faced with extravagant claims in press releases. Fiona Fox, Chief Executive of the Science Media Centre in the UK, details such an example in her blog, On Science and the Media (fionafox.blogspot.com). The Science Media Centre's role is to facilitate communication between scientists and the press, so they often receive calls from journalists asking to be put in touch with an expert. In this case, the journalist asked for an expert to comment on a story about silver being more effective against cancer than chemotherapy.
A wild claim; yet, as Fox points out in her blog, the hype came directly from the institution's press office: “Under the heading ‘A silver bullet to beat cancer?’ the top line of the press release stated that ‘Lab tests have shown that it (silver) is as effective as the leading chemotherapy drug—and may have far fewer side effects.’ Far from including any caveats or cautionary notes up front, the press office even included an introductory note claiming that the study ‘has confirmed the quack claim that silver has cancer-killing properties’” [11]. Fox praises the majority of the UK national press that concluded that this was not a big story to cover, pointing out that, “We've now got to the stage where not only do the best science journalists have to fight the perverse news values of their news editors but also to try to read between the lines of overhyped press releases to get to the truth of what a scientific study is really claiming.”

Yet, is hype detrimental to science? In many instances, the concern is that hype inflates public expectations, resulting in a loss of trust in a given technology or research avenue if promises are not kept; however, the premise is not fully proven (Sidebar A). “There is no empirical evidence to suggest that unmet promises due to hype in biotechnology, and possibly other scientific fields, will lead to a loss of public trust and, potentially, a loss of public support for science. Thus, arguments made on hype and public trust must be nuanced to reflect this understanding,” Master pointed out.

Sidebar A | Up and down the hype cycle

Although hype is usually considered a negative and largely unwanted aspect of scientific and technological communication, it cannot be denied that emphasizing, at least initially, the benefits of a given technology can further its development and use. From this point of view, hype can be seen as a normal stage of technological development, within certain limits. The maturity, adoption and application of specific technologies apparently follow a common trend pattern, described by the information technology company Gartner, Inc. as the ‘hype cycle’. The idea is based on the observation that, after an initial trigger phase, novel technologies pass through a peak of over-excitement (or hype), often followed by a subsequent general disenchantment, before eventually coming under the spotlight again and reaching a stable plateau of productivity. Thus, hype cycles “[h]ighlight overhyped areas against those that are high impact, estimate how long technologies and trends will take to reach maturity, and help organizations decide when to adopt” (www.gartner.com).

“Science is a human endeavour and as such it is inevitably shaped by our subjective responses. Scientists are not immune to these same reactions and it might be valuable to evaluate the visibility of different scientific concepts or technologies using the hype cycle,” commented Pedro Beltrao, a cellular biologist at the University of California San Francisco, USA, who runs the Public Rambling blog (pbeltrao.blogspot.com) about bioinformatics science and technology. The exercise of placing technologies in the context of the hype cycle can help us to distinguish between their real productive value and our subjective level of excitement, Beltrao explained. “As an example, I have tried to place a few concepts and technologies related to systems biology along the cycle's axis of visibility and maturity [see illustration].
Using this, one could suggest that technologies like gene-expression arrays or mass-spectrometry have reached a stable productivity level, while the potential of concepts like personalized medicine or genome-wide association studies (GWAS) might be currently over-valued.”

Together with bioethicist colleague David Resnik, Master has recently highlighted the need for empirical research that examines the relationships between hype, public trust, and public enthusiasm and/or support [12]. Their argument proposes that studies on the effect of hype on public trust can be undertaken using both quantitative and qualitative methods: “Research can be designed to measure hype through a variety of sources including websites, blogs, movies, billboards, magazines, scientific publications, and press releases,” the authors write. “Semi-structured interviews with several specific stakeholders including genetics researchers, media representatives, patient advocates, other academic researchers (that is, ethicists, lawyers, and social scientists), physicians, ethics review board members, patients with genetic diseases, government spokespersons, and politicians could be performed. Also, members of the general public would be interviewed” [12]. They also point out that such an approach to estimating hype and its effect on public enthusiasm and support should carefully define the public under study, as different publics might have different expectations of scientific research, and will therefore have different baseline levels of trust.

Ultimately, exaggerating, hyping or outright lying is rarely a good thing. Hyping science is detrimental, to various degrees, to all science communication stakeholders—scientists, institutions, journalists, writers, newspapers and the public.
It is important that scientists take responsibility for their share of the hyping and do not automatically blame the media for making things up or getting things wrong. Such discipline in science communication is increasingly important as science searches for answers to the challenges of this century. Increased awareness of the underlying risks of over-hyping research should help to balance the scientific facts with speculation on the enticing truths and possibilities they reveal. The real challenge lies in favouring such an evolved approach to science communication in the face of a rolling 24-hour news cycle, tight science budgets and the uncontrolled and uncontrollable world of the Internet.

The hype cycle for the life sciences. Pedro Beltrao's view of the excitement–disappointment–maturation cycle of bioscience-related technologies and/or ideas. GWAS: genome-wide association studies. Credit: Pedro Beltrao.
