Similar Articles
1.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis and several dozen died after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

…many patients cannot trust their drugs to be effective or even safe

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9].

Developing countries are particularly susceptible to substandard and fake medicine. Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). “People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system,” explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected combined with relatively low penalties has turned falsifying medicine into the “perfect crime” [2].

Figure 1: Women sell smuggled, counterfeit medicine on the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the past two years in Ivory Coast. (Issouf Sanogo/AFP Photo/Getty Images)

There are two main categories of illegitimate drugs. ‘Substandard’ medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified’ medicine is made with clear criminal intent.
It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses it into pills [10]. Whilst falsified medicines do not typically contain any active ingredients, substandard medicine might contain subtherapeutic amounts. This is particularly problematic when it comes to anti-infectious drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13], and increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insuranceEven if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are interested in good-quality medicines, the different parties seem to have difficulties coming to terms with how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting quality of medicine [14,16].The concern that intellectual property (IP) interests threaten public health dates back to the ‘Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement'' of the World Trade Organization (WTO), adopted in 1994, to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine. Although it includes flexibility, such as the possibility for governments to grant compulsory licenses to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO''s Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multi-national pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. “We''re left with decisions being taken based on patents and trademarks that should be taken based on health,” commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA. 
“The health community is shooting themselves in the foot.”

The conflation of health-care and IP issues is reflected in the unclear use of the term ‘counterfeit’ [2,14]. “Since the 1990s the World Health Organization (WHO) has used the term ‘counterfeit’ in the sense we now use ‘falsified’,” explained Hogerzeil. “The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit’ got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit’—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.

The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn't make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to react is tragically ironic, as this stance hampers the growth of its own generic companies such as Ranbaxy, Cipla and Piramal. “I certainly don't believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because its products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company's reputation and have a negative impact on its revenues when customers stop buying the product.

The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine. It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multi-national drug companies and the possibility that issues on quality of medicines were conflated with the attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010. For example, it no longer hosts IMPACT's secretariat at its headquarters in Geneva [2].

‘Substandard’ medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified’ medicine is made with clear criminal intent

In 2010, the WHO's member states established a working group to further investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper's authors demand more action and propose a binding legal framework: a treaty.
“Until we have stronger public health law, I don''t think that we are going to resolve this problem,” Bate, who is one of the authors of the paper, said.Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on understanding the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice such as a “voluntary soft law” that countries can sign to express their will to do better. “At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic,” Hogerzeil, who is also on the IOM committee, commented. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: “The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don''t start negotiating one.”Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempts to safeguard medicines need to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act. To counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies do better and by improving quality control of drug regulatory authorities.Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multi-national companies benefit from economies of scale to cope with these problems. But smaller companies often struggle and compromise in quality [2,21]. “India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small,” Hogerzeil explained. “And only by being strict, can you force them to combine and to become bigger industries that can afford good-quality assurance systems.” Clamping down on drug quality will therefore lead to a consolidation of the industry, which is an essential step. “If you look at Europe and the US, there were hundreds of drug companies—now there are dozens. And if you look at the situation in India and China today, there are thousands and that will have to come down to dozens as well,” Bate explained.…innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting […] medicineIn addition to consolidating the market by applying stricter rules, the IOM has also suggested measures for supporting companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards. 
Another suggestion is to harmonize market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.

Once the medicine leaves the manufacturer, controlling distribution systems becomes another major challenge in combatting falsified and substandard medicine. Global drug supply chains have grown increasingly complicated; drugs cross borders, are sold back and forth between wholesalers and distributors, and are often repackaged. Still, there is a main difference between developing and developed countries. In the latter case, relatively few companies dominate the market, whereas in poorer nations, the distribution system is often fragmented and uncontrolled, with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improve drug quality. “And we can start in the US,” Hogerzeil commented.

…India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products

Distribution could be improved at different levels, starting with the import of medicine. “There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in,” Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. “Everybody dealing with medicine has to be licensed,” Hogerzeil said. “And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent falsified medicines from entering the legal supply chain.” The last level would be a track-and-trace system to identify authentic drugs [2]. Every single package of medicine should be identifiable through an individual marker, such as a 3D bar code. Once it is sold, it is ticked off in a central database, so the marker cannot be reused (a minimal sketch of such a registry appears at the end of this article).

According to Hogerzeil, equivalent measures at these different levels should be established in every country. “I don't believe in double standards,” he said. “Don't say to Uganda: ‘you can't do that’. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation.”

Nigeria, for instance, has demonstrated that with enough political will, it is possible to reduce the proliferation of falsified and substandard medicine. Nigeria had been a major source for falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili's successor, is committed to continuing her work [10].
Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized event, the former head of China's State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China's fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see quality of medicine as a priority. But they should, and affluent countries should help. Not only because health is a human right, but also for economic reasons. A great deal of time and money is invested into testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients. Falsified and substandard medicines are a financial burden to health systems and the emergence of drug-resistant pathogens might make invaluable medications useless. Investing in the safety of medicine is therefore a humane and an economic imperative.
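The track-and-trace idea described above (every pack carries a unique marker that is checked off in a central database at the point of sale, so the code cannot be reused) can be illustrated with a minimal sketch. This is only an illustration of the principle under stated assumptions: the PackRegistry class, the serial numbers and the alert messages are hypothetical and do not describe any real national system or vendor API.

```python
# Minimal sketch of a track-and-trace registry, assuming one central database.
# All names (PackRegistry, the serial format, the messages) are hypothetical.

class PackRegistry:
    """Central record of serialized medicine packs (illustrative only)."""

    def __init__(self):
        self._status = {}  # serial -> "registered" or "dispensed"

    def register(self, serial):
        """Called by the manufacturer when a pack is produced and serialized."""
        if serial in self._status:
            raise ValueError(f"serial {serial} already exists: possible duplication")
        self._status[serial] = "registered"

    def verify_and_dispense(self, serial):
        """Called when a pharmacy scans the pack at the point of sale."""
        state = self._status.get(serial)
        if state is None:
            return "ALERT: unknown serial, pack may be falsified"
        if state == "dispensed":
            return "ALERT: serial already used, pack may be a copy"
        self._status[serial] = "dispensed"  # tick the marker off so it cannot be reused
        return "OK: pack verified and marked as sold"


if __name__ == "__main__":
    registry = PackRegistry()
    registry.register("PK-0001")
    print(registry.verify_and_dispense("PK-0001"))  # OK: pack verified and marked as sold
    print(registry.verify_and_dispense("PK-0001"))  # ALERT: serial already used
    print(registry.verify_and_dispense("PK-9999"))  # ALERT: unknown serial
```

A deployed system would also need hard-to-guess serial generation, shared governance of the database and the audited "paper trail" of transactions mentioned above; the sketch only shows why a reused or unknown code can be flagged at the counter.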

2.
The public view of life-extension technologies is more nuanced than expected and researchers must engage in discussions if they hope to promote awareness and acceptanceThere is increasing research and commercial interest in the development of novel interventions that might be able to extend human life expectancy by decelerating the ageing process. In this context, there is unabated interest in the life-extending effects of caloric restriction in mammals, and there are great hopes for drugs that could slow human ageing by mimicking its effects (Fontana et al, 2010). The multinational pharmaceutical company GlaxoSmithKline, for example, acquired Sirtris Pharmaceuticals in 2008, ostensibly for their portfolio of drugs targeting ‘diseases of ageing''. More recently, the immunosuppressant drug rapamycin has been shown to extend maximum lifespan in mice (Harrison et al, 2009). Such findings have stoked the kind of enthusiasm that has become common in media reports of life-extension and anti-ageing research, with claims that rapamycin might be “the cure for all that ails” (Hasty, 2009), or that it is an “anti-aging drug [that] could be used today” (Blagosklonny, 2007).Given the academic, commercial and media interest in prolonging human lifespan—a centuries-old dream of humanity—it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives, and to ask whether they would be willing to buy and use drugs that slow the ageing process. Surveys that have addressed these questions, have given some rather surprising results, contrary to the expectations of many researchers in the field. They have also highlighted that although human life extension (HLE) and ageing are topics with enormous implications for society and individuals, scientists have not communicated efficiently with the public about their research and its possible applications.Given the academic, commercial and media interest in prolonging human lifespan […] it is interesting to gauge what the public thinks about the possibility of living longer, healthier lives…Proponents and opponents of HLE often assume that public attitudes towards ageing interventions will be strongly for or against, but until now, there has been little empirical evidence with which to test these assumptions (Lucke & Hall, 2005). We recently surveyed members of the public in Australia and found a variety of opinions, including some ambivalence towards the development and use of drugs that could slow ageing and increase lifespan. Our findings suggest that many members of the public anticipate both positive and negative outcomes from this work (Partridge 2009a, b, 2010; Underwood et al, 2009).In a community survey of public attitudes towards HLE we found that around two-thirds of a sample of 605 Australian adults supported research with the potential to increase the maximum human lifespan by slowing ageing (Partridge et al, 2010). However, only one-third expressed an interest in using an anti-ageing pill if it were developed. Half of the respondents were not interested in personally using such a pill and around one in ten were undecided.Some proponents of HLE anticipate their research being impeded by strong public antipathy (Miller, 2002, 2009). Richard Miller has claimed that opposition to the development of anti-ageing interventions often exists because of an “irrational public predisposition” to think that increased lifespans will only lead to elongation of infirmity. 
He has called this “gerontologiphobia”—a shared feeling among laypeople that while research to cure age-related diseases such as dementia is laudable, research that aims to intervene in ageing is a “public menace” (Miller, 2002).We found broad support for the amelioration of age-related diseases and for technologies that might preserve quality of life, but scepticism about a major promise of HLE—that it will delay the onset of age-related diseases and extend an individual''s healthy lifespan. From the people we interviewed, the most commonly cited potential negative personal outcome of HLE was that it would extend the number of years a person spent with chronic illnesses and poor quality of life (Partridge et al, 2009a). Although some members of the public envisioned more years spent in good health, almost 40% of participants were concerned that a drug to slow ageing would do more harm than good to them personally; another 13% were unsure about the benefits and costs (Partridge et al, 2010).…it might be that advocates of HLE have failed to persuade the public on this issueIt would be unwise to label such concerns as irrational, when it might be that advocates of HLE have failed to persuade the public on this issue. Have HLE researchers explained what they have discovered about ageing and what it means? Perhaps the public see the claims that have been made about HLE as ‘too good to be true‘.Results of surveys of biogerontologists suggest that they are either unaware or dismissive of public concerns about HLE. They often ignore them, dismiss them as “far-fetched”, or feel no responsibility “to respond” (Settersten Jr et al, 2008). Given this attitude, it is perhaps not surprising that the public are sceptical of their claims.Scientists are not always clear about the outcomes of their work, biogerontologists included. Although the life-extending effects of interventions in animal models are invoked as arguments for supporting anti-ageing research, it is not certain that these interventions will also extend healthy lifespans in humans. Miller (2009) reassuringly claims that the available evidence consistently suggests that quality of life is maintained in laboratory animals with extended lifespans, but he acknowledges that the evidence is “sparse” and urges more research on the topic (Miller, 2009). In the light of such ambiguity, researchers need to respond to public concerns in ways that reflect the available evidence and the potential of their work, without becoming apostles for technologies that have not yet been developed. An anti-ageing drug that extends lifespan without maintaining quality of life is clearly undesirable, but the public needs to be persuaded that such an outcome can be avoided.The public is also concerned about the possible adverse side effects of anti-ageing drugs. Many people were bemused when they discovered that members of the Caloric Restriction Society experienced a loss of libido and loss of muscle mass as a result of adhering to a low-calorie diet to extend their longevity—for many people, such side effects would not be worth the promise of some extra years of life. Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humans (Fontana et al, 2010). 
If researchers do not discuss these possible effects, then a curious public might draw their own conclusions.Adverse side effects are acknowledged as a considerable potential challenge to the development of an effective life-extending drug in humansSome HLE advocates seem eager to tout potential anti-ageing drugs as being free from adverse side effects. For example, Blagosklonny (2007) has argued that rapamycin could be used to prevent age-related diseases in humans because it is “a non-toxic, well tolerated drug that is suitable for everyday oral administration” with its major “side-effects” being anti-tumour, bone-protecting, and mimicking caloric restriction effects. By contrast, Kaeberlein & Kennedy (2009) have advised the public against using the drug because of its immunosuppressive effects.Aubrey de Grey has called for scientists to provide more optimistic timescales for HLE on several occasions. He claims that public opposition to interventions in ageing is based on “extraordinarily transparently flawed opinions” that HLE would be unethical and unsustainable (de Grey, 2004). In his view, public opposition is driven by scepticism about whether HLE will be possible, and that concerns about extending infirmity, injustice or social harms are simply excuses to justify people''s belief that ageing is ‘not so bad'' (de Grey, 2007). He argues that this “pro-ageing trance” can only be broken by persuading the public that HLE technologies are just around the corner.Contrary to de Grey''s expectations of public pessimism, 75% of our survey participants thought that HLE technologies were likely to be developed in the near future. Furthermore, concerns about the personal, social and ethical implications of ageing interventions and HLE were not confined to those who believed that HLE is not feasible (Partridge et al, 2010).Juengst et al (2003) have rightly pointed out that any interventions that slow ageing and substantially increase human longevity might generate more social, economic, political, legal, ethical and public health issues than any other technological advance in biomedicine. Our survey supports this idea; the major ethical concerns raised by members of the public reflect the many and diverse issues that are discussed in the bioethics literature (Partridge et al, 2009b; Partridge & Hall, 2007).When pressed, even enthusiasts admit that a drastic extension of human life might be a mixed blessing. A recent review by researchers at the US National Institute on Aging pointed to several economic and social challenges that arise from longevity extension (Sierra et al, 2009). Perry (2004) suggests that the ability to slow ageing will cause “profound changes” and a “firestorm of controversy”. Even de Grey (2005) concedes that the development of an effective way to slow ageing will cause “mayhem” and “absolute pandemonium”. If even the advocates of anti-ageing and HLE anticipate widespread societal disruption, the public is right to express concerns about the prospect of these things becoming reality. It is accordingly unfair to dismiss public concerns about the social and ethical implications as “irrational”, “inane” or “breathtakingly stupid” (de Grey, 2004).The breadth of the possible implications of HLE reinforces the need for more discussion about the funding of such research and management of its outcomes ( Juengst et al, 2003). Biogerontologists need to take public concerns more seriously if they hope to foster support for their work. 
If there are misperceptions about the likely outcomes of intervention in ageing, then biogerontologists need to better explain their research to the public and discuss how their concerns will be addressed. It is not enough to hope that a breakthrough in human ageing research will automatically assuage public concerns about the effects of HLE on quality of life, overpopulation, economic sustainability, the environment and inequities in access to such technologies. The trajectories of other controversial research areas—such as human embryonic stem cell research and assisted reproductive technologies (Deech & Smajdor, 2007)—have shown that “listening to public concerns on research and responding appropriately” is a more effective way of fostering support than arrogant dismissal of public concerns (Anon, 2009).

Biogerontologists need to take public concerns more seriously if they hope to foster support for their work

Brad Partridge, Jayne Lucke, Wayne Hall

3.
4.
Martinson BC. EMBO Reports 2011, 12(8): 758–762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor for economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investments.…questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientistsThe training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic ‘reproduction rates'' that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced or concerns about unintended consequences that may result from such overproduction. Yet recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level does competition become counter-productive.Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct.The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA as it is currently configured supports a feedback system of institutional incentives that generate excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training. 
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures.Questions raised about whether too many scientists are being produced or concerns about the unintended consequences of such overproduction are less commonThe resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to ‘play it safe'' and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are likely able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding. In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs.My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns.In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). 
Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments.Early last year, Francis Collins, Director of the NIH, published a PolicyForum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular. He notes the increasing stress in the biomedical research community after the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.”Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows. He observes that “only a select few will go on to become independent research scientists in academia”, and argues that “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009).His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work''”. 
He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.”If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edgeAlthough I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce.Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes. They would also ignore the fact that universities are generally not confronted with the externalities resulting from overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural'' funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on the interconnected assumptions that, as one of the primary ‘outputs'' or ‘products'' of the university, more doctorally trained individuals are always better than fewer, and because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research.“…the current system has succeeded in maximizing the amount of research […] it has also degraded the quality of graduate training and led to an overproduction of PhDs…”However, it has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005) a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007). Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007).In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). 
He noted the reliance of AMCs on the relatively low-cost, but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society''s investment in biomedical research and clinical care was continuously and sharply expanding.” From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J. de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963).In May 2009, echoing some of Korn''s observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control''” (Mervis, 2009).The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientistsAlthough the issue of what I will call the ‘academic birth rate'' is the central concern of this analysis, the ‘academic end-of-life'' also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University, “‘if I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support “the 100 people ‘I''ve trained […] to replace me''” (Kaiser, 2008). While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective. 
The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died.Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d''etre.The production of knowledge in science, particularly of the ‘revolutionary'' variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task. In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009).…most of those who express such concerns have been reticent to acknowledge the role of universities themselves in creating and maintaining the situationBruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the “New Innovator Award,” the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science. 
Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new ‘innovators’ reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms.

Our studies on research integrity have been mostly oriented towards understanding how the influences within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010). My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness, in terms of scientific innovation and productivity (Charlton, 2009).

To the extent that we have an ecology with steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal’ science? Would this not reduce the effectiveness of the institution of biomedical research? I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research is one way to insure against it.

5.
Blurring lines     
The research activities of direct-to-consumer genetic testing companies raise questions about consumers as research subjectsThe recent rise of companies that offer genetic testing directly to consumers, bypassing the traditional face-to-face consultation with a health-care professional, has created a steady stream of debate over the actual and potential value of these services (Hogarth et al, 2008). Despite the debates, however, the reality remains that these services are being offered and have genuine consequences for consumers. As opposed to the issues that have regularly been discussed regarding direct-to-consumer (DTC) genetic testing, the fact that these companies use consumers'' data to perform research has been given relatively little attention. This omission is misconceived as this practice—within the wider realm of DTC genetic testing services—raises its own questions and concerns. In particular, it is blurring the line between consumers and research subjects, which threatens to undermine the public trust and confidence in genetic research that the scientific community has been trying to build over the past decades.Even when a company is relatively transparent about its research activities, one might still be concerned by a lack of consumer awareness of these activitiesWith this in mind, we analysed the websites—including informed consent forms and privacy policies—of five companies that offer DTC full genome testing: 23andMe, deCODE, Navigenics, Gene Essence—the genetic testing service offered by the company BioMarker Pharmaceuticals—and SeqWright. Two questions guided our study: Are consumers aware that the data generated by the company to fulfil the terms of their service will later be used for research? Even if this is the case, is the process of consent provided by companies ethically acceptable from the point of view of academic research?As there are no empirical data available to answer the first question, we turned to the websites of the companies to understand how explicitly they present their research activities. At the time of the study—from July 2009 to January 2010—23andMe, deCODE and Navigenics candidly revealed on their websites that they conduct research using consumer data (Sidebar A). By contrast, SeqWright and Gene Essence provided what we identified as indirect and even ambiguous information about their research activities. For example, in a SeqWright online order form, the company notes: “Please volunteer any diseases from which you currently suffer (this can help us advance medical research by enabling us [sic] discover new SNP [single nucleotide polymorphism]/Disease associations)”. The information in Gene Essence''s privacy policy was similarly vague (http://geneessence.com/our-labs/privacy-policy.html), stating that “electing to provide Optional Profile Information may enable the Company to advance the science of genetics and provide you with an even better understanding of who you are genetically”.

Sidebar A | Information provided by direct-to-consumer genetic testing companies*

23andMe: “You understand that your genetic and other contributed personal information will be stored in 23andMe research databases, and authorized personnel of 23andMe will conduct research using said databases.” (https://www.23andme.com/about/consent; accessed 29 January 2010)

deCODE: “Information that you provide about yourself under the security of your account and privacy of your chosen username may be used by deCODEme only to gather statistical aggregate information about the users of the deCODEme website. Such analysis may include information that we would like to be able to report back to you and other users of deCODEme, such as in counting the number of users grouped by gender or age, or associating genetic variants with any of the self-reported user attributes. In any such analyses and in presenting any such statistical information, deCODE will ensure that user identities are not exposed.” (http://www.decodeme.com/faq; accessed 29 January 2010)

Navigenics: “Navigenics is continuously improving the quality of our service, and we strive to contribute to scientific and medical research. To that end, we might de-link Your Genetic Data and Your Phenotype Information and combine it with other members' information so that we can perform research to: […] Discover or validate associations between certain genetic variations and certain health conditions or traits, as well as other insights regarding human health.” (http://www.navigenics.com/visitor/what_we_offer/our_policies/informed_consent/health_compass; accessed 29 January 2010)

*See main text for information from SeqWright and Gene Essence.

If, as appears to be the case, these statements are the only declarations offered by these two companies alluding to their presumed research activities, it is virtually impossible for consumers to understand that their data will be used for research purposes. Moreover, despite the fact that the three other companies do state that they conduct research using consumer genotypes, even their declarations still give cause for concern. For instance, both Navigenics and deCODE ‘tuck away’ most of the information in their terms of service agreements, privacy policies, or in the informed consent sections of their websites. This is worrisome, as most consumers do not even read and/or understand the ‘legalese’ or ‘small print’ when signing online forms (ICO, 2008).

…many studies show that participants who have agreed to have their tissue used for one type of research do not necessarily automatically agree to take part in other studies…

Even when a company is relatively transparent about its research activities, one might still be concerned by a lack of consumer awareness of these activities. Between July and September 2009, 23andMe offered a new service called the “23andMe research edition”, which was prominently displayed on the company website. This version of their service, which was part of what the company calls the “23andMe research revolution”, was offered for US$99—one-quarter of the price of their traditional personal genome scan—and it offered less information to consumers than the “traditional” service.
For instance, the abridged research edition neither offered information about carrier status, pharmacogenomic information and ancestry, nor could the customer browse or download the raw genomic data (https://www.23andme.com/researchrevolution/compare).At a glance, it seemed that 23andMe were marketing the “research edition” as a more affordable option, owing to the fact that the consumers were being given less information and because its name implied that the data would be used for research. Granted, the company did not explicitly express this last assumption, but the term “research edition” could have easily led consumers to this conclusion. However, what is particularly troubling about the two options—“research edition” and “traditional”, presented as distinct products—is that the consent forms for both services were identical. The issue is therefore whether, by calling one option “research edition”, 23andMe made it less clear to individuals purchasing the “traditional” service that their data would also be used for research purposes.Even were we assured that consumers are at least aware of the research being conducted, we must still ask whether the companies are obtaining adequate consent compared with that required from volunteers for similar research studies? To answer this question, we considered official guidelines covering consent, public views on the topic and information gleaned from the websites of DTC genetic testing companies.Concerning public opinion, many studies show that participants who have agreed to have their tissue used for one type of research do not necessarily automatically agree to take part in other studies (Goodson & Vernon, 2004; Schwartz et al, 2001). Furthermore, in a survey of more than 1,000 patients, 72% considered it important to be notified when leftover blood taken for clinical use was to be used for research (Hull et al, 2008). Most of those patients who wanted to be notified would require the researchers to get permission for other research (Hull et al, 2008).…requesting additional information could still be understood by consumers as an additional service that they purchased and not an explicit invitation to take part in researchAlthough some of the companies in our study do mention the diseases that they might study, they are not specific and do not describe the scope of the research that will be done. Indeed, beyond the initial customer signature required to complete the purchase of the genetic testing service, it is not always clear whether the companies would ever contact consumers to obtain explicit consent for internally conducted research. That said, if they were to send out surveys or questionnaires to request supplementary phenotype information, and consumers were to fill out and return those forms, the companies might consider this as consent to research. We would argue, however, that this blurs the line between individuals as consumers and as research participants: requesting additional information could still be understood by consumers as an additional service that they purchased and not an explicit invitation to take part in research.The issue of the identifiability of genomic data is inextricably related to the issue of consent as “[p]romises of anonymity and privacy are important to a small but significant proportion of potential participants” (Andrews, 2009). 
In the study performed by Hull and colleagues, 23% of participants differentiated between scenarios where samples and data were stored anonymously or with identifiers (Hull et al, 2008). The issue of anonymity is particularly important under the US Common Rule definition of ‘human subject'' research (HHS, 2009). It dictates that research conducted using samples from people that cannot be identified is not considered human subject research and as such does not require consent. Although this rule applies only to federally funded research, it might become pertinent if companies collaborate with publicly funded institutions, such as universities. More generally, regulations such as the Common Rule and the US Food and Drug Administration''s regulations for the protection of human subjects highlight the importance of the protection of individuals in research. Research activities conducted by companies selling DTC genetic tests should therefore be similarly transparent and accountable to a regulatory body.On the basis of the information from the websites of the companies we surveyed, it is not unambiguously clear whether the data used in their research is anonymized or not. That said, 23andMe claims it will keep consumers informed of future advancements in science and might ask them for additional phenotype information, suggesting that it maintains the link between genotype data and the personal information of its customers. As such, research conducted by 23andMe could be considered to involve human subjects. Thus, if 23andMe were to comply voluntarily with the Common Rule, they would have to obtain adequate informed consent.Even in cases in which data or samples are anonymized, studies show that people do care about what happens to their sample (Hull et al, 2008; Schwartz et al, 2001). Furthermore, it is becoming more and more apparent that there are intrinsic limits to the degree of protection that can be achieved through sample and data de-identification and anonymization in genomic research (Homer et al, 2008; Lin et al, 2004; McGuire & Gibbs, 2006; P3G Consortium et al, 2009). This further weakens the adequacy of companies obtaining broad-sense consent from consumers who, most probably, are not even aware that research is being conducted.The European Society of Human Genetics (ESHG) has recently issued a statement on DTC genetic testing for health-related purposes, which states that “[t]he ESHG is concerned with the inadequate consent process through which customers are enrolled in such research. If samples or data are to be used in any research, this should be clear to consumers, and a separate and unambiguous consent procedure should take place” (ESHG, 2010). Another document was recently drafted by the UK Human Genetics Commission (HGC), entitled ‘Common Framework of Principles for Direct-to-Consumers Genetic Testing Services'' (HGC, 2009). The principles were written with the intention of promoting high standards and consistency in the DTC genetic testing market and to protect the interests of consumers and their families. 
Although this document is not finalized and the principles themselves cannot control or regulate the market in any tangible way, this framework, along with the ESHG statement, constitute the most up-to-date and exhaustive documents addressing DTC genetic testing activities.On the basis of the information from the websites of the companies we surveyed, it is not unambiguously clear whether the data used in their research is anonymized or not…companies should be completely transparent with the public about whether people purchasing their tests are consumers or research subjects or bothPrinciple 4.5 states: “If a test provider intends to use a consumer''s biological samples and/or associated personal or genetic data for research purposes, the consumer should be informed whether the research has been approved by a research ethics committee or other competent authority, whether the biological sample and data will be transferred to or kept in a biobank or database, and about measures to ensure the security of the sample. The consumer should be informed of any risks or potential benefits associated with participating in the research.” Principle 5.6 of the HGC''s draft states that a “[s]eparate informed consent should be requested by the test provider before biological samples are used for any secondary purposes, e.g research, or before any third party is permitted access to biological samples. Consumers'' biological samples and personal and genetic data should only be used for research that has been approved by a research ethics committee (REC) or other relevant competent authority.”None of the companies we surveyed reveal on their websites whether internal research protocols have been approved by a REC or by an independent “competent authority”. Furthermore, no such independent body exists that deals specifically with the research activities of commercial companies selling DTC genetic tests. Additionally, if a company did claim to have internal ethical oversight, it would be questionable whether such a committee would really have any power to veto or change the company''s research activities.Moreover, while all five companies do state what will happen to the DNA sample—in most cases, unless asked otherwise by the consumer, the DNA sample will be destroyed shortly after testing—not enough is revealed about what will happen to the data. Some companies say where data is kept and comment on the security of the website, but as mentioned previously, companies are not clear about whether data will be anonymized. Traditionally, a great deal of focus has been placed on the fate and storage of biological samples, but genome-wide testing of hundreds of thousands of individuals for thousands or even millions of SNPs generates a lot of data. This information is not equivalent, of course, to a full genome sequence, but it can fuel numerous genomic studies in the immediate and medium-term future. As such, additional issues above and beyond basic informed consent also become a concern. For instance, what will happen to the data if a company goes bankrupt or is sold? Will the participants be sent new consent forms if the nature of the company or the research project changes drastically?The activities of companies offering DTC genetic testing have not only blurred the lines between medical services and consumer products, but also between these two activities and research. As a consequence, the appropriate treatment and autonomy of individuals who purchase DTC genetic testing services could be undermined. 
Paramount to this issue is the fact that companies should be completely transparent with the public about whether people purchasing their tests are consumers or research subjects or both. Although an individual who reads through the websites of such companies might be considered a simple ‘browser' of the website, once the terms and conditions are signed—irrespective of an actual reading or comprehension—the curious consumer becomes a client and a research subject.

Companies using consumer samples and data to conduct research are in essence creating databases of information that can be mined and studied in the same way as biobanks and databases generated by academic institutions. As such, consumers who become research participants should be treated with the same respect and under the same norms as those involved in biobank research. As stated by the Organization for Economic Co-operation and Development, research should “respect the participants and be conducted in a manner that upholds human dignity, fundamental freedoms and human rights and be carried out by responsible researchers” (OECD, 2009). On the basis of our analysis of the websites of five companies offering DTC full genome testing, there is little evidence that the participation of ‘consumers' in research is fully informed.

The analysis of company websites was conducted in 2009 and early 2010. The information offered to consumers by the companies mentioned in this Outlook might have changed following the study's completion or the article's publication.

Heidi C. Howard, Pascal Borry, Bartha Maria Knoppers  相似文献

6.
L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the endless frontier, and it has become the underlying rationale for public support and funding of science.

However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned the development of measures both to assess the quality of scientific research itself and to determine its societal impact. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research. The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so, harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex's Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.

The problem actually begins with defining the ‘societal impact of research'.
A series of different concepts has been introduced: ‘third-stream activities'' [4], ‘societal benefits'' or ‘societal quality'' [5], ‘usefulness'' [6], ‘public values'' [7], ‘knowledge transfer'' [8] and ‘societal relevance'' [9, 10]. Yet, each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.In this context, ‘societal benefits'' refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits'' are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits'' benefit the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits'' increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to,for example, Thomson Reuters'' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14]. For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse or complex and contingent, and it is not clear what should be attributed to research or to other inputs. The third is the internationality problem that arises as a result of the international nature of R&D and innovation, which makes attribution virtually impossible. 
Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impact.…in many studies, the societal impact of research has been postulated rather than demonstratedIn addition, there are four other problems. First, it is hard to find experts to assess societal impact that is based on peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17]. Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution''s specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that, “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18].Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20]. The recommendation from this consultation is that impact should be measured in a quantifiable way, and expert panels should review narrative evidence in case studies supported by appropriate indicators [19,21].…premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impactMany of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact. 
The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art'' [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19]. ‘Mode 1'' describes research governed by the academic interests of a specific community, whereas ‘Mode 2'' is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is being conducted in the context of application [19].The new REF will also entail changes in budget allocations. The evaluation of a research unit for the purpose of allocations will determine 20% of the societal influence dimension [19]. The final REF guidance contains lists of examples for different types of societal impact [24].Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].Yet, the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science whereby scientists receive credits—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will follow probably—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact. “The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar'', that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. 
It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19, 28, 29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance. Rymer has noted that, “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support of basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.  相似文献

7.
8.
9.

Background:

Optimization of systolic blood pressure and lipid levels are essential for secondary prevention after ischemic stroke, but there are substantial gaps in care, which could be addressed by nurse- or pharmacist-led care. We compared 2 types of case management (active prescribing by pharmacists or nurse-led screening and feedback to primary care physicians) in addition to usual care.

Methods:

We performed a prospective randomized controlled trial involving adults with recent minor ischemic stroke or transient ischemic attack whose systolic blood pressure or lipid levels were above guideline targets. Participants in both groups had a monthly visit for 6 months with either a nurse or pharmacist. Nurses measured cardiovascular risk factors, counselled patients and faxed results to primary care physicians (active control). Pharmacists did all of the above as well as prescribed according to treatment algorithms (intervention).

Results:

Most of the 279 study participants (mean age 67.6 yr, mean systolic blood pressure 134 mm Hg, mean low-density lipoprotein [LDL] cholesterol 3.23 mmol/L) were already receiving treatment at baseline (antihypertensives: 78.1%; statins: 84.6%), but none met both guideline targets (systolic blood pressure ≤ 140 mm Hg and fasting LDL cholesterol ≤ 2.0 mmol/L). Substantial improvements were observed in both groups after 6 months: 43.4% of participants in the pharmacist case manager group met both the systolic blood pressure and LDL guideline targets, compared with 30.9% in the nurse-led group (12.5% absolute difference; number needed to treat = 8, p = 0.03).
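The number needed to treat quoted above follows directly from the two reported proportions; the short Python sketch below reproduces that arithmetic as an illustrative check only (variable names are ours, not part of the trial report):

pharmacist_rate = 0.434  # proportion meeting both targets, pharmacist case manager group
nurse_rate = 0.309       # proportion meeting both targets, nurse-led group
absolute_difference = pharmacist_rate - nurse_rate        # 0.125, i.e. 12.5 percentage points
number_needed_to_treat = round(1 / absolute_difference)   # 1 / 0.125 = 8
print(absolute_difference, number_needed_to_treat)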

Interpretation:

Compared with nurse-led case management (risk factor evaluation, counselling and feedback to primary care providers), active case management by pharmacists substantially improved risk factor control at 6 months among patients who had experienced a stroke. Trial registration: ClinicalTrials.gov, no. NCT00931788

The risk of cardiovascular events is high for patients who survive a stroke or transient ischemic attack.1,2 Treatment of hypertension and dyslipidemia can substantially reduce this risk.3–7 However, vascular risk factors are often suboptimally managed after stroke or transient ischemic attack, even among patients admitted to hospital or seen in specialized stroke prevention clinics.8–10

Multiple barriers are responsible for the suboptimal control of risk factors, and traditional means of educating practitioners and patients have limited effectiveness.11 Although it has been suggested that “case managers” may be able to improve the management of risk factors, evidence is sparse and inconsistent between studies.12–16 The most recent Cochrane review on this topic concluded that “nurse- or pharmacist-led care may be a promising way forward … but these interventions require further evaluation.”16 Thus, we designed this trial to evaluate whether a pharmacist case manager could improve risk factors among survivors of stroke or transient ischemic attack.17 Because we have previously shown that hypertension control can be improved by monthly evaluation by nurses (with patient counselling and faxing of blood pressure measurements with guideline recommendations to primary care physicians),18 and this is an alternate method of case management implemented in many health organizations, we used this approach as the active control group for this study. Thus, our study represents a controlled comparison of 2 modes of case management: active prescribing (pharmacist-led case management) versus screening and delegating to primary care physicians (nurse-led case management).  相似文献

10.
11.
The authors of “The anglerfish deception” respond to the criticism of their article.EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.70EMBO reports (2012) 13 2, 100–105; doi: 10.1038/embor.2011.254Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA''s environmental risk assessment (ERA) procedures. In our article for EMBO reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal''s false political inflation of science, which denies the normative commitments inevitable in risk assessment (RA). Unfortunately the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA''s central control over RA, they entirely miss the relevant point. This is the unacknowledged policy—normative commitments being made before, and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified.The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading—as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. They seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters, of RA policy, which should be established in a broadly deliberative manner “in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent” [1]. This was neither recognized nor permitted in the proposed EC reform—a central point that our respondents fail to recognize.In dismissing our criticism that comparative safety assessment appears as a ‘first step'' in defining ERA, according to the new EFSA ERA guidelines, which we correctly referred to in our text but incorrectly referenced in the bibliography [5], our respondents again ignore this widely accepted ‘framing'' or ‘problem formulation'' point for science. The choice of comparator has normative implications as it immediately commits to a definition of what is normal and, implicitly, acceptable. Therefore the specific form and purpose of the comparison(s) is part of the validity question. Their claim that we are against comparison as a scientific step is incorrect—of course comparison is necessary. This simply acts as a shield behind which to avoid our and others'' [6] challenge to their self-appointed discretion to define—or worse, allow applicants to define—what counts in the comparative frame. Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as ‘science''. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5]—in both the representational figure (p.11) and the text “the outcome of the comparative safety assessment allows the determination of those ‘identified'' characteristics that need to be assessed [...] and will further structure the ERA” (p.13). 
Second, despite their claims to the contrary, ‘comparative safety assessment'', effectively a resurrection of substantial equivalence, is a concept taken from consumer health RA, controversially applied to the more open-ended processes of ERA, and one that has in fact been long-discredited if used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10]. The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similar unaccountable RA steps introduced into the ERA Guidance, such as judgement of ‘biological relevance'', ‘ecological relevance'', or ‘familiarity''. We cannot address these here, but our basic point is that such endless ‘methodological'' elaborations of the kind that our EFSA colleagues perform, only obscure the institutional changes needed to properly address the normative questions for policy-engaged science.Our respondents deny our claim concerning the singular form of science the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the ‘one door, one key'' policy framework for science, deriving from the Single Market logic, which forces such singularity. While this might be legitimate policy, it is not scientific. It is political economy.Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point that the EC-proposed legislative reform would only exacerbate their problem. Ignoring the normative dimensions of regulatory science and siphoning-off scientific debate and its normative issues to a select expert panel—which despite claiming independence faces an EU Ombudsman challenge [12] and European Parliament refusal to discharge their 2010 budget, because of continuing questions over conflicts of interests [13,14]—will not achieve quality science. What is required are effective institutional mechanisms and cultural norms that identify, and deliberatively address, otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA''s sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.  相似文献   

12.
13.
14.
Greener M 《EMBO reports》2008,9(11):1067-1069
A consensus definition of life remains elusive

In July this year, the Phoenix Lander robot—launched by NASA in 2007 as part of the Phoenix mission to Mars—provided the first irrefutable proof that water exists on the Red Planet. “We've seen evidence for this water ice before in observations by the Mars Odyssey orbiter and in disappearing chunks observed by Phoenix […], but this is the first time Martian water has been touched and tasted,” commented lead scientist William Boynton from the University of Arizona, USA (NASA, 2008). The robot's discovery of water in a scooped-up soil sample increases the probability that there is, or was, life on Mars.

Meanwhile, the Darwin project, under development by the European Space Agency (ESA; Paris, France; www.esa.int/science/darwin), envisages a flotilla of four or five free-flying spacecraft to search for the chemical signatures of life in 25 to 50 planetary systems. Yet, in the vastness of space, to paraphrase the British astrophysicist Arthur Eddington (1882–1944), life might be not only stranger than we imagine, but also stranger than we can imagine. The limits of our current definitions of life raise the possibility that we would not be able to recognize an extra-terrestrial organism.

Back on Earth, molecular biologists—whether deliberately or not—are empirically tackling the question of what life is. Researchers at the J Craig Venter Institute (Rockville, MD, USA), for example, have synthesized an artificial bacterial genome (Gibson et al, 2008). Others have worked on ‘minimal cells' with the aim of synthesizing a ‘bioreactor' that contains the minimum of components necessary to be self-sustaining, reproduce and evolve. Some biologists regard these features as the hallmarks of life (Luisi, 2007). However, to decide who is first in the ‘race to create life' requires a consensus definition of life itself. “A definition of the precise boundary between complex chemistry and life will be critical in deciding which group has succeeded in what might be regarded by the public as the world's first theology practical,” commented Jamie Davies, Professor of Experimental Anatomy at the University of Edinburgh, UK.

For most biologists, defining life is a fascinating, fundamental, but largely academic question. It is, however, crucial for exobiologists looking for extra-terrestrial life on Mars, Jupiter's moon Europa, Saturn's moon Titan and on planets outside our solar system.

In their search for life, exobiologists base their working hypothesis on the only example to hand: life on Earth. “At the moment, we can only assume that life elsewhere is based on the same principles as on Earth,” said Malcolm Fridlund, Secretary for the Exo-Planet Roadmap Advisory Team at the ESA's European Space Research and Technology Centre (Noordwijk, The Netherlands). “We should, however, always remember that the universe is a peculiar place and try to interpret unexpected results in terms of new physics and chemistry.”

The ESA's Darwin mission will, therefore, search for life-related gases such as carbon dioxide, water, methane and ozone in the atmospheres of other planets. On Earth, the emergence of life altered the balance of atmospheric gases: living organisms produced all of the Earth's oxygen, which now accounts for one-fifth of the atmosphere. “If all life on Earth was extinguished, the oxygen in our atmosphere would disappear in less than 4 million years, which is a very short time as planets go—the Earth is 4.5 billion years old,” Fridlund said.
He added that organisms present in the early phases of life on Earth produced methane, which alters atmospheric composition compared with a planet devoid of life.Although the Darwin project will use a pragmatic and specific definition of life, biologists, philosophers and science-fiction authors have devised numerous other definitions—none of which are entirely satisfactory. Some are based on basic physiological characteristics: a living organism must feed, grow, metabolize, respond to stimuli and reproduce. Others invoke metabolic definitions that define a living organism as having a distinct boundary—such as a membrane—which facilitates interaction with the environment and transfers the raw materials needed to maintain its structure (Wharton, 2002). The minimal cell project, for example, defines cellular life as “the capability to display a concert of three main properties: self-maintenance (metabolism), reproduction and evolution. When these three properties are simultaneously present, we will have a full fledged cellular life” (Luisi, 2007). These concepts regard life as an emergent phenomenon arising from the interaction of non-living chemical components.Cryptobiosis—hidden life, also known as anabiosis—and bacterial endospores challenge the physiological and metabolic elements of these definitions (Wharton, 2002). When the environment changes, certain organisms are able to undergo cryptobiosis—a state in which their metabolic activity either ceases reversibly or is barely discernible. Cryptobiosis allows the larvae of the African fly Polypedilum vanderplanki to survive desiccation for up to 17 years and temperatures ranging from −270 °C (liquid helium) to 106 °C (Watanabe et al, 2002). It also allows the cysts of the brine shrimp Artemia to survive desiccation, ultraviolet radiation, extremes of temperature (Wharton, 2002) and even toyshops, which sell the cysts as ‘sea monkeys''. Organisms in a cryptobiotic state show characteristics that vary markedly from what we normally consider to be life, although they are certainly not dead. “[C]ryptobiosis is a unique state of biological organization”, commented James Clegg, from the Bodega Marine Laboratory at the University of California (Davies, CA, USA), in an article in 2001 (Clegg, 2001). Bacterial endospores, which are the “hardiest known form of life on Earth” (Nicholson et al, 2000), are able to withstand almost any environment—perhaps even interplanetary space. Microbiologists isolated endospores of strict thermophiles from cold lake sediments and revived spores from samples some 100,000 years old (Nicholson et al, 2000).…life might be not only stranger than we imagine, but also stranger than we can imagineAnother problem with the definitions of life is that these can expand beyond biology. The minimal cell project, for example, in common with most modern definitions of life, encompass the ability to undergo Darwinian evolution (Wharton, 2002). “To be considered alive, the organism needs to be able to undergo extensive genetic modification through natural selection,” said Professor Paul Freemont from Imperial College London, UK, whose research interests encompass synthetic biology. But the virtual ‘organisms'' in computer simulations such as the Game of Life (www.bitstorm.org/gameoflife) and Tierra (http://life.ou.edu/tierra) also exhibit life-like characteristics, including growth, death and evolution—similar to robots and other artifical systems that attempt to mimic life (Guruprasad & Sekar, 2006). 
“At the moment, we have some problems differentiating these approaches from something biologists consider [to be] alive,” Fridlund commented.…to decide who is first in the ‘race to create life'' requires a consensus definition of lifeBoth the genetic code and all computer-programming languages are means of communicating large quantities of codified information, which adds another element to a comprehensive definition of life. Guenther Witzany, an Austrian philosopher, has developed a “theory of communicative nature” that, he claims, differentiates biotic and abiotic life. “Life is distinguished from non-living matter by language and communication,” Witzany said. According to his theory, RNA and DNA use a ‘molecular syntax'' to make sense of the genetic code in a manner similar to language. This paragraph, for example, could contain the same words in a random order; it would be meaningless without syntactic and semantic rules. “The RNA/DNA language follows syntactic, semantic and pragmatic rules which are absent in [a] random-like mixture of nucleic acids,” Witzany explained.Yet, successful communication requires both a speaker using the rules and a listener who is aware of and can understand the syntax and semantics. For example, cells, tissues, organs and organisms communicate with each other to coordinate and organize their activities; in other words, they exchange signals that contain meaning. Noradrenaline binding to a β-adrenergic receptor in the bronchi communicates a signal that says ‘dilate''. “If communication processes are deformed, destroyed or otherwise incorrectly mediated, both coordination and organisation of cellular life is damaged or disturbed, which can lead to disease,” Witzany added. “Cellular life also interprets abiotic environmental circumstances—such as the availability of nutrients, temperature and so on—to generate appropriate behaviour.”Nonetheless, even definitions of life that include all the elements mentioned so far might still be incomplete. “One can make a very complex definition that covers life on the Earth, but what if we find life elsewhere and it is different? My opinion, shared by many, is that we don''t have a clue of how life arose on Earth, even if there are some hypotheses,” Fridlund said. “This underlies many of our problems defining life. Since we do not have a good minimum definition of life, it is hard or impossible to find out how life arose without observing the process. Nevertheless, I''m an optimist who believes the universe is understandable with some hard work and I think we will understand these issues one day.”Both synthetic biology and research on organisms that live in extreme conditions allow biologists to explore biological boundaries, which might help them to reach a consensual minimum definition of life, and understand how it arose and evolved. Life is certainly able to flourish in some remarkably hostile environments. Thermus aquaticus, for example, is metabolically optimal in the springs of Yellowstone National Park at temperatures between 75 °C and 80 °C. Another extremophile, Deinococcus radiodurans, has evolved a highly efficient biphasic system to repair radiation-induced DNA breaks (Misra et al, 2006) and, as Fridlund noted, “is remarkably resistant to gamma radiation and even lives in the cooling ponds of nuclear reactors.”In turn, synthetic biology allows for a detailed examination of the elements that define life, including the minimum set of genes required to create a living organism. 
Researchers at the J Craig Venter Institute, for example, have synthesized a 582,970-base-pair Mycoplasma genitalium genome containing all the genes of the wild-type bacteria, except one that they disrupted to block pathogenicity and allow for selection. ‘Watermarks'' at intergenic sites that tolerate transposon insertions identify the synthetic genome, which would otherwise be indistinguishable from the wild type (Gibson et al, 2008).Yet, as Pier Luigi Luisi from the University of Roma in Italy remarked, even M. genitalium is relatively complex. “The question is whether such complexity is necessary for cellular life, or whether, instead, cellular life could, in principle, also be possible with a much lower number of molecular components”, he said. After all, life probably did not start with cells that already contained thousands of genes (Luisi, 2007).…researchers will continue their attempts to create life in the test tube—it is, after all, one of the greatest scientific challengesTo investigate further the minimum number of genes required for life, researchers are using minimal cell models: synthetic genomes that can be included in liposomes, which themselves show some life-like characteristics. Certain lipid vesicles are able to grow, divide and grow again, and can include polymerase enzymes to synthesize RNA from external substrates as well as functional translation apparatuses, including ribosomes (Deamer, 2005).However, the requirement that an organism be subject to natural selection to be considered alive could prove to be a major hurdle for current attempts to create life. As Freemont commented: “Synthetic biologists could include the components that go into a cell and create an organism [that is] indistinguishable from one that evolved naturally and that can replicate […] We are beginning to get to grips with what makes the cell work. Including an element that undergoes natural selection is proving more intractable.”John Dupré, Professor of Philosophy of Science and Director of the Economic and Social Research Council (ESRC) Centre for Genomics in Society at the University of Exeter, UK, commented that synthetic biologists still approach the construction of a minimal organism with certain preconceptions. “All synthetic biology research assumes certain things about life and what it is, and any claims to have ‘confirmed'' certain intuitions—such as life is not a vital principle—aren''t really adding empirical evidence for those intuitions. Anyone with the opposite intuition may simply refuse to admit that the objects in question are living,” he said. “To the extent that synthetic biology is able to draw a clear line between life and non-life, this is only possible in relation to defining concepts brought to the research. For example, synthetic biologists may be able to determine the number of genes required for minimal function. Nevertheless, ‘what counts as life'' is unaffected by minimal genomics.”Partly because of these preconceptions, Dan Nicholson, a former molecular biologist now working at the ESRC Centre, commented that synthetic biology adds little to the understanding of life already gained from molecular biology and biochemistry. Nevertheless, he said, synthetic biology might allow us to go boldly into the realms of biological possibility where evolution has not gone before.An engineered synthetic organism could, for example, express novel amino acids, proteins, nucleic acids or vesicular forms. 
A synthetic organism could use pyranosyl-RNA, which produces a stronger and more selective pairing system than the natural existent furanosyl-RNA (Bolli et al, 1997). Furthermore, the synthesis of proteins that do not exist in nature—so-called never-born proteins—could help scientists to understand why evolutionary pressures only selected certain structures.As Luisi remarked, the ratio between the number of theoretically possible proteins containing 100 amino acids and the real number present in nature is close to the ratio between the space of the universe and the space of a single hydrogen atom, or the ratio between all the sand in the Sahara Desert and a single grain. Exploring never-born proteins could, therefore, allow synthetic biologists to determine whether particular physical, structural, catalytic, thermodynamic and other properties maximized the evolutionary fitness of natural proteins, or whether the current protein repertoire is predominately the result of chance (Luisi, 2007).In the final analysis, as with all science, deep understanding is more important than labelling with words.“Synthetic biology also could conceivably help overcome the ‘n = 1 problem''—namely, that we base biological theorising on terrestrial life only,” Nicholson said. “In this way, synthetic biology could contribute to the development of a more general, broader understanding of what life is and how it might be defined.”No matter the uncertainties, researchers will continue their attempts to create life in the test tube—it is, after all, one of the greatest scientific challenges. Whether or not they succeed will depend partly on the definition of life that they use, though in any case, the research should yield numerous insights that are beneficial to biologists generally. “The process of creating a living system from chemical components will undoubtedly offer many rich insights into biology,” Davies concluded. “However, the definition will, I fear, reflect politics more than biology. Any definition will, therefore, be subject to a lot of inter-lab political pressure. Definitions are also important for bioethical legislation and, as a result, reflect larger politics more than biology. In the final analysis, as with all science, deep understanding is more important than labelling with words.”  相似文献   

15.
16.
The debate about GM crops in Europe holds valuable lessons about risk management and risk communication. These lessons will be helpful for the upcoming debate on GM animals.Biomedical research and biotechnology have grown enormously in the past decades, as nations have heavily invested time and money in these endeavours to reap the benefits of the so-called ‘bioeconomy''. Higher investments on research should increase knowledge, which is expected to translate into applied research and eventually give rise to new products and services that are of economic or social benefit. Many governments have developed ambitious strategies—both economic and political—to accelerate this process and fuel economic growth (http://www.oecd.org/futures/bioeconomy/2030). However, it turns out that social attitudes are a more important factor for translating scientific advances than previously realized; public resistance can effectively slow down or even halt technological progress, and some hoped-for developments have hit roadblocks. Addressing these difficulties has become a major challenge for policy-makers, who have to find the middle ground between promoting innovation and addressing ethical and cultural values.There are many examples of how scientific and technological advances raise broad societal concerns: research that uses human embryonic stem cells, nanotechnology, cloning and genetically modified (GM) organisms are perhaps the most contested ones. The prime example of a promising technology that has failed to reach its full potential owing to ethical, cultural and societal concerns is GM organisms (GMOs); specifically, GM crops. Intense lobbying and communication by ‘anti-GM'' groups, combined with poor public relations from industry and scientists, has turned consumers against GM crops and has largely hampered the application of this technology in most European countries. Despite this negative outcome, however, the decade-long debate has provided important lessons and insight for the management of other controversial technologies: in particular, the use of GM animals.During the early 1990s, ‘anti-GM'' non-governmental organizations (NGOs) and ‘pro-GM'' industry were the main culprits for the irreversible polarization of the GMO debate. Both groups lobbied policy-makers and politicians, but NGOs ultimately proved better at persuading the public, a crucial player in the debate. Nevertheless, the level of public outcry varied significantly, reaching its peak in the European Union (EU). In addition to the values of citizens and effective campaigning by NGOs, the structural organization of the EU had a crucial role in triggering the GMO crisis. Within the EU, the European Commission (EC) is an administrative body the decisions of which have a legal impact on the 27 Member States. The EC is well-aware of its unique position and has compensated its lack of democratic accountability by increasing transparency and making itself accessible to the third sector [1]. This strategy was an important factor in the GMO debate as the EC was willing to listen to the views of environmental groups and consumer organizations.…it turns out that social attitudes are a more important factor for translating scientific advances than previously realized…Environmental NGOs successfully exploited this gap between the European electorate and the EC, and assumed to speak as the vox populi in debates. At the same time, politicians in EU Member States were faced with aggressive anti-GMO campaigns and increasingly polarized debates. 
To avoid the lobbying pressure and alleviate public concerns, they chose to hide behind science: the result was a proliferation of ‘scientific committees’ charged with assessing the health and environmental risks of GM crops.

Scientists soon realized that their so-called ‘expert consultation’ was only a political smoke screen in most cases. Their reports and advice were used as arguments to justify policies—rather than tools for determining policy—that sometimes ignored the actual evidence and scientific results [2,3]. For example, in 2008, French President Nicolas Sarkozy announced that he would not authorize GM pest-resistant MON810 maize for cultivation in France if ‘the experts’ had any concerns over its safety. However, although the scientific committee appointed to assess MON810 concluded that the maize was safe for cultivation, the government’s version of the report eventually claimed that scientists had “serious doubts” about MON810 safety, which was then used as an argument to ban its cultivation. François Hollande’s government has adopted a similar strategy to maintain the ban on MON810 [4].

Such unilateral decisions by Member States challenged the EC’s authority to approve the cultivation of GM crops in the EU. After intense discussions, the EC and the Member States agreed on a centralized procedure for the approval of GMOs and the distribution of responsibilities for the three stages of the risk management process: risk assessment, risk management and risk communication (Fig 1). The European Food Safety Authority (EFSA) alone would be responsible for carrying out risk assessment, whilst the Member States would deal with risk management through the standard EU comitology procedure, by which policy-makers from Member States reach consensus on existing laws. Finally, both the EC and Member States committed to engage with European citizens in an attempt to gain credibility and promote transparency.

Figure 1: Risk assessment and risk management for GM crops in the EU. The new process for GM crop approval under Regulation (EC) No. 1829/2003, which defines the responsibilities for risk assessment and risk management. EC, European Community; EU, European Union; GM, genetically modified.

More than 20 years after this debate began, the claims made both for and against GM crops have failed to materialize. GMOs have neither reduced world hunger nor destroyed entire ecosystems nor poisoned humankind, even after widespread cultivation. Most of the negative effects have occurred in international food trade [5], partly owing to a lack of harmonization in international governance. More importantly, given that the EU is the largest commodity market in the world, these effects are largely caused by the EU’s chronic resistance to GM crops. The agreed centralized procedure has not been implemented satisfactorily and the blame is laid at the door of risk management (http://ec.europa.eu/food/food/biotechnology/evaluation/index_en.htm). Indeed, the 27 Member States have never reached a consensus on GM crops, making GMO approval the only non-functional comitology procedure in the EU [2].
Moreover, even after a GM crop was approved, some Member States refused to allow its cultivation, which prompted the USA, Canada and Argentina to file a dispute at the World Trade Organization (WTO) against the EU.

The inability to reach agreement through the comitology procedure has forced the EC to make the final decision for all GMO applications. Given that the EC is an administrative body with no scientific expertise, it has relied heavily on EFSA’s opinion. This has created a peculiar situation in which the EFSA performs both risk assessment and management. Anti-GM groups have therefore focused their efforts on discrediting the EFSA as an expert body. Faced with regular questions related to agricultural management or globalization, EFSA scientists are forced to respond to issues that are more linked to risk management than risk assessment [5]. By repeatedly mixing socio-economic and cultural values with scientific opinions, NGOs have questioned the expertise of EFSA scientists and portrayed them as having vested interests in GMOs.

Nevertheless, there is no doubt that science has accumulated enormous knowledge on GM crops, which are the most studied crops in human history [6]. In the EU alone, about 270 million euros have been spent through the Framework Programme to study health and environmental risks [5]. Framework Programme funding is approved by Member State consensus and benefits have never been on the agenda of these studies. Despite this bias in funding, the results show that GM crops do not pose a greater threat to human health and the environment than traditional crops [5,6,7]. In addition, scientists have reached international consensus on the methodology to perform risk assessment of GMOs under the umbrella of the Codex Alimentarius [8]. One might therefore conclude that the scientific risk assessment is solid and, contrary to the views of NGOs, that science has done its homework. However, attention still remains fixed on risk assessment in an attempt to fix risk management. But what about the third stage? Have the EC and Member States done their homework on risk communication?

It is generally accepted that risk management in food safety crucially depends on efficient risk communication [9]. However, risk communication has remained the stepchild of the three risk management stages [6]. A review of the GM Food/Feed Regulations noted that public communication by EU authorities had been sparse and sometimes inconsistent between the EC and Member States. Similarly, a review of the EC Directive for the release of GMOs to the environment described the information provided to the public as inadequate because it is highly technical and only published in English (http://ec.europa.eu/food/food/biotechnology/evaluation/index_en.htm). Accordingly, it is not surprising that EU citizens remain averse to GMOs. Moreover, a Eurobarometer poll lists GMOs as one of the top five environmental issues for which EU citizens feel they lack sufficient information [10]. Despite the overwhelming proliferation of scientific evidence, politicians and policy-makers have ignored the most important stakeholder: society. Indeed, the reviews mentioned above recommend that the EC and Member States should improve their risk communication activities.

What have we learned from the experience? Is it prudent and realistic to gauge the public’s views on a new technology before it is put into use? Can we move towards a bioeconomy and continue to ignore society?
To address these questions, we focus on GM animals, as these organisms are beginning to reach the market, raise many similar issues to GM plants and thus have the potential to re-open the GM debate. GM animals, if brought into use, will involve a similar range and distribution of stakeholders in the EU, with two significant differences: animal welfare organizations will probably take the lead over environmental NGOs on the anti-GM side, and the breeding industry is far more cautious in adopting GM animals than the plant seed industry was in adopting GM crops [11].

GloFish®—a GM fish that glows when illuminated with UV light and is being sold as a novelty pet—serves as an illustrative example. GloFish® was the first GM animal to reach the market and, more importantly, did so without any negative media coverage. It is also a controversial application of GM technology, as animal welfare organizations and scientists alike consider it a frivolous use of GM, describing it as “complete nonsense” [18]. The GloFish® is not allowed in the EU, but it is commercially available throughout the USA, except in California. One might imagine that consumers in general would not be that interested in GloFish®, as research indicates that consumer acceptance of a new product is usually higher when there are clear perceived benefits [13,14]. It is difficult to imagine the benefit of GloFish® beyond its novelty, and yet it has been found illegally in the Netherlands, Germany and the UK [15]. This highlights the futility of predicting the public’s views without consulting them.

Consumer attitudes and behaviour—including with regard to GMOs—are complex and change over time [13,14]. Over the past years, the perception of the public held by academia and governments has moved away from portraying consumers as ‘victims’ of industry towards recognizing them as an important factor for change. Still, such arguments put citizens at the end of the production chain, where they can only exert their influence by choosing to buy or to ignore certain products. Indeed, one of the strongest arguments against GM crops has been that the public never asked for them in the first place.

With GM animals, the use of recombinant DNA technologies in animal breeding would rekindle an old battle between animal welfare organizations and the meat industry. Animal welfare organizations claim that European consumers demand better treatment for farm animals, whilst industry maintains that price remains one of the most important factors for consumers [12]. Both sides have facts to support their claims: animal welfare issues take a prominent role in the political agenda and animal welfare organizations are growing in both number and influence; industry can demonstrate a competitive disadvantage relative to countries in which animal welfare regulations are more relaxed and prices are lower, such as Argentina. However, the public is absent in this debate.

Consumers have been described as wearing two hats: one that supports animal welfare and one that looks at the price ticket at the supermarket [16]. This situation has an impact on the breeding of livestock and the meat industry, which sees consumer prices decreasing whilst production costs increase.
This trend is believed to reflect the increasing detachment of consumers from the food production chain [17]. Higher demands on animal welfare standards, environmental protection and competition from international meat producers all influence the final price of meat. To remain competitive, the meat industry has to increase production per unit; it can therefore be argued that one of the main impetuses to develop GM animals was created by the behaviour—not the beliefs—of consumers. This second example illustrates once again that society cannot be ignored when discussing any strategy to move towards the bioeconomy.

In conclusion, we believe that functional risk management requires all three components, including risk communication. For applications of biotechnology, a disproportionate amount of emphasis has been placed on risk assessment. The result is that the GMO debate has been framed as black and white, as either safe or unsafe, leaving policy-makers with the difficult task of educating the public about the many shades of grey. However, there is a wide range of issues that a citizen will want to take into account when deciding about GM, and not all of them can be answered by science. Citizens might trust what scientists say, but “when scientists and politicians are brought together, we may well not trust that the quality of science will remain intact” [18]. Reducing the debate to scientific matters gives a free pass to the misuse of science and has a negative impact on science itself. Whilst scientists publishing pro-GM results have been attacked by NGOs, scientific publications that highlighted potential risks of GM crops came under disproportionate attacks from the scientific community [19].

Flexible governance and context need to work hand-in-hand if investments in biotechnology are ultimately to benefit society. The EU’s obsession with assessing risk and side-lining benefits has not facilitated an open dialogue. The GMO experience has also shown that science cannot provide all the answers. Democratically elected governments should therefore take the lead in communicating the risks and benefits of technological advances to their electorate, and should discuss what the bioeconomy really means and the role of new technologies, including GMOs. We need to move the spotlight away from the science alone to take in the bigger picture. Ultimately, do consumers feel that paying a few extra cents for a dozen eggs is worth it if they know the chicken is happy, whether it is so-called ‘natural’ or GM?

Núria Vàzquez-Salat, Louis-Marie Houdebine  相似文献

17.
Geoffrey Miller 《EMBO reports》2012,13(10):880-884
Runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals.

Sex and marketing have been coupled for a very long time. At the cultural level, their relationship has been appreciated since the 1960s ‘Mad Men’ era, when the sexual revolution coincided with the golden age of advertising, and marketers realized that ‘sex sells’. At the biological level, their interplay goes much further back to the Cambrian explosion around 530 million years ago. During this period of rapid evolutionary expansion, multicellular organisms began to evolve elaborate sexual ornaments to advertise their genetic quality to the most important consumers of all in the great mating market of life: the opposite sex.

Maintaining the genetic quality of one’s offspring had already been a problem for billions of years. Ever since life originated around 3.7 billion years ago, RNA and DNA have been under selection to copy themselves as accurately as possible [1]. Yet perfect self-replication is biochemically impossible, and almost all replication errors are harmful rather than helpful [2]. Thus, mutations have been eroding the genomic stability of single-celled organisms for trillions of generations, and countless lineages of asexual organisms have suffered extinction through mutational meltdown—the runaway accumulation of copying errors [3]. Only through wildly profligate self-cloning could such organisms have any hope of leaving at least a few offspring with no new harmful mutations, so they could best survive and reproduce.

Around 1.5 billion years ago, bacteria evolved the most basic form of sex to minimize mutation load: bacterial conjugation [4]. By swapping bits of DNA across the pilus (a tiny intercellular bridge) a bacterium can replace DNA sequences compromised by copying errors with intact sequences from its peers. Bacteria finally had some defence against mutational meltdown, and they thrived and diversified.

Then, with the evolution of genuine sexual reproduction through meiosis, perhaps around 1.2 billion years ago, eukaryotes made a great advance in their ability to purge mutations. By combining their genes with a mate’s genes, they could produce progeny with huge genetic variety—and crucially with a wider range of mutation loads [5]. The unlucky offspring who happened to inherit an above-average number of harmful mutations from both parents would die young without reproducing, taking many mutations into oblivion with them. The lucky offspring who happened to inherit a below-average number of mutations from both parents would live long, prosper and produce offspring of higher genetic quality. Sexual recombination also made it easier to spread and combine the rare mutations that happened to be useful, opening the way for much faster evolutionary advances [6]. Sex became the foundation of almost all complex life because it was so good at both short-term damage limitation (purging bad mutations) and long-term innovation (spreading good mutations).

Yet, single-celled organisms always had a problem with sex: they were not very good at choosing sexual partners with the best genes, that is, the lowest mutation loads.
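The purging argument above is essentially statistical: free recombination lets low-load genotypes be re-created from mutation-carrying parents, whereas a clonal lineage can never fall below its ancestor's load. The toy simulation below is a minimal sketch, not taken from the article; the population size, mutation rate and per-mutation fitness cost are illustrative assumptions, and each individual is summarized only by its count of harmful mutations, assuming every new mutation hits a fresh site and loci recombine freely.

    import numpy as np

    # Toy model of mutation accumulation (illustrative parameters, not from the article).
    # Fitness is multiplicative: w = (1 - S) ** load.
    rng = np.random.default_rng(0)
    N, U, S, GENERATIONS = 1000, 0.5, 0.05, 300   # pop size, new mutations/offspring, cost per mutation

    def next_generation(loads, sexual):
        # Selection: parents are sampled with probability proportional to fitness.
        fitness = (1.0 - S) ** loads
        p = fitness / fitness.sum()
        if sexual:
            mothers = rng.choice(loads, size=N, p=p)
            fathers = rng.choice(loads, size=N, p=p)
            # Free recombination: each parent transmits each mutation with probability 1/2,
            # so offspring can carry fewer mutations than either parent.
            inherited = rng.binomial(mothers, 0.5) + rng.binomial(fathers, 0.5)
        else:
            # Clonal inheritance: every parental mutation is passed on, so the
            # minimum load in the lineage can only stay the same or increase.
            inherited = rng.choice(loads, size=N, p=p)
        return inherited + rng.poisson(U, size=N)   # fresh copying errors each generation

    sexual_pop = np.zeros(N, dtype=int)
    asexual_pop = np.zeros(N, dtype=int)
    for g in range(1, GENERATIONS + 1):
        sexual_pop = next_generation(sexual_pop, sexual=True)
        asexual_pop = next_generation(asexual_pop, sexual=False)
        if g % 50 == 0:
            print(f"gen {g:3d}: mean load sexual {sexual_pop.mean():5.1f} | "
                  f"asexual {asexual_pop.mean():5.1f} (asexual min {asexual_pop.min()})")

Under these assumptions the sexual population's mean load settles near a mutation-selection balance, whereas the clonal population's minimum load can only ratchet upwards as copying errors accumulate, which is why picking a partner with a low mutation load matters in the first place.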
Given bacterial capabilities for chemical communication such as quorum-sensing [7], perhaps some prokaryotes and eukaryotes paid attention to short-range chemical cues of genetic quality before swapping genes. However, mating was mainly random before the evolution of longer-range senses and nervous systems.

All of this changed profoundly with the Cambrian explosion, which saw organisms undergoing a genetic revolution that increased the complexity of gene regulatory networks, and a morphological revolution that increased the diversity of multicellular body plans. It was also a neurological and psychological revolution. As organisms became increasingly mobile, they evolved senses such as vision [8] and more complex nervous systems [9] to find food and evade predators. However, these new senses also empowered a sexual revolution, as they gave animals new tools for choosing sexual partners. Rather than hooking up randomly with the nearest mate, animals could now select mates based on visible cues of genetic quality such as body size, energy level, bright coloration and behavioural competence. By choosing the highest quality mates, they could produce higher quality offspring with lower mutation loads [10]. Such mate choice imposed selection on all of those quality cues to become larger, brighter and more conspicuous, amplifying them into true sexual ornaments: biological luxury goods such as the guppy’s tail and the peacock’s train that function mainly to impress and attract females [11]. These sexual ornaments evolved to have a complex genetic architecture, to capture a larger share of the genetic variation across individuals and to reveal mutation load more accurately [12].

Ever since the Cambrian, the mating market for sexually reproducing animal species has been transformed to some degree into a consumerist fantasy world of conspicuous quality, status, fashion, beauty and romance. Individuals advertise their genetic quality and phenotypic condition through reliable, hard-to-fake signals or ‘fitness indicators’ such as pheromones, songs, ornaments and foreplay. Mates are chosen on the basis of who displays the largest, costliest, most precise, most popular and most salient fitness indicators. Mate choice for fitness indicators is not restricted to females choosing males, but often occurs in both sexes [13], especially in socially monogamous species with mutual mate choice such as humans [14].

Thus, for 500 million years, animals have had to straddle two worlds in perpetual tension: natural selection and sexual selection. Each type of selection works through different evolutionary principles and dynamics, and each yields different types of adaptation and biodiversity. Neither fully dominates the other, because sexual attractiveness without survival is a short-lived vanity, whereas ecological competence without reproduction is a long-lived sterility. Natural selection shapes species to fit their geographical habitats and ecological niches, and favours efficiency in growth, foraging, parasite resistance, predator evasion and social competition. Sexual selection shapes each sex to fit the needs, desires and whims of the other sex, and favours conspicuous extravagance in all sorts of fitness indicators. Animal life walks a fine line between efficiency and opulence. More than 130,000 plant species also play the sexual ornamentation game, having evolved flowers to attract pollinators [15].

The sexual selection world challenges the popular misconception that evolution is blind and dumb.
In fact, as Darwin emphasized, sexual selection is often perceptive and clever, because animal senses and brains mediate mate choice. This makes sexual selection closer in spirit to artificial selection, which is governed by the senses and brains of human breeders. In so far as sexual selection shaped human bodies, minds and morals, we were also shaped by intelligent designers—who just happened to be romantic hominids rather than fictional gods [16].

Thus, mate choice for genetic quality is analogous in many ways to consumer choice for brand quality [17]. Mate choice and consumer choice are both semi-conscious—partly instinctive, partly learned through trial and error and partly influenced by observing the choices made by others. Both are partly focused on the objective qualities and useful features of the available options, and partly focused on their arbitrary, aesthetic and fashionable aspects. Both create the demand that suppliers try to understand and fulfil, with each sex striving to learn the mating preferences of the other, and marketers striving to understand consumer preferences through surveys, focus groups and social media data mining.

Mate choice and consumer choice can both yield absurdly wasteful outcomes: a huge diversity of useless, superficial variations in the biodiversity of species and the economic diversity of brands, products and packaging. Most biodiversity seems to be driven by sexual selection favouring whimsical differences across populations in the arbitrary details of fitness indicators, not just by naturally selected adaptation to different ecological niches [18]. The result is that within each genus, a species can be most easily identified by its distinct mating calls, sexual ornaments, courtship behaviours and genital morphologies [19], not by different foraging tactics or anti-predator defences. Similarly, much of the diversity in consumer products—such as shirts, cars, colleges or mutual funds—is at the level of arbitrary design details, branding, packaging and advertising, not at the level of objective product features and functionality.

These analogies between sex and marketing run deep, because both depend on reliable signals of quality. Until recently, two traditions of signalling theory developed independently in the biological and social sciences. The first landmark in biological signalling theory was Charles Darwin’s analysis of mate choice for sexual ornaments as cues of good fitness and fertility in his book, The Descent of Man, and Selection in Relation to Sex (1871). Ronald Fisher analysed the evolution of mate preferences for fitness indicators in 1915 [20]. Amotz Zahavi proposed the ‘handicap principle’, arguing that only costly signals could be reliable, hard-to-fake indicators of genetic quality or phenotypic condition in 1975 [21]. Richard Dawkins and John Krebs applied game theory to analyse the reliability of animal signals, and the co-evolution of signallers and receivers in 1978 [22]. In 1990, Alan Grafen eventually proposed a formal model of the ‘handicap principle’ [23], and Richard Michod and Oren Hasson analysed ‘reliable indicators of fitness’ [24].
Since then, biological signalling theory has flourished and has informed research on sexual selection, animal communication and social behaviour.

The parallel tradition of signalling theory in the social sciences and philosophy goes back to Aristotle, who argued that ethical and rational acts are reliable signals of underlying moral and cognitive virtues (ca 350–322 BC). Friedrich Nietzsche analysed beauty, creativity, morality and even cognition as expressions of biological vigour by using signalling logic (1872–1888). Thorstein Veblen proposed that conspicuous luxuries, quality workmanship and educational credentials act as reliable signals of wealth, effort and taste in The Theory of the Leisure Class (1899), The Instinct of Workmanship (1914) and The Higher Learning in America (1922). Vance Packard used signalling logic to analyse social class, runaway consumerism and corporate careerism in The Status Seekers (1959), The Waste Makers (1960) and The Pyramid Climbers (1962), and Ernst Gombrich analysed beauty in art as a reliable signal of the artist’s skill and effort in Art and Illusion (1977) and A Sense of Order (1979). Michael Spence developed formal models of educational credentials as reliable signals of capability and conscientiousness in Market Signalling (1974). Robert Frank used signalling logic to analyse job titles, emotions, career ambitions and consumer luxuries in Choosing the Right Pond (1985), Passions within Reason (1988), The Winner-Take-All-Society (1995) and Luxury Fever (2000).

Evolutionary psychology and evolutionary anthropology have been integrating these two traditions to better understand many puzzles in human evolution that defy explanation in terms of natural selection for survival. For example, signalling theory has illuminated the origins and functions of facial beauty, female breasts and buttocks, body ornamentation, clothing, big game hunting, hand-axes, art, music, humour, poetry, story-telling, courtship gifts, charity, moral virtues, leadership, status-seeking, risk-taking, sports, religion, political ideologies, personality traits, adaptive self-deception and consumer behaviour [16,17,25,26,27,28,29].

Building on signalling theory and sexual selection theory, the new science of evolutionary consumer psychology [30] has been making big advances in understanding consumer goods as reliable signals—not just signals of monetary wealth and elite taste, but signals of deeper traits such as intelligence, moral virtues, mating strategies and the ‘Big Five’ personality traits: openness, conscientiousness, agreeableness, extraversion and emotional stability [17]. These individual traits are deeper than wealth and taste in several ways: they are found in the other great apes, are heritable across generations, are stable across life, are important in all cultures and are naturally salient when interacting with mates, friends and kin [17,27,31]. For example, consumers seek elite university degrees as signals of intelligence; they buy organic fair-trade foods as signals of agreeableness; and they value foreign travel and avant-garde culture as signals of openness [17].
New molecular genetics research suggests that mutation load accounts for much of the heritable variation in human intelligence [32] and personality [33], so consumerist signals of these traits might be revealing genetic quality indirectly. If so, conspicuous consumption can be seen as just another ‘good-genes indicator’ favoured by mate choice.

Indeed, studies suggest that much conspicuous consumption, especially by young single people, functions as some form of mating effort. After men and women think about potential dates with attractive mates, men say they would spend more money on conspicuous luxury goods such as prestige watches, whereas women say they would spend more time doing conspicuous charity activities such as volunteering at a children’s hospital [34]. Conspicuous consumption by males reveals that they are pursuing a short-term mating strategy [35], and this activity is most attractive to women at peak fertility near ovulation [36]. Men give much higher tips to lap dancers who are ovulating [37]. Ovulating women choose sexier and more revealing clothes, shoes and fashion accessories [38]. Men living in towns with a scarcity of women compete harder to acquire luxuries and accumulate more consumer debt [39]. Romantic gift-giving is an important tactic in human courtship and mate retention, especially for men who might be signalling commitment [40]. Green consumerism—preferring eco-friendly products—is an effective form of conspicuous conservation, signalling both status and altruism [41].

Findings such as these challenge traditional assumptions in economics. For example, ever since the Marginal Revolution—the development of economic theory during the 1870s—mainstream economics has made the ‘Rational Man’ assumption that consumers maximize their expected utility from their product choices, without reference to what other consumers are doing or desiring. This assumption was convenient both analytically—as it allowed easier mathematical modelling of markets and price equilibria—and ideologically in legitimizing free markets and luxury goods. However, new research from evolutionary consumer psychology and behavioural economics shows that consumers often desire ‘positional goods’ such as prestige-branded luxuries that signal social position and status through their relative cost, exclusivity and rarity. Positional goods create ‘positional externalities’—the harmful social side-effects of runaway status-seeking and consumption arms races [42].

These positional externalities are important because they undermine the most important theoretical justification for free markets—the first fundamental theorem of welfare economics, a formalization of Adam Smith’s ‘invisible hand’ argument, which says that competitive markets always lead to efficient distributions of resources. In the 1930s, the British Marxist biologists Julian Huxley and J.B.S. Haldane were already wary of such rationales for capitalism, and understood that runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals [16].
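The positional-externality argument can be made concrete with a toy best-response game; this is a minimal sketch under illustrative assumptions, and the quadratic payoff function and parameter values are inventions for illustration rather than anything taken from the article or the cited literature. When status depends only on spending relative to others, each consumer's individually rational escalation leaves relative standing unchanged while burning real resources, so equilibrium welfare is lower than under mutual restraint.

    import numpy as np

    # Toy positional-goods game: n consumers each choose conspicuous spending x_i.
    # Utility = status term that depends only on spending relative to the others,
    # minus a private convex cost of the money spent:
    #     u_i = a * (x_i - mean of the others' spending) - x_i**2 / 2
    # The status terms sum to zero across consumers, so any spending is a pure
    # social loss -- the positional externality.
    n, a = 10, 3.0                                   # illustrative values

    def utilities(x):
        others_mean = (x.sum() - x) / (n - 1)        # mean spending of everyone else
        return a * (x - others_mean) - 0.5 * x ** 2

    # Best-response dynamics: each consumer repeatedly picks the spending level
    # (from a grid) that maximizes her own utility, holding the others fixed.
    grid = np.linspace(0.0, 10.0, 1001)
    x = np.zeros(n)
    for _ in range(20):
        for i in range(n):
            others_mean = (x.sum() - x[i]) / (n - 1)
            payoff = a * (grid - others_mean) - 0.5 * grid ** 2
            x[i] = grid[np.argmax(payoff)]

    print("equilibrium spending per consumer:", x.round(2))           # escalates to a
    print("welfare at the arms-race equilibrium:", round(utilities(x).sum(), 2))
    print("welfare if everyone agreed to spend nothing:", utilities(np.zeros(n)).sum())

Under these assumed payoffs every consumer ends up spending a = 3 units, nobody gains any status, and total welfare falls from 0 to -45; this is the kind of inefficiency that the first welfare theorem rules out only when such externalities are absent.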
Evidence shows that consumerist status-seeking leads to economic inefficiencies and costs to human welfare [42]. Runaway consumerism might be one predictable result of a human nature shaped by sexual selection, but we can display desirable traits in many other ways, such as green consumerism, conspicuous charity, ethical investment and through social media such as Facebook [17,43].

Future work in evolutionary consumer psychology should give further insights into the links between sex, mutations, evolution and marketing. These links have been important for at least 500 million years and probably sparked the evolution of human intelligence, language, creativity, beauty, morality and ideology. A better understanding of these links could help us nudge global consumerist capitalism into a more sustainable form that imposes lower costs on the biosphere and yields higher benefits for future generations.

Geoffrey Miller  相似文献

18.
Elucidating the temporal order of silencing   (Times cited: 1; self-citations: 0; citations by others: 1)
Izaurralde E 《EMBO reports》2012,13(8):662-663
  相似文献   

19.
Li Y  Zheng H  Witt CM  Roll S  Yu SG  Yan J  Sun GJ  Zhao L  Huang WJ  Chang XR  Zhang HX  Wang DJ  Lan L  Zou R  Liang FR 《CMAJ》2012,184(4):401-410

Background:

Acupuncture is commonly used to treat migraine. We assessed the efficacy of acupuncture at migraine-specific acupuncture points compared with other acupuncture points and sham acupuncture.

Methods:

We performed a multicentre, single-blind randomized controlled trial. In total, 480 patients with migraine were randomly assigned to one of four groups (Shaoyang-specific acupuncture, Shaoyang-nonspecific acupuncture, Yangming-specific acupuncture or sham acupuncture [control]). All groups received 20 treatments, which included electrical stimulation, over a period of four weeks. The primary outcome was the number of days with a migraine experienced during weeks 5–8 after randomization. Our secondary outcomes included the frequency of migraine attack, migraine intensity and migraine-specific quality of life.

Results:

Compared with patients in the control group, patients in the acupuncture groups reported fewer days with a migraine during weeks 5–8; however, the differences between treatments were not significant (p > 0.05). There was a significant reduction in the number of days with a migraine during weeks 13–16 in all acupuncture groups compared with control (Shaoyang-specific acupuncture v. control: difference –1.06 [95% confidence interval (CI) –1.77 to –0.5], p = 0.003; Shaoyang-nonspecific acupuncture v. control: difference –1.22 [95% CI –1.92 to –0.52], p < 0.001; Yangming-specific acupuncture v. control: difference –0.91 [95% CI –1.61 to –0.21], p = 0.011). We found that there was a significant, but not clinically relevant, benefit for almost all secondary outcomes in the three acupuncture groups compared with the control group. We found no relevant differences between the three acupuncture groups.

Interpretation:

Acupuncture tested appeared to have a clinically minor effect on migraine prophylaxis compared with sham acupuncture.

Trial Registration:

Clinicaltrials.gov NCT00599586

About 6%–8% of men and 16%–18% of women in the United States and England experience migraines, with or without an aura.1,2 A prevalence of 1% has been reported in mainland China,3 compared with 4.7% in Hong Kong and 9.1% in Taiwan.4,5 A recent Cochrane meta-analysis suggests that acupuncture as migraine prophylaxis is safe and effective and should be considered as a treatment option for willing patients.6

Although the specific effects of acupuncture are controversial, acupuncture, as it is currently practised, clearly differentiates between real acupuncture points and nonacupuncture points. The Chinese Government launched the National Basic Research Program to obtain more data about the specificity of acupuncture points.7

Trials from Italy and Brazil8,9 showed that acupuncture was more effective than sham acupuncture in preventing migraines, but other trials have reported no differences.10–13 There is no evidence that one acupuncture strategy is more effective than another for treating migraines. According to acupuncture theory, a headache on the lateral side is usually defined as a Shaoyang headache. In Jinkuiyi,14 migraines are said to affect the yang meridians (including the Taiyang, Yangming and Shaoyang meridians). In Lingshu,15 the Shaoyang meridians are said to go through the lateral side of the body; therefore, the Shaoyang meridians are thought to be superior for treating migraines. Some points on the Shaoyang meridians are regarded as being more specific for migraines than other points.16

Our aim was to investigate whether acupuncture at specific acupuncture points was more efficacious in preventing migraine than sham acupuncture at nonacupuncture points. We also investigated whether the efficacy varied when acupuncture points along different meridians or points along the same meridian were used.  相似文献

20.