Similar articles
20 similar articles found
2.
Geoffrey Miller, EMBO reports (2012) 13(10): 880–884
Runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals.

Sex and marketing have been coupled for a very long time. At the cultural level, their relationship has been appreciated since the 1960s 'Mad Men' era, when the sexual revolution coincided with the golden age of advertising, and marketers realized that 'sex sells'. At the biological level, their interplay goes much further back, to the Cambrian explosion around 530 million years ago. During this period of rapid evolutionary expansion, multicellular organisms began to evolve elaborate sexual ornaments to advertise their genetic quality to the most important consumers of all in the great mating market of life: the opposite sex.

Maintaining the genetic quality of one's offspring had already been a problem for billions of years. Ever since life originated around 3.7 billion years ago, RNA and DNA have been under selection to copy themselves as accurately as possible [1]. Yet perfect self-replication is biochemically impossible, and almost all replication errors are harmful rather than helpful [2]. Thus, mutations have been eroding the genomic stability of single-celled organisms for trillions of generations, and countless lineages of asexual organisms have suffered extinction through mutational meltdown—the runaway accumulation of copying errors [3]. Only through wildly profligate self-cloning could such organisms have any hope of leaving at least a few offspring with no new harmful mutations, so they could best survive and reproduce.

Around 1.5 billion years ago, bacteria evolved the most basic form of sex to minimize mutation load: bacterial conjugation [4]. By swapping bits of DNA across the pilus (a tiny intercellular bridge), a bacterium can replace DNA sequences compromised by copying errors with intact sequences from its peers.
Bacteria finally had some defence against mutational meltdown, and they thrived and diversified.

Then, with the evolution of genuine sexual reproduction through meiosis, perhaps around 1.2 billion years ago, eukaryotes made a great advance in their ability to purge mutations. By combining their genes with a mate's genes, they could produce progeny with huge genetic variety—and, crucially, with a wider range of mutation loads [5]. The unlucky offspring who happened to inherit an above-average number of harmful mutations from both parents would die young without reproducing, taking many mutations into oblivion with them. The lucky offspring who happened to inherit a below-average number of mutations from both parents would live long, prosper and produce offspring of higher genetic quality. Sexual recombination also made it easier to spread and combine the rare mutations that happened to be useful, opening the way for much faster evolutionary advances [6]. Sex became the foundation of almost all complex life because it was so good at both short-term damage limitation (purging bad mutations) and long-term innovation (spreading good mutations).

Yet single-celled organisms always had a problem with sex: they were not very good at choosing sexual partners with the best genes, that is, the lowest mutation loads. Given bacterial capabilities for chemical communication such as quorum sensing [7], perhaps some prokaryotes and eukaryotes paid attention to short-range chemical cues of genetic quality before swapping genes.
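The purging mechanism described above can be sketched numerically. The toy simulation below (an illustration under assumed parameter values, not a model from the article) tracks the mean number of deleterious mutations per individual in a small population. Asexual offspring clone a parent; sexual offspring inherit a random half of each of two parents' mutations, which widens the variance in load that selection can act on. Population size, mutation rate `U` and selection strength `s` are all made-up round numbers.

```python
import random

def mean_load(sexual, pop_size=100, generations=300, U=0.5, s=0.05, seed=2):
    """Mean deleterious-mutation count per individual after `generations`.

    Parents are drawn with probability proportional to fitness
    (1 - s) ** load. Asexual offspring clone one parent; sexual
    offspring inherit a random half of each of two parents'
    mutations. Each newborn then gains one new harmful mutation
    with probability U. All parameter values are illustrative.
    """
    rng = random.Random(seed)
    pop = [0] * pop_size
    for _ in range(generations):
        weights = [(1 - s) ** load for load in pop]
        new_pop = []
        for _ in range(pop_size):
            if sexual:
                a = rng.choices(pop, weights=weights)[0]
                b = rng.choices(pop, weights=weights)[0]
                # each of the two parents' mutations is passed on
                # independently with probability 1/2
                load = sum(rng.random() < 0.5 for _ in range(a + b))
            else:
                load = rng.choices(pop, weights=weights)[0]
            if rng.random() < U:  # a new copying error
                load += 1
            new_pop.append(load)
        pop = new_pop
    return sum(pop) / pop_size

asexual_load = mean_load(sexual=False)
sexual_load = mean_load(sexual=True)
print(f"asexual load: {asexual_load:.1f}, sexual load: {sexual_load:.1f}")
```

With settings like these, the clonal population's load ratchets steadily upward (Muller's ratchet), while recombination keeps the sexual population's load near its mutation–selection balance, illustrating why producing offspring with a wider range of mutation loads helps selection purge copying errors.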
However, mating was mainly random before the evolution of longer-range senses and nervous systems.

All of this changed profoundly with the Cambrian explosion, which saw organisms undergoing a genetic revolution that increased the complexity of gene regulatory networks, and a morphological revolution that increased the diversity of multicellular body plans. It was also a neurological and psychological revolution. As organisms became increasingly mobile, they evolved senses such as vision [8] and more complex nervous systems [9] to find food and evade predators. However, these new senses also empowered a sexual revolution, as they gave animals new tools for choosing sexual partners. Rather than hooking up randomly with the nearest mate, animals could now select mates based on visible cues of genetic quality such as body size, energy level, bright coloration and behavioural competence. By choosing the highest-quality mates, they could produce higher-quality offspring with lower mutation loads [10]. Such mate choice imposed selection on all of those quality cues to become larger, brighter and more conspicuous, amplifying them into true sexual ornaments: biological luxury goods such as the guppy's tail and the peacock's train that function mainly to impress and attract females [11]. These sexual ornaments evolved to have a complex genetic architecture, to capture a larger share of the genetic variation across individuals and to reveal mutation load more accurately [12].

Ever since the Cambrian, the mating market for sexually reproducing animal species has been transformed to some degree into a consumerist fantasy world of conspicuous quality, status, fashion, beauty and romance. Individuals advertise their genetic quality and phenotypic condition through reliable, hard-to-fake signals or 'fitness indicators' such as pheromones, songs, ornaments and foreplay.
Mates are chosen on the basis of who displays the largest, costliest, most precise, most popular and most salient fitness indicators. Mate choice for fitness indicators is not restricted to females choosing males, but often occurs in both sexes [13], especially in socially monogamous species with mutual mate choice, such as humans [14].

Thus, for 500 million years, animals have had to straddle two worlds in perpetual tension: natural selection and sexual selection. Each type of selection works through different evolutionary principles and dynamics, and each yields different types of adaptation and biodiversity. Neither fully dominates the other, because sexual attractiveness without survival is a short-lived vanity, whereas ecological competence without reproduction is a long-lived sterility. Natural selection shapes species to fit their geographical habitats and ecological niches, and favours efficiency in growth, foraging, parasite resistance, predator evasion and social competition. Sexual selection shapes each sex to fit the needs, desires and whims of the other sex, and favours conspicuous extravagance in all sorts of fitness indicators. Animal life walks a fine line between efficiency and opulence. More than 130,000 plant species also play the sexual ornamentation game, having evolved flowers to attract pollinators [15].

The sexual selection world challenges the popular misconception that evolution is blind and dumb. In fact, as Darwin emphasized, sexual selection is often perceptive and clever, because animal senses and brains mediate mate choice. This makes sexual selection closer in spirit to artificial selection, which is governed by the senses and brains of human breeders.
In so far as sexual selection shaped human bodies, minds and morals, we were also shaped by intelligent designers—who just happened to be romantic hominids rather than fictional gods [16].

Thus, mate choice for genetic quality is analogous in many ways to consumer choice for brand quality [17]. Mate choice and consumer choice are both semi-conscious—partly instinctive, partly learned through trial and error and partly influenced by observing the choices made by others. Both are partly focused on the objective qualities and useful features of the available options, and partly focused on their arbitrary, aesthetic and fashionable aspects. Both create the demand that suppliers try to understand and fulfil, with each sex striving to learn the mating preferences of the other, and marketers striving to understand consumer preferences through surveys, focus groups and social media data mining.

Mate choice and consumer choice can both yield absurdly wasteful outcomes: a huge diversity of useless, superficial variations in the biodiversity of species and the economic diversity of brands, products and packaging. Most biodiversity seems to be driven by sexual selection favouring whimsical differences across populations in the arbitrary details of fitness indicators, not just by naturally selected adaptation to different ecological niches [18]. The result is that within each genus, a species can be most easily identified by its distinct mating calls, sexual ornaments, courtship behaviours and genital morphologies [19], not by different foraging tactics or anti-predator defences.
Similarly, much of the diversity in consumer products—such as shirts, cars, colleges or mutual funds—is at the level of arbitrary design details, branding, packaging and advertising, not at the level of objective product features and functionality.

These analogies between sex and marketing run deep, because both depend on reliable signals of quality. Until recently, two traditions of signalling theory developed independently in the biological and social sciences. The first landmark in biological signalling theory was Charles Darwin's analysis of mate choice for sexual ornaments as cues of good fitness and fertility in his book The Descent of Man, and Selection in Relation to Sex (1871). Ronald Fisher analysed the evolution of mate preferences for fitness indicators in 1915 [20]. Amotz Zahavi proposed the 'handicap principle' in 1975, arguing that only costly signals could be reliable, hard-to-fake indicators of genetic quality or phenotypic condition [21]. In 1978, Richard Dawkins and John Krebs applied game theory to analyse the reliability of animal signals and the co-evolution of signallers and receivers [22]. In 1990, Alan Grafen proposed a formal model of the 'handicap principle' [23], and Richard Michod and Oren Hasson analysed 'reliable indicators of fitness' [24]. Since then, biological signalling theory has flourished and has informed research on sexual selection, animal communication and social behaviour.

The parallel tradition of signalling theory in the social sciences and philosophy goes back to Aristotle, who argued that ethical and rational acts are reliable signals of underlying moral and cognitive virtues (ca 350–322 BC).
Friedrich Nietzsche analysed beauty, creativity, morality and even cognition as expressions of biological vigour by using signalling logic (1872–1888). Thorstein Veblen proposed that conspicuous luxuries, quality workmanship and educational credentials act as reliable signals of wealth, effort and taste in The Theory of the Leisure Class (1899), The Instinct of Workmanship (1914) and The Higher Learning in America (1922). Vance Packard used signalling logic to analyse social class, runaway consumerism and corporate careerism in The Status Seekers (1959), The Waste Makers (1960) and The Pyramid Climbers (1962), and Ernst Gombrich analysed beauty in art as a reliable signal of the artist's skill and effort in Art and Illusion (1977) and A Sense of Order (1979). Michael Spence developed formal models of educational credentials as reliable signals of capability and conscientiousness in Market Signalling (1974). Robert Frank used signalling logic to analyse job titles, emotions, career ambitions and consumer luxuries in Choosing the Right Pond (1985), Passions within Reason (1988), The Winner-Take-All Society (1995) and Luxury Fever (2000).

Evolutionary psychology and evolutionary anthropology have been integrating these two traditions to better understand many puzzles in human evolution that defy explanation in terms of natural selection for survival.
For example, signalling theory has illuminated the origins and functions of facial beauty, female breasts and buttocks, body ornamentation, clothing, big-game hunting, hand-axes, art, music, humour, poetry, story-telling, courtship gifts, charity, moral virtues, leadership, status-seeking, risk-taking, sports, religion, political ideologies, personality traits, adaptive self-deception and consumer behaviour [16,17,25,26,27,28,29].

Building on signalling theory and sexual selection theory, the new science of evolutionary consumer psychology [30] has been making big advances in understanding consumer goods as reliable signals—not just signals of monetary wealth and elite taste, but signals of deeper traits such as intelligence, moral virtues, mating strategies and the 'Big Five' personality traits: openness, conscientiousness, agreeableness, extraversion and emotional stability [17]. These individual traits are deeper than wealth and taste in several ways: they are found in the other great apes, are heritable across generations, are stable across life, are important in all cultures and are naturally salient when interacting with mates, friends and kin [17,27,31]. For example, consumers seek elite university degrees as signals of intelligence; they buy organic fair-trade foods as signals of agreeableness; and they value foreign travel and avant-garde culture as signals of openness [17]. New molecular genetics research suggests that mutation load accounts for much of the heritable variation in human intelligence [32] and personality [33], so consumerist signals of these traits might be revealing genetic quality indirectly.
If so, conspicuous consumption can be seen as just another 'good-genes indicator' favoured by mate choice.

Indeed, studies suggest that much conspicuous consumption, especially by young single people, functions as some form of mating effort. After men and women think about potential dates with attractive mates, men say they would spend more money on conspicuous luxury goods such as prestige watches, whereas women say they would spend more time doing conspicuous charity activities such as volunteering at a children's hospital [34]. Conspicuous consumption by males reveals that they are pursuing a short-term mating strategy [35], and this activity is most attractive to women at peak fertility near ovulation [36]. Men give much higher tips to lap dancers who are ovulating [37]. Ovulating women choose sexier and more revealing clothes, shoes and fashion accessories [38]. Men living in towns with a scarcity of women compete harder to acquire luxuries and accumulate more consumer debt [39]. Romantic gift-giving is an important tactic in human courtship and mate retention, especially for men who might be signalling commitment [40]. Green consumerism—preferring eco-friendly products—is an effective form of conspicuous conservation, signalling both status and altruism [41].

Findings such as these challenge traditional assumptions in economics. For example, ever since the Marginal Revolution—the development of economic theory during the 1870s—mainstream economics has made the 'Rational Man' assumption that consumers maximize their expected utility from their product choices, without reference to what other consumers are doing or desiring. This assumption was convenient both analytically—as it allowed easier mathematical modelling of markets and price equilibria—and ideologically, in legitimizing free markets and luxury goods.
However, new research from evolutionary consumer psychology and behavioural economics shows that consumers often desire 'positional goods' such as prestige-branded luxuries that signal social position and status through their relative cost, exclusivity and rarity. Positional goods create 'positional externalities'—the harmful social side-effects of runaway status-seeking and consumption arms races [42].

These positional externalities are important because they undermine the most important theoretical justification for free markets—the first fundamental theorem of welfare economics, a formalization of Adam Smith's 'invisible hand' argument, which says that competitive markets always lead to efficient distributions of resources. In the 1930s, the British Marxist biologists Julian Huxley and J.B.S. Haldane were already wary of such rationales for capitalism, and understood that runaway consumerism imposes social and ecological costs on humans in much the same way that runaway sexual ornamentation imposes survival costs and extinction risks on other animals [16]. Evidence shows that consumerist status-seeking leads to economic inefficiencies and costs to human welfare [42]. Runaway consumerism might be one predictable result of a human nature shaped by sexual selection, but we can display desirable traits in many other ways, such as green consumerism, conspicuous charity, ethical investment and social media such as Facebook [17,43].

Future work in evolutionary consumer psychology should give further insights into the links between sex, mutations, evolution and marketing. These links have been important for at least 500 million years and probably sparked the evolution of human intelligence, language, creativity, beauty, morality and ideology.
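The logic of a positional externality can be made concrete with a toy two-consumer game (a hypothetical illustration, not a model from the article): each person's payoff combines the ordinary consumption they keep and the status gained by out-spending a rival. Because status depends only on the spending difference, it cancels at any symmetric outcome, yet it still rewards each individual for spending more, so the equilibrium overspends relative to the social optimum. The budget and status weight are arbitrary assumed numbers.

```python
BUDGET = 10          # units of discretionary income (illustrative assumption)
STATUS_WEIGHT = 2.0  # marginal value of out-spending the rival (assumption)

def utility(own, other):
    """Leftover ordinary consumption plus status from relative spending.

    Status depends only on the spending *difference*, so at any
    symmetric outcome it contributes nothing, yet it still rewards
    each individual for spending one unit more than the other.
    """
    return (BUDGET - own) + STATUS_WEIGHT * (own - other)

def best_response(other):
    """The spend level that maximizes utility against a fixed rival spend."""
    return max(range(BUDGET + 1), key=lambda own: utility(own, other))

# iterate best responses to find the symmetric Nash equilibrium
spend = 0
for _ in range(10):
    spend = best_response(spend)

# the symmetric spend level that maximizes shared welfare
social_optimum = max(range(BUDGET + 1), key=lambda x: utility(x, x))

print(spend, social_optimum)  # prints: 10 0
```

Both players end up spending their whole budget even though everyone would be better off spending nothing on the status good: this gap between the equilibrium and the social optimum is the positional externality, and it is why competitive markets need not deliver the efficiency the welfare theorem promises once preferences are positional.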
A better understanding of these links could help us nudge global consumerist capitalism into a more sustainable form that imposes lower costs on the biosphere and yields higher benefits for future generations.

3.
The differentiation of pluripotent stem cells into various progeny is perplexing. In vivo, nature imposes strict fate constraints. In vitro, PSCs differentiate into almost any phenotype. Might the concept of 'cellular promiscuity' explain these surprising behaviours?

John Gurdon's [1] and Shinya Yamanaka's [2] Nobel Prize involves discoveries that vex fundamental concepts about the stability of cellular identity [3,4], ageing as a rectified path and the differences between germ cells and somatic cells. The differentiation of pluripotent stem cells (PSCs) into progeny, including spermatids [5] and oocytes [6], is perplexing. In vivo, nature imposes strict fate constraints. Yet in vitro, reprogrammed PSCs liberated from the body's governance freely differentiate into any phenotype—except placenta—violating even the segregation of somatic cells from germ cells. Albeit anthropomorphic, might the concept of 'cellular promiscuity' explain these surprising behaviours?

Fidelity to one's differentiated state is nearly universal in vivo—even cancers retain some allegiance. Appreciating the mechanisms in vitro that liberate reprogrammed cells from the numerous constraints governing development in vivo might provide new insights. Similarly to highway guiderails, a range of constraints preclude progeny cells within embryos and organisms from travelling too far from the trajectory set by their ancestors. Restrictions are imposed externally—basement membranes and intercellular adhesions; internally—chromatin, cytoskeleton, endomembranes and mitochondria; and temporally, by ageing.

'Cellular promiscuity' was glimpsed previously during cloning: it was seen when somatic cells successfully 'fertilized' enucleated oocytes in amphibians [1], and later with 'Dolly' [7]. Embryonic stem cells (ESCs) corroborate this.
The inner cell mass of the blastocyst develops faithfully, but liberation from the trophectoderm generates pluripotent ESCs in vitro, which are freed from fate and polarity restrictions. These freedom-seeking ESCs still abide by three-dimensional rules, as they conform to chimaera body patterning when injected into blastocysts. Yet transplantation elsewhere results in chaotic teratomas, or in helter-skelter in vitro differentiation—that is, pluripotency.

August Weismann's germ plasm theory recognized 130 years ago that gametes produce somatic cells, never the reverse. Primordial germ cell migrations into fetal gonads, and parent-of-origin imprints, explain how germ cells are sequestered, retaining genomic and epigenomic purity. Left uncontaminated, these future gametes are held in pristine form to parent the next generation. However, the cracks separating germ and somatic lineages in vitro are widening [5,6]. Perhaps they are restrained within gonads not for their purity, but to prevent wild, uncontrolled misbehaviours resulting in germ cell tumours.

The 'cellular promiscuity' concept regarding PSCs in vitro might explain why cells of nearly any desired lineage can be detected using monospecific markers. Are assays so sensitive that rare cells can be detected in heterogeneous cultures? Certainly, population heterogeneity is a consideration for transplantable cells—dopaminergic neurons and islet cells—compared with applications needing few cells—sperm and oocytes. This dilemma of maintaining cellular identity in vitro after reprogramming is significant. If not addressed, the value of unrestrained induced PSCs (iPSCs) as reliable models for 'diseases in a dish', let alone for subsequent therapeutic transplantations, might be diminished. X-chromosome re-inactivation variants in differentiating human PSCs, epigenetic imprint errors and copy number variations are all indicators of in vitro infidelity.
PSCs, which are held to be undifferentiated cells, are artefacts after all, as in vivo they would undergo their programmed development.

If correct, the hypothesis accounts for concerns raised about the inherent genomic and epigenomic unreliability of iPSCs; they are likely to be unfaithful to their in vivo differentiation trajectories owing both to their freedom from in vivo developmental programmes and to poorly characterized modifications in culture conditions. 'Memory' of the PSC's identity in vivo might need to be improved by using reprogramming approaches that do not fully erase imprints. Regulatory authorities, including the Food & Drug Administration, require evidence that cultured PSCs do retain their original cellular identity. Notwithstanding fidelity lapses at the organismal level, the recognition that our cells have intrinsic freedom-loving tendencies in vitro might generate better approaches for releasing somatic cells only partly, into probation, rather than into full emancipation.

4.
The authors of “The anglerfish deception” respond to the criticism of their article.

EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.70
EMBO reports (2012) 13(2): 100–105; doi: 10.1038/embor.2011.254

Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA's environmental risk assessment (ERA) procedures. In our article for EMBO reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal's false political inflation of science, which denies the normative commitments that are inevitable in risk assessment (RA). Unfortunately, the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA's central control over RA, they entirely miss the relevant point: the unacknowledged policy—normative commitments being made before and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified.

The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading—as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. The respondents seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters of RA policy, which should be established in a broadly deliberative manner “in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent” [1].
This was neither recognized nor permitted in the proposed EC reform—a central point that our respondents fail to recognize.

In dismissing our criticism that comparative safety assessment appears as a 'first step' in defining ERA, according to the new EFSA ERA guidelines—which we correctly referred to in our text but incorrectly referenced in the bibliography [5]—our respondents again ignore this widely accepted 'framing' or 'problem formulation' point for science. The choice of comparator has normative implications, as it immediately commits to a definition of what is normal and, implicitly, acceptable. Therefore, the specific form and purpose of the comparison(s) is part of the validity question. Their claim that we are against comparison as a scientific step is incorrect—of course comparison is necessary. It simply acts as a shield behind which to avoid our and others' [6] challenge to their self-appointed discretion to define—or worse, to allow applicants to define—what counts in the comparative frame. Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as 'science'. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5]—in both the representational figure (p. 11) and the text: “the outcome of the comparative safety assessment allows the determination of those ‘identified'' characteristics that need to be assessed [...] and will further structure the ERA” (p. 13). Second, despite their claims to the contrary, 'comparative safety assessment'—effectively a resurrection of substantial equivalence—is a concept taken from consumer health RA, controversially applied to the more open-ended processes of ERA, and one that has long been discredited if used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10].
The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similarly unaccountable RA steps introduced into the ERA Guidance, such as the judgement of 'biological relevance', 'ecological relevance' or 'familiarity'. We cannot address these here, but our basic point is that endless 'methodological' elaborations of the kind that our EFSA colleagues perform only obscure the institutional changes needed to properly address the normative questions for policy-engaged science.

Our respondents deny our claim concerning the singular form of science that the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by the EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the 'one door, one key' policy framework for science, deriving from the Single Market logic, that forces such singularity. While this might be legitimate policy, it is not scientific. It is political economy.

Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point: that the EC-proposed legislative reform would only exacerbate the problem. Ignoring the normative dimensions of regulatory science, and siphoning off scientific debate and its normative issues to a select expert panel—which, despite claiming independence, faces an EU Ombudsman challenge [12] and a European Parliament refusal to discharge its 2010 budget because of continuing questions over conflicts of interest [13,14]—will not achieve quality science.
What is required are effective institutional mechanisms and cultural norms that identify, and deliberatively address, the otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA's sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.

5.
Elixirs of death
Substandard and fake drugs are increasingly threatening lives in both the developed and developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem, and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis, and several dozen died, after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet US Pharmacopeia requirements [9].

Developing countries are particularly susceptible to substandard and fake medicine.
Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and in any case have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). “People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system,” explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected, combined with relatively low penalties, has turned falsifying medicine into the “perfect crime” [2].

Figure 1: Women sell smuggled, counterfeit medicine on the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the past two years in Ivory Coast. Issouf Sanogo/AFP Photo/Getty Images.

There are two main categories of illegitimate drugs. 'Substandard' medicines might result from poor-quality ingredients, production errors and incorrect storage. 'Falsified' medicine is made with clear criminal intent. It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses the mixture into pills [10]. Whilst falsified medicines typically do not contain any active ingredients, substandard medicine might contain subtherapeutic amounts.
This is particularly problematic when it comes to anti-infective drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13], and increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].

Even if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere, where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are interested in good-quality medicines, the different parties seem to have difficulties agreeing on how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting the quality of medicine [14,16].

The concern that intellectual property (IP) interests threaten public health dates back to the Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement of the World Trade Organization (WTO), adopted in 1994 to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine.
Although the agreement includes flexibilities, such as the possibility for governments to grant compulsory licenses to manufacture or import a generic version of a patented drug, it has not always been clear how countries can use them [14,16,17].

In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO's Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multinational pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. “We're left with decisions being taken based on patents and trademarks that should be taken based on health,” commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA. “The health community is shooting themselves in the foot.”

The conflation of health-care and IP issues is also reflected in the unclear use of the term ‘counterfeit’ [2,14]. “Since the 1990s the World Health Organization (WHO) has used the term ‘counterfeit’ in the sense we now use ‘falsified’,” explained Hogerzeil.
“The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit’ got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit’—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.

The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn't make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to react is tragically ironic, as this stance hampers the growth of its own generic companies such as Ranbaxy, Cipla or Piramal. “I certainly don't believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because its products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company's reputation and have a negative impact on its revenues when customers stop buying the product.

The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine.
It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multinational drug companies and the possibility that issues of medicine quality were being conflated with attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010; for example, it no longer hosts IMPACT's secretariat at its headquarters in Geneva [2].

In 2010, the WHO's member states established a working group to further investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper's authors demand more action and propose a binding legal framework: a treaty. “Until we have stronger public health law, I don't think that we are going to resolve this problem,” said Bate, who is one of the authors of the paper.

Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice, such as a “voluntary soft law”, that countries can sign to express their will to do better.
“At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic,” commented Hogerzeil, who is also on the IOM committee. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: “The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don't start negotiating one.”

Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempt to safeguard medicines needs to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act. To counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies do better and by improving the quality control of drug regulatory authorities.

Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multinational companies benefit from economies of scale to cope with these demands, but smaller companies often struggle and compromise on quality [2,21]. “India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small,” Hogerzeil explained. “And only by being strict can you force them to combine and to become bigger industries that can afford good-quality assurance systems.” Clamping down on drug quality will therefore lead to a consolidation of the industry, which is an essential step. “If you look at Europe and the US, there were hundreds of drug companies—now there are dozens.
And if you look at the situation in India and China today, there are thousands, and that will have to come down to dozens as well,” Bate explained.

In addition to consolidating the market by applying stricter rules, the IOM has also suggested measures to support companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards. Another suggestion is to harmonize the market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.

Once the medicine leaves the manufacturer, controlling distribution systems becomes another major challenge in combatting falsified and substandard medicine. Global drug supply chains have grown increasingly complicated; drugs cross borders, are sold back and forth between wholesalers and distributors, and are often repackaged. Still, there is a major difference between developing and developed countries. In the latter, relatively few companies dominate the market, whereas in poorer nations the distribution system is often fragmented and uncontrolled, with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improving drug quality.
“And we can start in the US,” Hogerzeil commented.

Distribution could be improved at different levels, starting with the import of medicine. “There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in,” Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. “Everybody dealing with medicine has to be licensed,” Hogerzeil said. “And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent falsified medicines from entering the legal supply chain.” The last level would be a track-and-trace system to identify authentic drugs [2]. Every single package of medicine should be identifiable through an individual marker, such as a 3D bar code. Once it is sold, it is ticked off in a central database, so the marker cannot be reused.

According to Hogerzeil, equivalent measures at these different levels should be established in every country. “I don't believe in double standards,” he said. “Don't say to Uganda: ‘you can't do that’. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation.”

Nigeria, for instance, has demonstrated that with enough political will, it is possible to reduce the proliferation of falsified and substandard medicine.
Nigeria had been a major source of falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili had a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili's successor, is committed to continuing her work [10]. Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized case, the former head of China's State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China's fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see the quality of medicine as a priority. But they should, and affluent countries should help—not only because health is a human right, but also for economic reasons. A great deal of time and money is invested in testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients.
Falsified and substandard medicines are a financial burden on health systems, and the emergence of drug-resistant pathogens might render invaluable medications useless. Investing in the safety of medicine is therefore both a humane and an economic imperative.

L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.

Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main beneficiary, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the Endless Frontier, and it has become the underlying rationale for public support and funding of science.

However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned the development of measures both to assess the quality of scientific research itself and to determine the societal impact of research. Although the first set of measures has been relatively successful and is widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research.
The impact of applied research, such as drug development, IT or engineering, is obvious, but the benefits of basic research are less so, harder to assess, and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex's Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.

The problem actually begins with defining the ‘societal impact of research’. A series of different concepts has been introduced: ‘third-stream activities’ [4], ‘societal benefits’ or ‘societal quality’ [5], ‘usefulness’ [6], ‘public values’ [7], ‘knowledge transfer’ [8] and ‘societal relevance’ [9,10].
Yet each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.

In this context, ‘societal benefits’ refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits’ are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits’ add to the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits’ increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].

Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to, for example, Thomson Reuters' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or the journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14].
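By way of contrast, the scientific-impact metrics mentioned above are mechanical to compute, which is part of their appeal. A minimal sketch of the h index (the largest number h such that h of a researcher's papers have each been cited at least h times); the citation counts are invented purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still supports an h of `rank`
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers:
print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)
print(h_index([0, 0, 1]))         # -> 1
```

Nothing comparably simple exists for societal impact, which is precisely the measurement gap the article describes.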
For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and his co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.

A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurement [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse, complex and contingent, and it is not clear what should be attributed to research and what to other inputs. The third is the internationality problem, which stems from the international nature of R&D and innovation and makes attribution virtually impossible. Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research yielding only short-term benefits, ignoring potential long-term impact.

In addition, there are four other problems. First, it is hard to find experts to assess societal impact on the basis of peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4,17].
Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to an institution's specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18].

Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20].
The recommendation from this consultation is that impact should be measured in a quantifiable way, with expert panels reviewing narrative evidence in case studies supported by appropriate indicators [19,21].

Many of the studies that have carried out societal impact measurement have done so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring a phenomenon as complex as societal impact. The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art’ [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and his co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19].
‘Mode 1’ describes research governed by the academic interests of a specific community, whereas ‘Mode 2’ is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity, and basic research conducted in the context of application [19].

The new REF will also entail changes in budget allocations: in the evaluation of a research unit for the purpose of allocations, the societal-impact dimension will account for 20% [19]. The final REF guidance contains lists of examples of different types of societal impact [24].

Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].

Yet the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact, it challenges the long-standing reward system in science, whereby scientists receive credit—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will probably follow—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact.
“The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar’, that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work.

The involvement of scientists is also necessary for the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available but ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19,28,29].

Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance.
Rymer has noted that “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18].

Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support for basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is, and will remain, near-impossible to predict the impact of certain research projects years or decades down the line.

Martinson BC 《EMBO reports》2011,12(8):758-762
Universities have been churning out PhD students to reap financial and other rewards for training biomedical scientists. This deluge of cheap labour has created unhealthy competition, which encourages scientific misconduct.

Most developed nations invest a considerable amount of public money in scientific research for a variety of reasons: most importantly because research is regarded as a motor of economic progress and development, and to train a research workforce for both academia and industry. Not surprisingly, governments are occasionally confronted with questions about whether the money invested in research is appropriate and whether taxpayers are getting the maximum value for their investment.

The training and maintenance of the research workforce is a large component of these investments. Yet discussions in the USA about the appropriate size of this workforce have typically been contentious, owing to an apparent lack of reliable data to tell us whether the system yields academic ‘reproduction rates’ that are above, below or at replacement levels. In the USA, questions about the size and composition of the research workforce have historically been driven by concerns that the system produces an insufficient number of scientists. As Donald Kennedy, then Editor-in-Chief of Science, noted several years ago, leaders in prestigious academic institutions have repeatedly rung alarm bells about shortages in the science workforce. Less often does one see questions raised about whether too many scientists are being produced, or concerns about the unintended consequences that may result from such overproduction.
Yet recognizing that resources are finite, it seems reasonable to ask what level of competition for resources is productive, and at what level it becomes counter-productive. Finding a proper balance between the size of the research workforce and the resources available to sustain it has other important implications. Unhealthy competition—too many people clamouring for too little money and too few desirable positions—creates its own problems, most notably research misconduct and lower-quality, less innovative research. If an increasing number of scientists are scrambling for jobs and resources, some might begin to cut corners in order to gain a competitive edge. Moreover, many in the science community worry that every publicized case of research misconduct could jeopardize those resources, if politicians and taxpayers become unwilling to invest in a research system that seems to be riddled with fraud and misconduct. The biomedical research enterprise in the USA provides a useful context in which to examine the level of competition for resources among academic scientists. My thesis is that the system of publicly funded research in the USA as it is currently configured supports a feedback system of institutional incentives that generate excessive competition for resources in biomedical research. These institutional incentives encourage universities to overproduce graduate students and postdoctoral scientists, who are both trainees and a cheap source of skilled labour for research while in training.
However, once they have completed their training, they become competitors for money and positions, thereby exacerbating competitive pressures. The resulting scarcity of resources, partly through its effect on peer review, leads to a shunting of resources away from both younger researchers and the most innovative ideas, which undermines the effectiveness of the research enterprise as a whole. Faced with an increasing number of grant applications and the consequent decrease in the percentage of projects that can be funded, reviewers tend to ‘play it safe’ and favour projects that have a higher likelihood of yielding results, even if the research is conservative in the sense that it does not explore new questions. Resource scarcity can also introduce unwanted randomness to the process of determining which research gets funded. A large group of scientists, led by a cancer biologist, has recently mounted a campaign against a change in a policy of the National Institutes of Health (NIH) to allow only one resubmission of an unfunded grant proposal (Wadman, 2011). The core of their argument is that peer reviewers are likely able to distinguish the top 20% of research applications from the rest, but that within that top 20%, distinguishing the top 5% or 10% means asking peer reviewers for a level of precision that is simply not possible. With funding levels in many NIH institutes now within that 5–10% range, the argument is that reviewers are being forced to choose at random which excellent applications do and do not get funding.
In addition to the inefficiency of overproduction and excessive competition in terms of their costs to society and opportunity costs to individuals, these institutional incentives might undermine the integrity and quality of science, and reduce the likelihood of breakthroughs. My colleagues and I have expressed such concerns about workforce dynamics and related issues in several publications (Martinson, 2007; Martinson et al, 2005, 2006, 2009, 2010). Early on, we observed that, “missing from current analyses of scientific integrity is a consideration of the wider research environment, including institutional and systemic structures” (Martinson et al, 2005). Our more recent publications have been more specific about the institutional and systemic structures concerned. It seems that at least a few important leaders in science share these concerns. In April 2009, the NIH, through the National Institute of General Medical Sciences (NIGMS), issued a request for applications (RFA) calling for proposals to develop computational models of the research workforce (http://grants.nih.gov/grants/guide/rfa-files/RFA-GM-10-003.html). Although such an initiative might be premature given the current level of knowledge, the rationale behind the RFA seems irrefutable: “there is a need to […] pursue a systems-based approach to the study of scientific workforce dynamics.” Roughly four decades after the NIH appeared on the scene, this is, to my knowledge, the first official, public recognition that the biomedical workforce tends not to conform nicely to market forces of supply and demand, despite the fact that others have previously made such arguments. Early last year, Francis Collins, Director of the NIH, published a PolicyForum article in Science, voicing many of the concerns I have expressed about specific influences that have led to growth rates in the science workforce that are undermining the effectiveness of research in general, and biomedical research in particular.
He notes the increasing stress in the biomedical research community after the end of the NIH “budget doubling” between 1998 and 2003, and the likelihood of further disruptions when the American Recovery and Reinvestment Act of 2009 (ARRA) funding ends in 2011. Arguing that innovation is crucial to the future success of biomedical research, he notes the tendency towards conservatism of the NIH peer-review process, and how this worsens in fiscally tight times. Collins further highlights the ageing of the NIH workforce—as grants increasingly go to older scientists—and the increasing time that researchers are spending in itinerant and low-paid postdoctoral positions as they stack up in a holding pattern, waiting for faculty positions that may or may not materialize. Having noted these challenging trends, and echoing the central concerns of a 2007 Nature commentary (Martinson, 2007), he concludes that “…it is time for NIH to develop better models to guide decisions about the optimum size and nature of the US workforce for biomedical research. A related issue that needs attention, though it will be controversial, is whether institutional incentives in the current system that encourage faculty to obtain up to 100% of their salary from grants are the best way to encourage productivity.” Similarly, Bruce Alberts, Editor-in-Chief of Science, writing about incentives for innovation, notes that the US biomedical research enterprise includes more than 100,000 graduate students and postdoctoral fellows.
He observes that “only a select few will go on to become independent research scientists in academia”, and argues that “assuming that the system supporting this career path works well, these will be the individuals with the most talent and interest in such an endeavor” (Alberts, 2009). His editorial is not concerned with what happens to the remaining majority, but argues that even among the select few who manage to succeed, the funding process for biomedical research “forces them to avoid risk-taking and innovation”. The primary culprit, in his estimation, is the conservatism of the traditional peer-review system for federal grants, which values “research projects that are almost certain to ‘work’”. He continues, “the innovation that is essential for keeping science exciting and productive is replaced by […] research that has little chance of producing the breakthroughs needed to improve human health.” Although I believe his assessment of the symptoms is correct, I think he has misdiagnosed the cause, in part because he has failed to identify which influence he is concerned with from the network of influences in biomedical research. To contextualize the influences of concern to Alberts, we must consider the remaining majority of doctorally trained individuals so easily dismissed in his editorial, and further examine what drives the dynamics of the biomedical research workforce. Labour economists might argue that market forces will always balance the number of individuals with doctorates with the number of appropriate jobs for them in the long term. Such arguments would ignore, however, the typical information asymmetry between incoming graduate students, whose knowledge about their eventual job opportunities and career options is by definition far more limited than that of those who run the training programmes.
They would also ignore the fact that universities are generally not confronted with the externalities resulting from overproduction of PhDs, and have positive financial incentives that encourage overproduction. During the past 40 years, NIH ‘extramural’ funding has become crucial for graduate student training, faculty salaries and university overheads. For their part, universities have embraced NIH extramural funding as a primary revenue source that, for a time, allowed them to implement a business model based on the interconnected assumptions that, as one of the primary ‘outputs’ or ‘products’ of the university, more doctorally trained individuals are always better than fewer, and because these individuals are an excellent source of cheap, skilled labour during their training, they help to contain the real costs of faculty research. However, this business model has also made universities increasingly dependent on NIH funding. As recently documented by the economist Paula Stephan, most faculty growth in graduate school programmes during the past decade has occurred in medical colleges, with the majority—more than 70%—in non-tenure-track positions. Arguably, this represents a shift of risk away from universities and onto their faculty. Despite perennial cries of concern about shortages in the research workforce (Butz et al, 2003; Kennedy et al, 2004; National Academy of Sciences et al, 2005), a number of commentators have recently expressed concerns that the current system of academic research might be overbuilt (Cech, 2005; Heinig et al, 2007; Martinson, 2007; Stephan, 2007).
Some explicitly connect this to structural arrangements between the universities and NIH funding (Cech, 2005; Collins, 2007; Martinson, 2007; Stephan, 2007). In 1995, David Korn pointed out what he saw as some problematic aspects of the business model employed by Academic Medical Centers (AMCs) in the USA during the past few decades (Korn, 1995). He noted the reliance of AMCs on the relatively low-cost, but highly skilled labour represented by postdoctoral fellows, graduate students and others—who quickly start to compete with their own professors and mentors for resources. Having identified the economic dependence of the AMCs on these inexpensive labour pools, he noted additional problems with the graduate training programmes themselves. “These programs are […] imbued with a value system that clearly indicates to all participants that true success is only marked by the attainment of a faculty position in a high-profile research institution and the coveted status of principal investigator on NIH grants.” Pointing to “more than 10 years of severe supply/demand imbalance in NIH funds”, Korn concluded that, “considering the generative nature of each faculty mentor, this enterprise could only sustain itself in an inflationary environment, in which the society’s investment in biomedical research and clinical care was continuously and sharply expanding.” From 1994 to 2003, total funding for biomedical research in the USA increased at an annual rate of 7.8%, after adjustment for inflation. The comparable rate of growth between 2003 and 2007 was 3.4% (Dorsey et al, 2010). These observations resonate with the now classic observation by Derek J.
de Solla Price, from more than 30 years before, that growth in science frequently follows an exponential pattern that cannot continue indefinitely; the enterprise must eventually come to a plateau (de Solla Price, 1963). In May 2009, echoing some of Korn’s observations, Nobel laureate Roald Hoffmann caused a stir in the US science community when he argued for a “de-coupling” of the dual roles of graduate students as trainees and cheap labour (Hoffmann, 2009). His suggestion was to cease supporting graduate students with faculty research grants, and to use the money instead to create competitive awards for which graduate students could apply, making them more similar to free agents. During the ensuing discussion, Shirley Tilghman, president of Princeton University, argued that “although the current system has succeeded in maximizing the amount of research performed […] it has also degraded the quality of graduate training and led to an overproduction of PhDs in some areas. Unhitching training from research grants would be a much-needed form of professional ‘birth control’” (Mervis, 2009). Although the issue of what I will call the ‘academic birth rate’ is the central concern of this analysis, the ‘academic end-of-life’ also warrants some attention. The greying of the NIH research workforce is another important driver of workforce dynamics, and it is integrally linked to the fate of young scientists. A 2008 news item in Science quoted then 70-year-old Robert Wells, a molecular geneticist at Texas A&M University: “if I and other old birds continue to land the grants, the [young scientists] are not going to get them.” He worries that the budget will not be able to support the 100 people “I’ve trained […] to replace me” (Kaiser, 2008).
While his claim of 100 trainees might be astonishing, it might be more astonishing that his was the outlying perspective. The majority of senior scientists interviewed for that article voiced intentions to keep doing science—and going after NIH grants—until someone forced them to stop or they died. Some have looked at the current situation with concern, primarily because of the threats it poses to the financial and academic viability of universities (Korn, 1995; Heinig et al, 2007; Korn & Heinig, 2007), although most of those who express such concerns have been distinctly reticent to acknowledge the role of universities in creating and maintaining the situation. Others have expressed concerns about the differential impact of extreme competition and meagre job prospects on the recruitment, development and career survival of young and aspiring scientists (Freeman et al, 2001; Kennedy et al, 2004; Martinson et al, 2006; Anderson et al, 2007a; Martinson, 2007; Stephan, 2007). There seems to be little disagreement, however, that the system has generated excessively high competition for federal research funding, and that this threatens to undermine the very innovation and production of knowledge that is its raison d’être. The production of knowledge in science, particularly of the ‘revolutionary’ variety, is generally not a linear input–output process with predictable returns on investment, clear timelines and high levels of certainty (Lane, 2009). On the contrary, it is arguable that “revolutionary science is a high risk and long-term endeavour which usually fails” (Charlton & Andras, 2008). Predicting where, when and by whom breakthroughs in understanding will be produced has proven to be an extremely difficult task.
In the face of such uncertainty, and denying the realities of finite resources, some have argued that the best bet is to maximize the number of scientists, using that logic to justify a steady-state production of new PhDs, regardless of whether the labour market is sending signals of increasing or decreasing demand for that supply. Only recently have we begun to explore the effects of the current arrangement on the process of knowledge production, and on innovation in particular (Charlton & Andras, 2008; Kolata, 2009). Bruce Alberts, in the above-mentioned editorial, points to several initiatives launched by the NIH that aim to get a larger share of NIH funding into the hands of young scientists with particularly innovative ideas. These include the “New Innovator Award,” the “Pioneer Award” and the “Transformational R01 Awards”. The proportion of NIH funding dedicated to these awards, however, amounts to “only 0.27% of the NIH budget” (Alberts, 2009). Such a small proportion of the NIH budget does not seem likely to generate a large amount of more innovative science. Moreover, to the extent that such initiatives actually succeed in enticing more young investigators to become dependent on NIH funds, any benefit these efforts have in terms of innovation may be offset by further increases in competition for resources that will come when these new ‘innovators’ reach the end of this specialty funding and add to the rank and file of those scrapping for funds through the standard mechanisms. Our studies on research integrity have been mostly oriented towards understanding how the environments within which academic scientists work might affect their behaviour, and thus the quality of the science they produce (Anderson et al, 2007a, 2007b; Martinson et al, 2009, 2010).
My colleagues and I have focused on whether biomedical researchers perceive fairness in the various exchange relationships within their work systems. I am persuaded by the argument that expectations of fairness in exchange relationships have been hard-wired into us through evolution (Crockett et al, 2008; Hsu et al, 2008; Izuma et al, 2008; Pennisi, 2009), with the advent of modern markets being a primary manifestation of this. Thus, violations of these expectations strike me as potentially corrupting influences. Such violations might be prime motivators for ill will, possibly engendering bad-faith behaviour among those who perceive themselves to have been slighted, and therefore increasing the risk of research misconduct. They might also corrupt the enterprise by signalling to talented young people that biomedical research is an inhospitable environment in which to develop a career, possibly chasing away some of the most talented individuals, and encouraging a selection of characteristics that might not lead to optimal effectiveness, in terms of scientific innovation and productivity (Charlton, 2009). To the extent that we have an ecology with steep competition that is fraught with high risks of career failure for young scientists after they incur large costs of time, effort and sometimes financial resources to obtain a doctoral degree, why would we expect them to take on the additional, substantial risks involved in doing truly innovative science and asking risky research questions? And why, in such a cut-throat setting, would we not anticipate an increase in corner-cutting, and a corrosion of good scientific practice, collegiality, mentoring and sociability? Would we not also expect a reduction in high-risk, innovative science, and a reversion to a more career-safe type of ‘normal’ science? Would this not reduce the effectiveness of the institution of biomedical research?
I do not claim to know the conditions needed to maximize the production of research that is novel, innovative and conducted with integrity. I am fairly certain, however, that putting scientists in tenuous positions in which their careers and livelihoods would be put at risk by pursuing truly revolutionary research is one way to insure against it.

10.
The temptation to silence dissenters whose non-mainstream views negatively affect public policies is powerful. However, silencing dissent, no matter how scientifically unsound it might be, can cause the public to mistrust science in general. Dissent is crucial for the advancement of science. Disagreement is at the heart of peer review and is important for uncovering unjustified assumptions, flawed methodologies and problematic reasoning. Enabling and encouraging dissent also helps to generate alternative hypotheses, models and explanations. Yet, despite the importance of dissent in science, there is growing concern that dissenting voices have a negative effect on the public perception of science, on policy-making and public health. In some cases, dissenting views are deliberately used to derail certain policies. For example, dissenting positions on climate change, environmental toxins or the hazards of tobacco smoke [1,2] seem to laypeople to be equally valid conflicting opinions and thereby create or increase uncertainty. Critics often use legitimate scientific disagreements about narrow claims to reinforce the impression of uncertainty about general and widely accepted truths; for instance, that a given substance is harmful [3,4]. This impression of uncertainty about the evidence is then used to question particular policies [1,2,5,6]. The negative effects of dissent on establishing public policies are present in cases in which the disagreements are scientifically well-grounded, but the significance of the dissent is misunderstood or blown out of proportion.
A study showing that many factors affect the size of reef islands, to the effect that they will not necessarily be reduced in size as sea levels rise [7], was simplistically interpreted by the media as evidence that climate change will not have a negative impact on reef islands [8]. In other instances, dissenting voices affect the public perception of and motivation to follow public-health policies or recommendations. For example, the publication of a now debunked link between the measles, mumps and rubella vaccine and autism [9], as well as the claim that the mercury preservative thimerosal, which was used in childhood vaccines, was a possible risk factor for autism [10,11], created public doubts about the safety of vaccinating children. Although later studies showed no evidence for these claims, doubts led many parents to reject vaccinations for their children, risking the herd immunity for diseases that had been largely eradicated from the industrialized world [12,13,14,15]. Many scientists have therefore come to regard dissent as problematic if it has the potential to affect public behaviour and policy-making. However, we argue that such concerns about dissent as an obstacle to public policy are both dangerous and misguided. Whether dissent is based on genuine scientific evidence or is unfounded, interested parties can use it to sow doubt, thwart public policies, promote problematic alternatives and lead the public to ignore sound advice. In response, scientists have adopted several strategies to limit these negative effects of dissent—masking dissent, silencing dissent and discrediting dissenters. The first strategy aims to present a united front to the public. Scientists mask existing disagreements among themselves by presenting only those claims or pieces of evidence about which they agree [16].
Although there is nearly universal agreement among scientists that average global temperatures are increasing, there are also legitimate disagreements about how much warming will occur, how quickly it will occur and the impact it might have [7,17,18,19]. As presenting these disagreements to the public probably creates more doubt and uncertainty than is warranted, scientists react by presenting only general claims [20]. A second strategy is to silence dissenting views that might have negative consequences. This can take the form of self-censorship when scientists are reluctant to publish or publicly discuss research that might—incorrectly—be used to question existing scientific knowledge. For example, there are genuine disagreements about how best to model cloud formation, water vapour feedback and aerosols in general circulation models, all of which have significant effects on the magnitude of global climate change predictions [17,19]. Yet, some scientists are hesitant to make these disagreements public, for fear that they will be accused of being denialists, faulted for confusing the public and policy-makers, censured for abetting climate-change deniers, or criticized for undermining public policy [21,22,23,24]. Another strategy is to discredit dissenters, especially in cases in which the dissent seems to be ideologically motivated. This could involve publicizing the financial or political ties of the dissenters [2,6,25], which would call attention to their probable bias. In other cases, scientists might discredit the expertise of the dissenter. One such example concerns a 2007 study published in the Proceedings of the National Academy of Sciences USA, which claimed that caddisfly larvae consuming Bt maize pollen die at twice the rate of larvae feeding on non-Bt maize pollen [26].
Immediately after publication, both the authors and the study itself became the target of relentless and sometimes scathing attacks from a group of scientists who were concerned that anti-GMO (genetically modified organism) interest groups would seize on the study to advance their agenda [27]. The article was criticized for its methodology and its conclusions, the Proceedings of the National Academy of Sciences USA was criticized for publishing the article and the US National Science Foundation was criticized for funding the study in the first place. Public policies, health advice and regulatory decisions should be based on the best available evidence and knowledge. As the public often lack the expertise to assess the quality of dissenting views, disagreements have the potential to cast doubt over the reliability of scientific knowledge and lead the public to question relevant policies. Strategies to block dissent therefore seem reasonable as a means to protect much-needed or effective health policies, advice and regulations. However, even if the public were unable to evaluate the science appropriately, targeting dissent is not the most appropriate strategy to prevent negative side effects for several reasons. Chiefly, it contributes to the problems that the critics of dissent seek to address, namely increasing the cacophony of dissenting voices that only aim to create doubt. Focusing on dissent as a problematic activity sends the message to policy-makers and the public that any dissent undermines scientific knowledge. Reinforcing this false assumption further incentivizes those who seek merely to create doubt to thwart particular policies.
Not surprisingly, think-tanks, industry and other organizations are willing to manufacture dissent simply to derail policies that they find economically or ideologically undesirable. Another danger of targeting dissent is that it probably stifles legitimate crucial voices that are needed for both advancing science and informing sound policy decisions. Attacking dissent makes scientists reluctant to voice genuine doubts, especially if they believe that doing so might harm their reputations, damage their careers and undermine prevailing theories or policies. For instance, a panel of scientists for the US National Academy of Sciences, when presenting a risk assessment of radiation in 1956, omitted wildly different predictions about the potential genetic harm of radiation [16]. They did not include this wide range of predictions in their final report precisely because they thought the differences would undermine confidence in their recommendations. Yet, this information could have been relevant to policy-makers. As such, targeting dissent as an obstacle to public policy might simply reinforce self-censorship and stifle legitimate and scientifically informed debate. If this happens, scientific progress is hindered. Second, even if the public has mistaken beliefs about science or the state of the knowledge of the science in question, focusing on dissent is not an effective way to protect public policy from false claims. It fails to address the presumed cause of the problem—the apparent lack of understanding of the science by the public. A better alternative would be to promote the public’s scientific literacy. If the public were educated to better assess the quality of the dissent and thus disregard instances of ideological, unsupported or unsound dissent, dissenting voices would not have such a negative effect.
Of course, one might argue that educating the public would be costly and difficult, and that therefore, the public should simply listen to scientists about which dissent to ignore and which to consider. This is, however, a paternalistic attitude that requires the public to remain ignorant ‘for their own good’; a position that seems unjustified on many levels as there are better alternatives for addressing the problem. Moreover, silencing dissent, rather than promoting scientific literacy, risks undermining public trust in science even if the dissent is invalid. This was exemplified by the 2009 case of hacked e-mails from a computer server at the University of East Anglia’s Climate Research Unit (CRU). After the selective leaking of the e-mails, climate scientists at the CRU came under fire because some of the quotes, which were taken out of context, seemed to suggest that they were fudging data or suppressing dissenting views [28,29,30,31]. The stolen e-mails gave further ammunition to those opposing policies to reduce greenhouse emissions as they could use accusations of data ‘cover up’ as proof that climate scientists were not being honest with the public [29,30,31]. It also allowed critics to present climate scientists as conspirators who were trying to push a political agenda [32]. As a result, although there was nothing scientifically inappropriate revealed in the ‘climategate’ e-mails, it had the consequence of undermining the public’s trust in climate science [33,34,35,36]. A significant amount of evidence shows that the ‘deficit model’ of public understanding of science, as described above, is too simplistic to account correctly for the public’s reluctance to accept particular policy decisions [37,38,39,40]. It ignores other important factors such as people’s attitudes towards science and technology, their social, political and ethical values, their past experiences and the public’s trust in governmental institutions [41,42,43,44].
The development of sound public policy depends not only on good science, but also on value judgements. One can agree with the scientific evidence for the safety of GMOs, for instance, but still disagree with the widespread use of GMOs because of social justice concerns about the developing world’s dependence on the interests of the global market. Similarly, one need not reject the scientific evidence about the harmful health effects of sugar to reject regulations on sugary drinks. One could rationally challenge such regulations on the grounds that informed citizens ought to be able to make free decisions about what they consume. Whether or not these value judgements are justified is an open question, but the focus on dissent hinders our ability to have that debate. As such, targeting dissent completely fails to address the real issues. The focus on dissent, and the threat that it seems to pose to public policy, misdiagnoses the problem as one of the public misunderstanding science, its quality and its authority. It assumes that scientific or technological knowledge is the only relevant factor in the development of policy and it ignores the role of other factors, such as value judgements about social benefits and harms, and institutional trust and reliability [45,46]. The emphasis on dissent, and thus on scientific knowledge, as the only or main factor in public policy decisions does not give due attention to these legitimate considerations. Furthermore, by misdiagnosing the problem, targeting dissent also impedes more effective solutions and prevents an informed debate about the values that should guide public policy. By framing policy debates solely as debates over scientific facts, the normative aspects of public policy are hidden and neglected.
Relevant ethical, social and political values fail to be publicly acknowledged and openly discussed.

Controversies over GMOs and climate policies have called attention to the negative effects of dissent in the scientific community. Based on the assumption that the public’s reluctance to support particular policies is the result of an inability to properly understand scientific evidence, scientists have tried to limit dissenting views that create doubt. However, as outlined above, targeting dissent as an obstacle to public policy probably does more harm than good. It fails to focus on the real problem at stake: that science is not the only relevant factor in sound policy-making. Of course, we do not deny that scientific evidence is important to the development of public policy and behavioural decisions. Rather, our claim is that this role is misunderstood and often oversimplified in ways that actually contribute to problems in developing sound science-based policies.

Inmaculada de Melo-Martín and Kristen Intemann

11.
12.
Elucidating the temporal order of silencing
Izaurralde E 《EMBO reports》2012,13(8):662-663

13.
Of mice and men     
Thomas Erren and colleagues point out that studies on light and circadian rhythmicity in humans have their own interesting pitfalls, of which all researchers should be mindful.

We would like to compliment, and complement, the recent Opinion in EMBO reports by Stuart Peirson and Russell Foster (2011), which calls attention to the potential obstacles associated with linking observations on light and circadian rhythmicity made on nocturnal mice to diurnally active humans. Pitfalls to consider include qualitative extrapolations from short-lived rodents to long-lived humans, quantitative extrapolations of very different doses (Gold et al, 1992), and the varying sensitivities of each species to experimental optical radiation as a circadian stimulus (Bullough et al, 2006), all of which can have a critical influence on an experiment. Thus, Peirson & Foster remind us that “humans are not big mice”. We certainly agree, but we also thought it worthwhile to point out that human studies have their own interesting pitfalls, of which all researchers should be mindful.

Many investigations with humans—such as testing the effects of different light exposures on alertness, cognitive performance, well-being and depression—can suffer from what has been coined the ‘Hawthorne effect’. The term is derived from a series of studies conducted at the Western Electric Company’s Hawthorne Works near Chicago, Illinois, between 1924 and 1932, to test whether the productivity of workers would change with changing illumination levels. One important punch line was that productivity increased with almost any change that was made at the workplaces. One prevailing interpretation of these findings is that humans who know that they are being studied—and in most investigations they cannot help but notice—might exhibit responses that have little or nothing to do with what was intended as the experiment.
Those who conduct circadian biology studies in humans try hard to eliminate possible ‘Hawthorne effects’, but every so often, all they can do is hope for the best and expect the Hawthorne effect to be insignificant.

Even so, and despite the obstacles to circadian experiments with both mice and humans, the wealth of information from work in both species is indispensable. To exemplify: in the last handful of years alone, experimental research in mice has substantially contributed to our understanding of the retinal interface between visible light and circadian circuitry (Chen et al, 2011); has shown that disturbances of the circadian system through manipulations of the light–dark cycle might accelerate carcinogenesis (Filipski et al, 2009); and has suggested that perinatal light exposure—through an imprinting of the stability of circadian systems (Ciarleglio et al, 2011)—might be related to a human’s susceptibility to mood disorders (Erren et al, 2011a) and internal cancer development later in life (Erren et al, 2011b). Future studies in humans must now examine whether, and to what extent, what was found in mice is applicable to and relevant for humans.

The bottom line is that we must be aware of, and first and foremost exploit, evolutionary legacies, such as the seemingly ubiquitous photoreceptive clockwork that marine and terrestrial vertebrates—including mammals such as mice and humans—share (Erren et al, 2008). Translating insights from studies in animals to humans (Erren et al, 2011a,b), and vice versa, into testable research can be a means to one end: to arrive at sensible answers to pressing questions about light and circadian clockworks that, no doubt, play key roles in human health and disease. Pitfalls, however, abound on either side, and we agree with Peirson & Foster that they have to be recognized and monitored.

14.
15.
EMBO J (2013) 32 23, 3017–3028 10.1038/emboj.2013.224; published online 18 October 2013

Commensal gut bacteria benefit their host in many ways, for instance by aiding digestion and producing vitamins. In a new study in The EMBO Journal, Jones et al (2013) report that commensal bacteria can also promote intestinal epithelial renewal in both flies and mice. Interestingly, among commensals this effect is most specific to Lactobacilli, the friendly bacteria we use to produce cheese and yogurt. Lactobacilli stimulate NADPH oxidase (dNox/Nox1)-dependent ROS production by intestinal enterocytes and thereby activate intestinal stem cells.

The human gut contains huge numbers of bacteria (∼10¹⁴ per person) that play beneficial roles in our health, including digestion, building our immune system and competing with harmful microbes (Sommer and Backhed, 2013). Both commensal and pathogenic bacteria can elicit antimicrobial responses in the intestinal epithelium and also stimulate epithelial turnover (Buchon et al, 2013; Sommer and Backhed, 2013). In contrast to gut pathogens, relatively little is known about how commensal bacteria influence intestinal turnover. In a simple yet elegant study reported recently in The EMBO Journal, Jones et al (2013) show that, among several different commensal bacteria tested, only Lactobacilli promoted much intestinal stem cell (ISC) proliferation, and they did so by stimulating reactive oxygen species (ROS) production. Interestingly, the specific effect of Lactobacilli was similar in both Drosophila and mice. In addition to distinguishing functional differences between species of commensals, this work suggests how the ingestion of Lactobacillus-containing probiotic supplements or food (e.g., yogurt) might support epithelial turnover and health.

In both mammals and insects, ISCs give rise to intestinal enterocytes, which not only absorb nutrients from the diet but must also interact with the gut microbiota (Jiang and Edgar, 2012).
The metazoan intestinal epithelium has developed conserved responses to enteric bacteria, for instance the expression of antimicrobial peptides (AMPs; Gallo and Hooper, 2012; Buchon et al, 2013), presumably to kill harmful bacteria while allowing symbiotic commensals to flourish. In addition to AMPs, intestinal epithelial cells use NADPH-family oxidases to generate ROS that are used as microbicides (Lambeth and Neish, 2013). High ROS levels during enteric infections likely act non-discriminately against both commensals and pathogens, but controlled, low-level ROS can act as signalling molecules that regulate various cellular processes, including proliferation (Lambeth and Neish, 2013). In flies, exposure to pathogenic Gram-negative bacteria has been reported to result in ROS (H₂O₂) production by an enzyme called dual oxidase (Duox; Ha et al, 2005). Duox activity in the fly intestine (and likely also the mammalian one) has recently been discovered to be stimulated by uracil secreted by pathogenic bacteria (Lee et al, 2013). In the mammalian intestine another enzyme, NADPH oxidase (Nox), has also been shown to produce ROS in the form of superoxide (O₂⁻), in this case in response to formylated bacterial peptides (Lambeth and Neish, 2013). A conserved role for Nox in the Drosophila intestinal epithelium had not until now been explored.

Jones et al (2013) tested seven different commensal bacteria to see which would stimulate ROS production by the fly’s intestinal epithelium, and found that only one species, a Gram-positive Lactobacillus, could stimulate significant production of ROS in intestinal enterocytes. Five bacterial species were tested in mice or cultured intestinal cells, and again it was a Lactobacillus that generated the strongest ROS response. Although not all of the most prevalent enteric bacteria were assayed, those that were—such as E. coli—induced only mild, barely detectable levels of ROS in enterocytes.
Surprisingly, although bacteria pathogenic to Drosophila, such as Erwinia carotovora, were expected to stimulate ROS production via Duox, Jones et al (2013) did not observe this in flies using the ROS-detecting dye hydrocyanine-Cy3 or a ROS-sensitive transgenic reporter, Glutathione S-transferase-GFP. Further, Jones et al (2013) found that genetically suppressing Nox in either Drosophila or mice decreased ROS production after Lactobacillus ingestion. Consistent with the important role of Nox, Duox appeared not to be required for ROS production after Lactobacillus ingestion. In addition, Jones et al (2013) found that Lactobacilli also promoted DNA replication—a metric of cell proliferation and epithelial renewal—in the fly’s intestine, and that this was also ROS- and Nox-dependent. Again, the same relationship was found in the mouse small intestine. Together, these results suggest a conserved mechanism by which Lactobacilli can stimulate Nox-dependent ROS production in intestinal enterocytes and thereby promote ISC proliferation and enhance gut epithelial renewal.

In the fly midgut, uracil produced by pathogenic bacteria can stimulate Duox-dependent ROS production, which is thought to act as a microbicide (Lee et al, 2013), and can also promote ISC proliferation (Buchon et al, 2009). However, Duox-produced ROS may also damage the intestinal epithelium itself and thereby promote epithelial regeneration indirectly through stress responses. In this disease scenario, ROS appears to be sensed by the stress-activated Jun N-terminal kinase (JNK; Figure 1A), which can induce pro-proliferative cytokines of the Leptin/IL-6 family (Unpaireds, Upd1–3) (Buchon et al, 2009; Jiang et al, 2009). These cytokines activate JAK/STAT signalling in the ISCs, promoting their growth and proliferation, and accelerating regenerative repair of the gut epithelium (Buchon et al, 2009; Jiang et al, 2009).
It is also possible, however, that low-level ROS, or specific types of ROS (e.g., H₂O₂), might induce ISC proliferation directly by acting as a signal between enterocytes and ISCs. Since commensal Lactobacillus stimulates ROS production via Nox rather than Duox, this might be a case in which a non-damaging ROS signal promotes intestinal epithelial renewal without stress signalling or a microbicidal effect (Figure 1B). However, Jones et al (2013) stopped short of ruling out a role for oxidative damage, cell death or stress signalling in the intestinal epithelium following colonization by Lactobacilli, so these parameters must be checked in future studies. Perhaps even the friendliest symbionts cause a bit of ‘healthy’ damage to the gut lining, stimulating it to refresh and renew. Whether damage-dependent or not, the stimulation of Drosophila ISC proliferation by commensals and pathogens alike appears to involve the same cytokine (Upd3; Buchon et al, 2009), and so some of the differences between truly pathogenic and ‘friendly’ gut microbes might be ascribed more to matters of degree than to qualitative distinctions. Future studies exploring exactly how different types of ROS signals stimulate JNK activity, gut cytokine expression and epithelial renewal should be able to sort this out, and perhaps help us learn how to better manage the ecosystems in our own bellies. From the lovely examples reported by Jones et al (2013), an experimental back-and-forth between the Drosophila and mouse intestine seems an informative way to go.

Figure 1. Metazoan intestinal epithelial responses to commensal and pathogenic bacteria. (A) High reactive oxygen species (ROS) levels generated by dual oxidase (Duox) in response to uracil secretion by pathogenic bacteria. (B) Low ROS levels generated by NADPH oxidase (Nox) in response to commensal bacteria.
In addition to acting as a microbicide, ROS in flies may stimulate JNK signalling and cytokine (Upd1–3) expression in enterocytes, thereby stimulating ISC proliferation and epithelial turnover or regeneration. Whether this stimulation requires damage to, or loss of, enterocytes has yet to be explored.

16.
EMBO J (2012) 31 22, 4276–4288 doi:10.1038/emboj.2012.250; published online 18 September 2012

AgRP/NPY neurons are critical regulators of body weight and food intake. Concordant with their orexigenic effects, it is expected that AgRP ablation leads to the appearance of a lean phenotype. In the current issue of The EMBO Journal, Joly-Amado et al (2012) describe an obese phenotype in a model of AgRP-ablated mice, and link it to a shift in metabolic profile in efferent tissues such as the liver, muscle and pancreas.

Hypothalamic AgRP/NPY neurons are known to play a key role in the regulation of body weight and food intake. With the advent of ‘toxin-receptor-mediated cell knockout’ technology, several reports have tried to address the physiological relevance of this set of neurons. The ablation of Agrp-expressing neurons has different consequences depending on the age of the mice. In the neonatal stage, temporal deletion of AgRP neurons by diphtheria toxin (DT) injection has no major effects on energy balance (Luquet et al, 2005, 2007). In adult mice, however, as expected from the potent orexigenic role of AgRP neurons, DT administration to these AgRPDTR mice promotes a rapid and pronounced reduction in body weight and food intake (Bewick et al, 2005; Gropp et al, 2005; Luquet et al, 2005), which can even lead to starvation (Luquet et al, 2005). It has been hypothesized that the absence of effect in neonates could be due to ‘compensatory mechanisms’ developed during the neonatal stage, when the neurocircuitry is not fully formed (Luquet et al, 2005). It has also been shown that DT injection in adult mice that were previously treated with DT during the neonatal stage does not have the same drastic, starvation-inducing effect (Luquet et al, 2005). In this issue of The EMBO Journal, Joly-Amado et al (2012) report an increase in feeding efficiency in 3-month-old AgRPDTR mice after DT injection during the neonatal stage.
This change in feeding efficiency leads to an obese phenotype related to a decrease in locomotor activity. These results, contrary to those expected from the orexigenic function of AgRP neurons, provide clues about the existence of a ‘compensatory mechanism’. Furthermore, in this study, the development of obesity is concomitant with an increase in fat depot weight.

It is known that AgRP released from the AgRP/NPY neurons in the hypothalamus acts as an endogenous inhibitor of melanocortin receptors (MCRs) in the melanocortin system (Cone, 2005). AgRP antagonizes the effects of POMC cleavage products (e.g., α-MSH) on these receptors to affect energy balance. Nogueiras et al (2007) have reported that central manipulation of MCRs (using inhibitors or agonists) is able to control adiposity by modifying lipogenesis in WAT. Moreover, they found that hypothalamic MCRs act as a switch between carbohydrate and fat utilization: the blockade of MCRs decreases the percentage of fat utilized (Nogueiras et al, 2007). In the current issue, Joly-Amado et al (2012) describe changes in the same direction as that previous study; they report evidence of AgRP/NPY neuron involvement in the control of nutrient partitioning and lipid metabolism in peripheral tissues, in agreement with another report that demonstrated the importance of Sirt1 in AgRP neurons in modulating substrate utilization during fasting (Dietrich et al, 2010). The AgRPDTR mice used in this study show a shift in substrate utilization: AgRP deficiency stimulates lipid utilization, that is, these mice obtain energy from stored fat. Despite this shift in metabolic profile, adult AgRPDTR mice present more adiposity than controls. This can be explained by the fact that these mice show a potent increase in lipogenesis and triglyceride (TG) content in the liver.
Postprandial plasma TG levels are raised, but they are normalized by fasting, providing evidence that the peripheral tissues of these AgRP-ablated mice obtain the energy required to maintain their functions from lipids. Furthermore, the authors describe a ‘paradoxical benefit’ in HFD-exposed mice. These animals are protected against the effects of the HFD—their body weight and fat content are indistinguishable from those of wild-type mice, probably because they utilize the excess dietary fat for energy.

To investigate the cause of this increase in fat utilization, the authors examined the possibility that the muscles use TG as fuel. They uncovered different results in oxidative (soleus) and fast glycolytic (white gastrocnemius) muscles. The former showed an increase in lipid utilization correlated with a decrease in the maximal OXPHOS complex I respiration rate; no changes were found in the latter. Taken together, these results indicate that the ability to oxidize lipids for energy is enhanced in the oxidative muscles of adult AgRP-ablated mice.

Joly-Amado et al (2012) also address the question of how hypothalamic AgRP/NPY neurons can affect peripheral tissues. It is well known that the autonomic nervous system connects hypothalamic areas with different tissues (Nogueiras et al, 2007). Consistent with previous studies that identified the sympathetic nervous system (SNS) as the main mediator between the hypothalamus and WAT (Nogueiras et al, 2007), the authors found that the SNS also mediates the response of the efferent tissues: all of the effects described in liver, muscle and pancreas depend on SNS outflow from the hypothalamus.

AgRP/NPY neurons also release γ-aminobutyric acid (GABA; Horvath et al, 1997). It has been demonstrated that GABA is necessary for the normal regulation of body weight (Tong et al, 2008).
Constitutive inactivation of Vgat in AgRP neurons provokes body weight loss associated with an increase in locomotor activity (Tong et al, 2008). It has been reported that bretazenil (a GABA agonist) replacement in adult AgRPDTR mice after DT injection rescues the anorexic phenotype and minimizes changes in body weight and food intake (Wu et al, 2009). Furthermore, that study showed that bretazenil administration solely into the parabrachial nucleus is enough to prevent the anorexia. Joly-Amado et al (2012), following the same strategy as previous reports, investigated the role of GABA in the reported phenotype. They found that subcutaneous bretazenil treatment rescues the obese phenotype in adult AgRPDTR mice, increasing the RQ, which indicates an increase in carbohydrate utilization, and significantly decreasing body fat content, thus emphasizing the importance of GABA release by AgRP/NPY neurons in modulating energy balance.

In summary (see Figure 1), Joly-Amado et al (2012), in a series of elegant experiments, describe a novel phenotype observed in a mouse model of neonatal depletion of AgRP neurons. They link the obese phenotype observed in these mice with an increase in lipogenesis in the liver and an increase in lipid utilization by oxidative muscles. Furthermore, the authors show that these changes in the lipid profile of the peripheral tissues are due to AgRP ablation and are mediated by the SNS, and are not a consequence of adiposity in these mice. This study provides more clues about the existence of a ‘compensatory mechanism’ developed during the postnatal period. In spite of this, the mice are still sensitive to AgRP and GABA treatment. More effort is clearly required to elucidate completely the mechanism by which this model of AgRP-ablated mice becomes obese in adulthood.

Figure 1. AgRP neurons are essential for normal energy homeostasis.
AgRP neurons play a critical role in the regulation of energy balance, and manipulation of these hypothalamic neurons alters the normal phenotype. Joly-Amado et al (2012) describe that AgRP deletion in the neonatal stage brings about an obese phenotype in adulthood: AgRP ablation promotes changes in efferent tissues that lead to an increase in adiposity.

17.
Lessons from science studies for the ongoing debate about ‘big’ versus ‘little’ research projects

During the past six decades, the importance of scientific research to the developed world and the daily lives of its citizens has led many industrialized countries to rebrand themselves as ‘knowledge-based economies’. The increasing role of science as a main driver of innovation and economic growth has also changed the nature of research itself. Starting with the physical sciences, recent decades have seen academic research increasingly conducted in the form of large, expensive and collaborative ‘big science’ projects that often involve multidisciplinary, multinational teams of scientists, engineers and other experts.

Although laboratory biology was late to join the big science trend, there has nevertheless been a remarkable increase in the number, scope and complexity of research collaborations and projects involving biologists over the past two decades (Parker et al, 2010). The Human Genome Project (HGP) is arguably the most well known of these and attracted serious scientific, public and government attention to ‘big biology’. Initial exchanges were polarized and often polemic, as proponents of the HGP applauded the advent of big biology and argued that it would produce results unattainable through other means (Hood, 1990). Critics highlighted the negative consequences of massive-scale research, including the industrialization, bureaucratization and politicization of research (Rechsteiner, 1990).
They also suggested that it was not suited to generating knowledge at all; Nobel laureate Sydney Brenner joked that sequencing was so boring it should be done by prisoners: “the more heinous the crime, the bigger the chromosome they would have to decipher” (Roberts, 2001).

A recent Opinion in EMBO reports summarized the arguments against “the creeping hegemony” of ‘big science’ over ‘little science’ in biomedical research. First, many large research projects are of questionable scientific and practical value. Second, big science transfers the control of research topics and goals to bureaucrats, when decisions about research should be primarily driven by the scientific community (Petsko, 2009). Gregory Petsko makes a valid point in his Opinion about wasteful research projects and raises the important question of how research goals should be set and by whom. Here, we contextualize Petsko’s arguments by drawing on the history and sociology of science to expound the drawbacks and benefits of big science. We then advance an alternative to the current antipodes of ‘big’ and ‘little’ biology, which offers some of the benefits and avoids some of the adverse consequences.

Big science is not a recent development. Among the first large, collaborative research projects were the Manhattan Project to develop the atomic bomb, and efforts to decipher German codes during the Second World War. The concept itself was put forward in 1961 by physicist Alvin Weinberg, and further developed by historian of science Derek De Solla Price in his pioneering book, Little Science, Big Science: “The large-scale character of modern science, new and shining and all powerful, is so apparent that the happy term ‘Big Science’ has been coined to describe it” (De Solla Price, 1963). Weinberg noted that science had become ‘big’ in two ways.
First, through the development of elaborate research instrumentation, the use of which requires large research teams, and second, through the explosive growth of scientific research in general. More recently, big science has come to refer to a diverse but strongly related set of changes in the organization of scientific research. This includes expensive equipment and large research teams, but also the increasing industrialization of research activities, the escalating frequency of interdisciplinary and international collaborations, and the increasing manpower needed to achieve research goals (Galison & Hevly, 1992). Many areas of biological research have shifted in these directions in recent years and have radically altered the methods by which biologists generate scientific knowledge.

Understanding the implications of this change begins with an appreciation of the history of collaborations in the life sciences—biology has long been a collaborative effort. Natural scientists accompanied the great explorers in the grand alliance between science and exploration during the sixteenth and seventeenth centuries (Capshew & Rader, 1992), which not only served to map uncharted territories, but also contributed enormously to knowledge of the fauna and flora discovered. These early expeditions gradually evolved into coordinated, multidisciplinary research programmes, which began with the International Polar Years, intended to concentrate international research efforts at the North and South Poles (1882–1883; 1932–1933).
The Polar Years became exemplars of large-scale life-science collaboration, begetting the International Geophysical Year (1957–1958) and the International Biological Programme (1968–1974).

Despite this long history of collaboration, laboratory biology remained ‘small-scale’ until the rising prominence of molecular biology changed the research landscape. During the late 1950s and early 1960s, many research organizations encouraged international collaboration in the life sciences, spurring the creation of, among other things, the European Molecular Biology Organization (1964) and the European Molecular Biology Laboratory (1974). In addition, international mapping and sequencing projects were developed around model organisms such as Drosophila and Caenorhabditis elegans, and scientists formed research networks, exchanged research materials and information, and divided labour across laboratories. These new ways of working set the stage for the HGP, which is widely acknowledged as the cornerstone of the current ‘post-genomics era’. As an editorial on ‘post-genomics cultures’ put it in the journal Nature, “Like it or not, big biology is here to stay” (Anon, 2001).

Just as big science is not new, neither are concerns about its consequences. As early as 1948, the sociologist Max Weber worried that as equipment was becoming more expensive, scientists were losing autonomy and becoming more dependent on external funding (Weber, 1948). Similarly, although Weinberg and De Solla Price expressed wonder at the scope of the changes they were witnessing, they too offered critical evaluations.
For Weinberg, the potentially negative consequences associated with big science were “adminstratitis, moneyitis, and journalitis”; meaning the dominance of science administrators over practitioners, the tendency to view funding increases as a panacea for solving scientific problems, and progressively blurry lines between scientific and popular writing in order to woo public support for big research projects (Weinberg, 1961). De Solla Price worried that the bureaucracy associated with big science would fail to entice the intellectual mavericks on which science depends (De Solla Price, 1963). These concerns remain valid and have been voiced time and again.

As big science represents a major investment of time, money and manpower, it tends to determine and channel research in particular directions that afford certain possibilities and preclude others (Cook & Brown, 1999). In the worst case, this can result in entire scientific communities following false leads, as was the case in the 1940s and 1950s for Soviet agronomy. Huge investments were made to demonstrate the superiority of Lamarckian over Mendelian theories of heritability, which held back Russian biology for decades (Soyfer, 1994). Such worst-case scenarios are, however, rare. A more likely consequence is that big science can diminish the diversity of research approaches. For instance, plasma fusion scientists are now under pressure to design projects that are relevant to the large-scale International Thermonuclear Experimental Reactor, despite the potential benefits of a wide array of smaller-scale machines and approaches (Hackett et al, 2004).
Big science projects can also involve coordination challenges, take substantial time to realize success, and be difficult to evaluate (Neal et al, 2008).

Another danger of big science is that researchers will lose the intrinsic satisfaction that arises from having personal control over their work. Dissatisfaction could lower research productivity (Babu & Singh, 1998) and might create the concomitant danger of losing talented young researchers to other, more engaging callings. Moreover, the alienation of scientists from their work as a result of big science enterprises can lead to a loss of personal responsibility for research. In turn, this can increase the likelihood of misconduct, as effective social control is eroded and “the satisfactions of science are overshadowed by organizational demands, economic calculations, and career strategies” (Hackett, 1994).

Practicing scientists are aware of these risks. Yet they remain engaged in large-scale projects because they must, but also because of the real benefits these projects offer. Importantly, big science projects allow for the coordination and activation of diverse forms of expertise across disciplinary, national and professional boundaries to solve otherwise intractable basic and applied problems. Although calling for international and interdisciplinary collaboration is popular, practicing it is notably less popular and much harder (Weingart, 2000). Big science projects can act as a focal point that allows researchers from diverse backgrounds to cooperate, and they simultaneously advance different scientific specialties while forging interstitial connections among them.
Another major benefit of big science is that it facilitates the development of common research standards and metrics, allowing for the rapid development of nascent research frontiers (Fujimura, 1996). Furthermore, the high profile of big science efforts such as the HGP and CERN draws public attention to science, potentially enhancing scientific literacy and the public’s willingness to support research.

Big science can also ease some of the problems associated with scientific management. In terms of training, graduate students and junior researchers involved in big science projects can gain additional skills in problem-solving, communication and team working (Court & Morris, 1994). The bureaucratic structure and well-defined roles of big science projects also make leadership transitions and researcher attrition easier to manage compared with the informal, refractory organization of most small research projects. Big science projects also provide a visible platform for resource acquisition and the recruitment of new scientific talent. Moreover, through their sheer size, diversity and complexity, they can increase the frequency of serendipitous social interactions and scientific discoveries (Hackett et al, 2008). Finally, large-scale research projects can influence scientific and public policy. Big science creates organizational structures in which many scientists share responsibility for, and expectations of, a scientific problem (Van Lente, 1993). This shared ownership and these shared futures help coordinate communication and enable researchers to present a united front when advancing the potential benefits of their projects to funding bodies.

Given these benefits and pitfalls of big science, how might molecular biology best proceed?
Petsko's response is that, “[s]cientific priorities must, for the most part, be set by the free exchange of ideas in the scientific literature, at meetings and in review panels. They must be set from the bottom up, from the community of scientists, not by the people who control the purse strings.” It is certainly the case, as Petsko also acknowledges, that science has benefited from a combination of generous public support and professional autonomy. However, we are less sanguine about his belief that the scientific community alone has the capacity to ascertain the practical value of particular lines of inquiry, determine the most appropriate scale of research, and bring them to fruition. In fact, current mismatches between the production of scientific knowledge and the information needs of public policy-makers strongly suggest that the opposite is true (Sarewitz & Pielke, 2007).

Instead, we maintain that these types of decision should be determined through collective decision-making that involves researchers, governmental funding agencies, science policy experts and the public. In fact, the highly successful HGP involved such collaborations (Lambright, 2002). Taking into account the opinions and attitudes of these stakeholders better links knowledge production to the public good (Cash et al, 2003), a major justification for supporting big biology. We do agree with Petsko, however, that large-scale projects can develop pathological characteristics, and that all programmes should therefore undergo regular assessments to determine their continuing worth.

Rather than arguing for or against big science, molecular biology would best benefit from strategic investments in a diverse portfolio of big, little and 'mezzo' research projects. Their size, duration and organizational structure should be determined by the research question, subject matter and intended goals (Westfall, 2003).
Parties involved in making these decisions should, in turn, aim at striking a profitable balance between differently sized research projects to garner the benefits of each, and allow practitioners the autonomy to choose among them. This will require new, innovative methods for supporting and coordinating research.

An important first step is ensuring that funding is made available for all kinds of research at a range of scales. For this to happen, the current funding model needs to be modified. The practice of allocating separate funds for individual investigator-driven and collective research projects is a positive step in the right direction, but it does not discriminate between projects of different sizes at a sufficiently fine resolution. Instead, multiple funding pools should be made available for projects of different sizes and scales, allowing for greater accuracy in project planning, funding and evaluation.

Second, science policy should consciously facilitate the 'scaling up', 'scaling down' and concatenation of research projects when needed. For instance, special funds might be established for supporting small-scale but potentially transformative research with the capacity to be scaled up in the future. Alternatively, small-scale satellite research projects that are more nimble, exploratory and risky could complement big science initiatives or be generated by them. This is also in line with Petsko's statement that “the best kind of big science is the kind that supports and generates lots of good little science.” Another potentially fruitful strategy we suggest is to fund independent, small-scale research projects to work on co-relevant research, with the later objective of consolidating them into a single project in a kind of building-block assembly.
Using these and other mechanisms to organize research at different scales could help to ameliorate some of the problems associated with big science, while also accruing its most important benefits.

Within the life sciences, the field of ecology perhaps best exemplifies this strategy. Although it encompasses many small-scale laboratory and field studies, ecologists now collaborate in a variety of novel organizations that blend elements of big, little and mezzo science and that are designed to catalyse different forms of research. For example, the US National Center for Ecological Analysis and Synthesis brings together researchers and data from many smaller projects to synthesize their findings. The Long Term Ecological Research Network consists of dozens of mezzo-scale collaborations focused on specific sites, but also leverages big science through cross-site collaborations. While investments are made in classical big science projects, such as the National Ecological Observatory Network, no one project or approach has dominated—nor should it. In these ways, ecologists have been able to reap the benefits of big science whilst maintaining diverse research approaches, preserving individual autonomy and still enjoying the intrinsic satisfaction associated with scientific work.

Big biology is here to stay and is neither a curse nor a blessing. It is up to scientists and policy-makers to discern how to benefit from the advantages that 'bigness' has to offer, while avoiding the pitfalls inherent in so doing. The challenge confronting molecular biology in the coming years is to decide which kinds of research project are best suited to getting the job done.
Molecular biology itself arose, in part, from the migration of physicists to biology; as physics research projects and collaborations grew and became more dependent on expensive equipment, appreciating the saliency of one's own work became increasingly difficult, which led some to seek refuge in the comparatively little science of biology (Dev, 1990). The current situation, which Petsko criticizes in his Opinion article, is thus the result of an organizational and intellectual cycle that began more than six decades ago. It would certainly behoove molecular biologists to heed his warnings and consider the best paths forward.

Niki Vermeulen, John N. Parker, Bart Penders

19.
Hesketh T  Min JM 《EMBO reports》2012,13(6):487-492
The use of reproductive technology to service a preference for male offspring has created an artificial gender imbalance, notably in Asian countries. The social effects of this large surplus of young men are not yet clear, but concerted action might be necessary to address the problem.

One of the problems of sexual reproduction, especially in predominantly monogamous species that pair 'for life', is to ensure a balance between the birth rate of males and females. In humans, this balance has been remarkably even, but the past few decades have seen a substantial shift towards men, notably in some Asian countries. The reason, however, is not biological; there has simply been a cultural preference for sons in the affected societies, which, together with the recent availability of prenatal sex-selection technologies, has led to widespread female feticide. The result has been a huge excess of males in several countries. Whilst it is not yet fully clear how a surplus of millions of men will affect these societies—perhaps even leading to civil unrest—some countries have already taken steps to alleviate the problem by addressing the underlying cultural factors. However, the problem is about to come to a crisis point, as a large surplus of men reaches reproductive age. It will take many decades to reach a balanced representation of both sexes again.

The sex ratio at birth (SRB) is defined as the number of boys born to every 100 girls. It is remarkably consistent in human populations, with around 103–107 male babies for every 100 female ones. John Arbuthnot first documented this slight excess of male births in 1710 for the population of London, and many studies have since confirmed his finding [1].
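As a quick illustration, the SRB arithmetic described above can be sketched in a few lines of Python. The birth counts here are made up for illustration only, not real census figures:

```python
def sex_ratio_at_birth(male_births: int, female_births: int) -> float:
    """Sex ratio at birth (SRB): male births per 100 female births."""
    return 100.0 * male_births / female_births

# Hypothetical birth counts, chosen only to illustrate the two regimes
# discussed in the text: the normal 103-107 band versus a distorted ratio.
normal = sex_ratio_at_birth(105_000, 100_000)
distorted = sex_ratio_at_birth(120_000, 100_000)
print(normal, distorted)  # 105.0 120.0
```

A ratio around 105 is what biology delivers; ratios near 120 are the culturally driven distortions the article goes on to document.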
Higher mortality from disease, compounded by the male tendency towards risky behaviours and violence, means that the initial surplus of boys decreases to roughly equal numbers of males and females during the all-important reproductive years in most populations.

Researchers have studied a large number of demographic and environmental factors that could affect the SRB, including family size, parental age and occupation, birth order, race, coital rate, hormonal treatments, environmental toxins, several diseases and, perhaps most intriguingly, war [2,3,4]. It is well documented that wars are associated with a small increase in the sex ratio. This phenomenon occurs both during the war and for a short period afterwards. The best examples of this were reported for the First and Second World Wars in both the USA and Europe, and for the Korean and Vietnam Wars in the USA [5,6]. However, these findings were not reproduced in the more recent Balkan Wars and the Iran–Iraq war [7]. There have been several biological explanations for these increases. It has been proposed, for example, that the stress of war adversely affects the viability of X-bearing sperm. Alternatively, a higher frequency of intercourse after prolonged separation during times of war is thought to lead to conception earlier in the menstrual cycle, which has been shown to result in more males [4,8]. There have also been evolutionary explanations, such as the loss of large numbers of men in war leading to an adaptive correction of the sex ratio [4,9]. Nonetheless, the real causes of the altered SRB during war remain elusive: all of the discussed biological and social factors have been shown to cause only marginal deviations from the normal sex ratio.

Whilst war has only slightly shifted the SRB towards more male babies, and only for a limited time period, cultural factors, chiefly a strong preference for sons, have been causing large distortions of gender balance during the past decades.
Son preference is most prevalent in a band of countries from East Asia through South Asia and the Middle East to North Africa [9]. For centuries, sons have been regarded as more valuable because males can earn higher wages, especially in agrarian economies; they generally continue the family line, are the recipients of inheritance and are responsible for their parents in illness and old age. By contrast, daughters often become members of the husband's family after marriage, no longer having responsibility for their biological parents [10]. There are also location-specific reasons for son preference: in India, the expense of the dowry; in South Korea and China, deep-rooted Confucian values and patriarchal family systems [11].

Until recently, son preference was manifest post-natally through female infanticide, abandonment of newborn girls, and poorer nutrition and neglect of health care, all causing higher female mortality [12]. Studies have shown that unequal access to health care is the most important factor in differential gender mortality [13,14], especially in countries where health care costs are borne by the family [15]. As early as 1990, the Indian economist Amartya Sen estimated that differential female mortality had resulted in around 100 million missing females across the developing world, with the overwhelming majority of these in China, India, Pakistan and Bangladesh [16].

Science & Society Series on Sex and Science

Sex is the greatest invention of all time: not only has sexual reproduction facilitated the evolution of higher life forms, it has had a profound influence on human history, culture and society. This series explores our attempts to understand the influence of sex in the natural world, and the biological, medical and cultural aspects of sexual reproduction, gender and sexual pleasure.

To make matters worse, during the 1980s diagnostic ultrasound technology became available in many Asian countries, and the opportunity to use the new technology for prenatal sex selection was soon exploited. Indeed, the highest SRBs are seen in countries with a combination of son preference, easy access to sex-selection technologies and abortion, and a small-family culture. The latter is important because where larger families are the norm, couples will continue to have children until they have a boy. If a couple plan, or are legally restricted (as in China) to have only one or two children, they will use sex selection to ensure the birth of a son [17]. This combination has resulted in serious and unprecedented sex ratio imbalances that are now affecting the reproductive age groups in several countries, most notably China, South Korea and parts of India.

South Korea was the first country to report a very high SRB, because the widespread uptake of sex-selection technology preceded that in other Asian countries. The sex ratios started to rise in the mid-1980s in cities; ultrasound was already widely available even in rural areas by 1990 [17]. By 1992, the SRB was reported to be as high as 125 in some cities.

China soon followed. Here, the situation was further complicated by the one-child policy introduced in 1979.
This has undoubtedly contributed to the steady increase in the reported SRB, from 106 in 1979 to 111 in 1990, 117 in 2001, 121 in 2005 and as high as 130 in some rural counties [18]. The latest figures, for 2010, report an SRB of 118 [19] (National Bureau of Statistics of China, 2011), the first drop in three decades, suggesting an incipient downturn. However, the number of excess males in the reproductive age group will continue to increase for at least another two decades. Because of China's huge population, these ratios translate into massive numbers: in 2005, an estimated 1.1 million excess males were born across the country, and the number of males under the age of 20 might exceed females by around 30 million [18].

These overall figures conceal wide variations across the country (Fig 1): the SRB is higher than 130 in a strip of heavily populated provinces from Henan in the north to Hainan in the south, but close to normal in the large, sparsely populated provinces of Xinjiang, Inner Mongolia and Tibet. Some are sceptical about these high SRB figures or have suggested that, under the constraints of the one-child policy, parents might fail to register a newborn girl so that they can go on to have a boy [20]. However, recent evidence shows that such under-registration explains only a small proportion of missing females, and that sex-selective abortion undoubtedly accounts for the overwhelming majority [18].

Figure 1. Sex ratio at birth for China's provinces in 2005.

There are marked regional differences in SRB in India. Because incomplete birth registration makes the SRB difficult to calculate accurately, the closely related ratio of boys to girls under the age of six is used; it shows distinct regional differences across the country, with much higher levels in the north and west. According to the most recent census, in 2010, the SRB for the whole country was 109, a marginal increase on the previous census in 2001, which showed an SRB of 108.
These national figures, however, hide wide differences, from a low SRB of 98 in the state of Kerala to 119 in Haryana state. The highest SRBs at the district level for the whole of India are in two districts of Haryana state, where the SRBs are both 129 [21]. The Indian figures contrast with the Chinese in two ways: nowhere in China is the sex ratio low, and in India the sex ratio is higher in rural than urban areas, whereas the reverse is true for China [22].

A consistent pattern in all three countries is a clear trend across birth order (that is, first, second and subsequent children) and the sex of the preceding child. This is driven by the persistence of the 'at least one boy' imperative in these cultures. Where high fertility is the norm, couples will continue to reproduce until they have a boy. Where couples aim to restrict their family size, they might be content if the first child is a girl, but will often use sex selection to ensure a boy in the second pregnancy. This was shown in a large Indian study: the SRB was 132 for second births with a preceding girl, and 139 for third births with two previous girls. By contrast, the sex ratios were normal when the first-born was a boy [23].

The sex ratio by birth order is particularly interesting in China (Table 1) [18].

Table 1. Sex ratio at birth in China, 2005, by birth order and residence

        Total   First order   Second order   Third order
Total   120     108           143            157
Urban   115     110           138            146
Rural   123     107           146            157
Adapted from Zhu et al, 2009 [18].

South Koreans are inclined to use sex selection even in their first pregnancy, as there is a traditional preference for the first-born to be a son. This tendency towards sex selection rises for third and fourth births, as parents try to ensure they produce a son. In the peak years of the early 1990s, when the overall SRB was 114, the sex ratio for fourth births was 229 [17].

Since prenatal sex determination only became accessible during the mid-1980s, and even later in rural areas, the large cohorts of surplus young men have only now started to reach reproductive age. The consequences of this male surplus in the all-important reproductive age group are therefore still speculative, and the existing literature about the consequences of distorted sex ratios is predominantly theoretical, with few hypothesis-testing investigations [24,25]. In addition, most research focuses on countries in which sex ratios differ only marginally from biological norms [26]; few researchers have systematically examined the massive sex ratio distortion in China and India. However, it is clear that large parts of China and India will have a 15–20% excess of young men during the next 20 years. These men will be unable to get married in societies in which marriage is regarded as virtually universal, and where social status depends, in large part, on being married and having children. An additional problem is the fact that most of these men will come from the lowest echelons of society: a shortage of women in the marriage market enables women to 'marry up', inevitably leaving the least desirable men with no marriage prospects [27].
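The 15–20% figure follows from simple arithmetic: if the SRB is S and survival to reproductive age is roughly equal for both sexes, then (S − 100)/S of each male birth cohort cannot find a same-cohort female partner. A minimal sketch of that calculation follows; the equal-survival and within-cohort-pairing assumptions are simplifications for illustration, not claims from the article:

```python
def excess_male_share(srb: float) -> float:
    """Fraction of a male birth cohort left without a same-cohort female
    partner, assuming equal survival to reproductive age for both sexes."""
    return (srb - 100.0) / srb

# SRBs in the range reported above for China and high-SRB Indian states.
for srb in (118, 121, 125):
    print(srb, f"{excess_male_share(srb):.1%}")
# prints 15.3%, 17.4% and 20.0% respectively: squarely in the 15-20% range.
```

Real marriage markets are more forgiving than this (men can marry across cohorts), which is why the article treats the figure as an upper-bound concern rather than a prediction.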
As a result, most of these unmarriageable men are poor, uneducated peasants.

One hypothesis assumes that not being able to meet the traditional expectations of marriage and childbearing will cause low self-esteem and increased susceptibility to psychological difficulties, including suicidal tendencies [28]. A recent study using in-depth interviews with older unmarried men in Guizhou province, in south-west China, found that most of these men have low self-esteem, with many describing themselves as depressed, unhappy and hopeless [29].

The combination of psychological vulnerability and sexual frustration might lead to aggression and violence. There is empirical support for this prediction: gender is a well-established individual-level correlate of crime, especially violent crime [30,31]. A consistent finding across cultures is that most crime is perpetrated by young, single males of low socioeconomic status [32]. A particularly intriguing study carried out in India in the early 1980s showed that the sex ratio at the state level correlated strongly with homicide rates, and the relationship persisted after controlling for confounders such as urbanization and poverty [33]. The authors had expected to find that the high sex ratio would lead to increased violence against women, but their conclusion was that high sex ratios are a cause of violence of all types in society.

However, no other study has found similar results. The study mentioned above from rural Guizhou, for example, could find no evidence that unmarried men were especially prone to violence and aggression. Rather, the men were characterized as shy and withdrawn rather than aggressive [29]. In addition, reports of crime and disorder are not higher in areas with a known excess of young, single men.
This might be because there is not yet a large enough critical mass of unmarriageable men to have an impact, or because assumptions about male aggression do not apply in this context.

In China and parts of India, the sheer numbers of single men have raised other concerns. Because these men might lack a stake in the existing social order, it is feared that they will bind together in an outcast culture, turning to antisocial behaviour and organized crime [34], thereby threatening societal stability and security [35]. Some theorize that this could lead to intergroup conflict and even civil war [32]; other authors go further, predicting that such men will be attracted to military-type organizations, potentially triggering large-scale domestic and international conflicts [36]. However, there is no evidence yet to support these scenarios. Crime rates are relatively low in India and China compared with other countries [37]. Such outcomes are probably multifactorial in their causes, and the role of sex imbalance is therefore difficult to determine.

An excess of men, however, should be beneficial for women, especially in those Asian societies in which women have traditionally low social status. In fact, much of the literature on sex ratios has focused on women's status and role in society, and on mating strategies; but again, the literature has come from scenarios in which the sex ratio is only marginally distorted [38,39]. It is intuitive to see that women are a valuable commodity when sex ratios are high [40,41]. Because women generally prefer long-term monogamous relationships [42], it is predicted that monogamy will be more prevalent in high sex ratio societies, with less premarital and extramarital sex [43], lower divorce rates [38,24] and less illegitimacy [31].
In India and China, tradition militates against some of these eventualities; for example, divorce and illegitimacy are rare in both countries, owing to the traditional values of these societies. But other effects can be explored. If women are more highly valued, it is predicted that they will have higher self-esteem, resulting in lower rates of depression and suicide [24]. In China, where suicide rates among rural women have been among the highest in the world [28], women now show improved self-esteem and self-efficacy: 47% of university graduates are female, and women account for 48% of the labour force [19].

However, this increase in the value of women could also have paradoxically adverse effects on women, especially in rural societies. Benefits might accrue to the men, such as fathers, husbands, traffickers and pimps, who control many female lives [35]. Increases in prostitution, kidnapping and trafficking of women in China have already been attributed to high sex ratios [44]. Hudson and Den Boer [36] cite the increase in kidnapping and trafficking of women, which has been reported from many parts of Asia, and the recent large increases in dowry prices in parts of India.

Despite the negative and potentially damaging culturally driven use of prenatal sex selection, there might be some positive aspects of easy access to this technology. First, access to prenatal sex determination probably increases the proportion of wanted births, leading to less discrimination against girls and lower postnatal female mortality. India, South Korea and China have all reported reductions in differential mortality [45]. Second, it has been argued that an imbalance in the sex ratio could be a means to reduce population growth [46]. Third, the improved status of women should result in reduced son preference, with fewer sex-selective abortions and an ultimate rebalancing of the sex ratio [4].

Other consequences of an excess of men have been described, but the evidence for causation is limited.
Much has been made of the impact on the sex industry. It is assumed that the sexual needs of large numbers of single men will lead to an expansion of the sex industry, including the more unacceptable practices of coercion and trafficking. During the past 20 years the sex industry has indeed expanded in both India and China [47,48], but the role that the high sex ratio has played is impossible to isolate. The marked rise in the number of sex workers in China, albeit from a low baseline, has been attributed more to a relaxation in sexual attitudes, increased inequality and much greater mobility in the country than to an increase in the sex ratio. For example, the sex ratio is close to normal in the border areas of Yunnan Province, which are known to have the highest numbers of sex workers [49].

Similarly, it is impossible to say whether gender imbalance is a contributory factor to the reported, largely anecdotal, increases in trafficking for the sex industry and for marriage. Most unmarried men in China and India are in the poorest echelons of society, and thus unable to buy a bride. In addition, trafficking is probably far more common in parts of Eastern Europe and Africa, where the sex ratio is normal [50]. Several commentators have suggested that an excess of men might encourage an increase in homosexual behaviour [17]. This is clearly highly contentious, and raises questions about the aetiology of sexual orientation. However, if it leads to increased tolerance of homosexuality in societies where homophobia is still highly prevalent, it is perhaps a positive consequence of the high sex ratio.

There is clear concern at the governmental level about high sex ratios in the affected countries. In 2004, China set a target, clearly risible with hindsight, of lowering the SRB to normal levels by 2010 [51]. The Chinese government has also recently expressed concerns about the potential consequences of excess men for societal stability and security [52].
In the short term, little can be done to address the problem. There have been some extreme suggestions, for example recruiting men into the armed forces and posting them to remote areas [35], but such suggestions are clearly neither feasible nor realistic.

However, much can be done to reduce sex selection, which would have clear benefits for the next generation. There are two obvious policy approaches: to outlaw sex selection, and to address the underlying problem of son preference. In China and India, laws forbidding infanticide and sex selection exist. It is therefore perplexing that sex-selective abortion is carried out, often quite openly, by medical personnel in clinics and hospitals that are often state-run, and not in back-street establishments [20]. Enforcement of the law should therefore be straightforward, as the lessons from South Korea demonstrate. In the late 1980s, alarming rises in the SRB, caused by easy access to sex-selective abortion, prompted the government to act decisively. Eight physicians in Seoul who had performed sex determination had their licences suspended in 1991, leading to a fall in the SRB from 117 to 113 in the following year. Following this success, laws forbidding sex selection were enforced across the country. This was combined with a widespread and influential public awareness campaign warning of the dangers of distorted sex ratios, focusing especially on the shortage of brides. The result was a gradual decline in the SRB from 116 in 1998 to 110 in 2009 [11].

The lessons are clear. The fact that in China and India sex-selective abortion is still carried out with impunity—by licensed medical personnel, and not even in backstreet establishments—makes the failure of the governments to enforce the law all the more obvious.
One of the problems is that although sex-selective abortion is illegal, abortion itself is readily available, especially in China, and it is often difficult to prove that an abortion has been carried out to select the sex of the child rather than for family-planning reasons.

Successfully addressing the underlying issue of son preference is, of course, hugely challenging, and requires a multi-faceted approach. Evidence from areas outside Asia strongly supports the idea that a higher status for women leads to less traditional gender attitudes and lower levels of son preference [52]. Laws in China and India have made important moves towards gender equality in terms of social and economic rights. These measures, together with socio-economic improvements and modernization, have improved the status of women and are gradually influencing traditional gender attitudes [44].

The recognition that intense intervention would be necessary to change centuries-long traditions in China led to the Care for Girls campaign, instigated by the Chinese Population and Family Planning Commission in 2003. It is a comprehensive programme of measures, initially conducted in 24 counties across 24 provinces, which aims to improve perceptions of the value of girls and emphasizes the problems that young men face in finding brides. In addition, a pension has been provided for the parents of daughters in rural areas. The results have been encouraging: in 2007, a survey showed that the campaign had improved women's own perceived status, and that stated son preference had declined. In one of the participating counties in Shanxi Province, the SRB dropped from 135 in 2003 to 118 in 2006 [53].

Surveys of sex preference are encouraging.
In 2001, a Chinese national survey found that 37% of the female respondents—predominantly younger, urban women—claimed to have no gender preference for their offspring, 45% said the ideal family consisted of one boy and one girl, and the number expressing a preference for a girl was almost equal to the number who wanted a boy [54]. A study conducted ten years later in three Chinese provinces showed that around two-thirds of adults of reproductive age classify themselves as gender-indifferent; of the remainder, 20% said they would prefer to have a girl, with just 12% admitting to wanting a boy [52].

Other policy measures that could influence social attitudes include equal social and economic rights for males and females—for example, in relation to rights of inheritance—and free basic health care to remove the financial burden of seeking health care for daughters. Neither of these has yet been implemented. However, another suggestion, that special benefits be given to families with no sons to ensure protection in old age, has been introduced in some Chinese provinces.

Despite the grim outlook for the generation of males entering their reproductive years over the next two decades, the longer-term future is less bleak. The global SRB has probably already peaked. In South Korea, the sex ratio has already declined markedly, and China and India are both reporting incipient declines: in China the SRB for 2010 was reported as 118, down from the peak of 121 in 2005, and, importantly, 14 provinces with high sex ratios are beginning to show a downward trend [19]. India is now reported to have an SRB of around 109, down from a peak of around 111 in 2005 [21]. Whilst these incipient declines in SRB and the changing attitudes towards the imperative to have sons are encouraging, they will not start to filter through to the reproductive age group for another two decades.
In China and India the highest sex ratio cohorts have yet to reach reproductive age, so the situation will get worse before it gets better. Normal sex ratios will not be seen for several decades.

Therese Hesketh, Jiang Min Min
