Similar Documents
1.
2.
Kuzma J, Kokotovich A (2011) EMBO reports 12(9): 883–888
Targeted genetic modification, which enables scientists to genetically engineer plants more efficiently and precisely, challenges current process-based regulatory frameworks for genetically modified crops.

In 2010, more than 85% of the corn acreage and more than 90% of the soybean acreage in the USA was planted with genetically modified (GM) crops (USDA, 2010). Most of those crops contained transgenes from other species, such as bacteria, that confer resistance to herbicides or tolerance to insect pests, and that were introduced into plant cells using Agrobacterium or other delivery methods. The resulting ‘transformed’ cells were regenerated into GM plants that were tested for the appropriate expression of the transgenes, as well as for whether the crop posed an unacceptable environmental or health risk, before being approved for commercial use. The scientific advances that enabled the generation of these GM plants took place in the early 1980s and have changed agriculture irrevocably, as evidenced by the widespread adoption of GM technology. They have also triggered intense debates about the potential risks of GM crops for human health and the environment, and about the new forms of regulation needed to mitigate those risks. There is also continued public resistance to GM crops, particularly in Europe.

Plant genetic engineering is at a technological inflection point. New technologies enable more precise and subtler modification of plant genomes (Weinthal et al, 2010) than the comparably crude methods that were used to create the current stock of GM crops (Fig 1A). These methods allow scientists to insert foreign DNA into the plant genome at precise locations, remove unwanted DNA sequences or introduce subtle modifications, such as single-base substitutions that alter the activity of individual genes. They also raise serious questions about the regulation of GM crops: how do these methods differ from existing techniques and how will the resulting products be regulated? Owing to the specificity of these methods, will resulting products fall outside existing definitions of GM crops and, as a result, be regulated similarly to conventional crops? How will the definition and regulation of GM crops be renegotiated and delineated in light of these new methods?

Figure 1. Comparing traditional transgenesis, targeted transgenesis, targeted mutagenesis and gene replacement. (A) In traditional transgenesis, genes introduced into plant cells integrate at random chromosomal positions. This is illustrated here for a bacterial gene that confers herbicide resistance (Herbr). The plant encodes a gene for the same enzyme; however, due to DNA-sequence differences between the bacterial and plant forms of the gene, the plant gene does not confer herbicide resistance (Herbs). (B) The bacterial herbicide-resistance gene can be targeted to a specific chromosomal location through the use of engineered nucleases. The nucleases recognize a specific DNA sequence and create a chromosome break. The bacterial gene is flanked by sequences homologous to the target site and recombines with the plant chromosome at the break site, resulting in a targeted insertion. (C) Engineered nucleases can be used to create targeted gene knockouts. In this illustration, a second nuclease recognizes the coding sequence of the Herbs gene. Cleavage and repair in the absence of a homologous template creates a mutation (orange).
(D) A homologous DNA donor can be used to repair targeted breaks in the Herbs gene. This allows sequence changes that confer herbicide resistance to be introduced into the native plant gene. Only a single base change is needed in some instances.

Of the new wave of targeted genetic modification (TagMo) techniques, one of the most thoroughly developed uses engineered zinc-finger nucleases or meganucleases to create DNA double-stranded breaks at specific genomic locations (Townsend et al, 2009; Shukla et al, 2009; Gao et al, 2010). This activates DNA repair mechanisms, which genetic engineers can use to alter the target gene. If, for instance, a DNA fragment is provided that has sequence similarity with the site at which the chromosome is broken, the repair mechanism will use this fragment as a template for repair through homologous recombination (Fig 1B). In this way, any DNA sequence, for instance a bacterial gene that confers herbicide resistance, can be inserted at the site of the chromosome break. TagMos can also be used without a repair template to make single-nucleotide changes. In this case, the broken chromosomes are rejoined imprecisely, creating small insertions or deletions at the break site (Fig 1C) that can alter or knock out gene function.

The greatest potential of TagMo technology is in its ability to modify native plant genes in directed and targeted ways. For example, the most widely used herbicide-resistance gene in GM crops comes from bacteria. Plants encode the same enzyme, but it does not confer herbicide resistance because the DNA sequence is different. Yet, resistant forms of the plant gene have been identified that differ from native genes by only a few nucleotides. TagMo could therefore be used to transfer these genes from a related species into a crop to replace the existing genes (Fig 1D) or to exchange specific nucleotides until the desired effect is achieved. In either case, the genetic modification would not necessarily involve transfer of DNA from another species. TagMo technology would, therefore, challenge regulatory policies both in the USA and, even more so, in the European Union (EU). TagMo enables more sophisticated modifications of plant genomes that, in some cases, could be achieved by classical breeding or mutagenesis, which are not formally regulated. On the other hand, TagMo might also be used to introduce foreign genes without using traditional recombinant DNA techniques. As a result, TagMo might fall outside of existing US and EU regulatory definitions and scrutiny.

In the USA, federal policies to regulate GM crops could provide a framework in which to consider the way TagMo-derived crops might be regulated (Fig 2; Kuzma & Meghani, 2009; Kuzma et al, 2009; Thompson, 2007; McHughen & Smyth, 2008). In 1986, the Office of Science and Technology Policy established the Coordinated Framework for the Regulation of Biotechnology (CFRB) to oversee the environmental release of GM crops and their products (Office of Science and Technology Policy, 1986). The CFRB involves many federal agencies and is still in operation today (Kuzma et al, 2009).
It was predicated on the views that regulation should be based on science and that the risks posed by GM crops were the "same in kind" as those of non-GM products; therefore no new laws were deemed to be required (National Research Council, 2000).

Figure 2. Brief history of the regulation of genetic engineering (Kuzma et al, 2009). EPA, Environmental Protection Agency; FIFRA, Federal Insecticide, Fungicide and Rodenticide Act; FDA, Food and Drug Administration; FPPA, Farmland Protection Policy Act; GMO, genetically modified organism; TOSCA, Toxic Substances Control Act; USDA, United States Department of Agriculture.

Various old and existing statutes were interpreted somewhat loosely in order to oversee the regulation of GM plants. Depending on the nature of the product, one or several federal agencies might be responsible. GM plants can be regulated by the US Department of Agriculture (USDA) under the Federal Plant Pest Act as ‘plant pests’ if there is a perceived threat of them becoming ‘pests’ (similarly to weeds). Alternatively, if they are pest-resistant, they can be interpreted as ‘plant pesticides’ by the US Environmental Protection Agency (EPA) under the Federal Insecticide, Fungicide, and Rodenticide Act. Each statute requires some kind of pre-market or pre-release biosafety review—evaluation of potential impacts on other organisms in the environment, gene flow between the GM plant and wild relatives, and potential adverse effects on ecosystems. By contrast, the US Food and Drug Administration (FDA) treats GM food crops as equivalent to conventional food products; as such, no special regulations were promulgated under the Federal Food, Drug and Cosmetic Act for GM foods. The agency established a pre-market consultation process for GM and other novel foods that is entirely voluntary.

Finally, and important for our discussion, the US oversight system was built mostly around the idea that GM plants should be regulated on the basis of characteristics of the end-product and not on the process that is used to create them. In reality, however, the process used to create crops is significant, as highlighted by the fact that the USDA uses a process-based regulatory trigger (McHughen & Smyth, 2008). Far from being inconsequential, whether a plant is considered to be a result of genetic modification matters for oversight.

How will crops created by TagMo fit into this regulatory framework? If only subtle changes were made to individual genes, the argument could be made that the products are analogous to mutated conventional crops, which are neither regulated nor subject to pre-market or pre-release biosafety assessments (Breyer et al, 2009). However, single mutations are not without risks; for example, they can lead to an increase in expressed plant toxins (National Research Council, 1989, 2000, 2002, 2004; Magana-Gomez & de la Barca, 2009). Conversely, if new or foreign genes are introduced through TagMo methods, the resulting plants might not differ substantially from existing GM crops.
Thus, TagMo-derived crops come in several categories relevant to regulation: TagMo-derived crops with inserted foreign DNA from sexually compatible or incompatible species; TagMo-derived crops with no DNA inserted, for instance those in which parts of the chromosome have been deleted or genes inactivated; and TagMo-derived crops in which a gene is either replaced with a modified version or has its nucleotide sequence changed (Fig 1).

TagMo-derived crops with foreign genetic material inserted are most similar to traditional GM crops, according to the USDA rule on "Importation, Interstate Movement, and Release Into the Environment of Certain Genetically Engineered Organisms", which defines genetic engineering as "the genetic modification of organisms by recombinant DNA (rDNA) techniques" (USDA, 1997). In contrast to conventional transgenesis, TagMo enables scientists to predefine the sites into which foreign genes are inserted. If the site of foreign DNA insertion has been previously characterized and shown to have no negative consequences for the plant or its products, then perhaps regulatory requirements to characterize the insertion site and its effects on the plant could be streamlined.

TagMo might be used to introduce foreign DNA from sexually compatible or incompatible species into a host organism, either by insertion or replacement. For example, foreign DNA from one species of Brassica—the mustard family—can be introduced into another species of Brassica. Alternatively, TagMo might be used to introduce foreign DNA from any organism into the host, such as from bacteria or animals into plants. Arguments have been put forth advocating less stringent regulation of GM crops with cisgenic DNA sequences that come from sexually compatible species (Schouten et al, 2006). Russell and Sparrow (2008) critically evaluate these arguments and conclude that cisgenic GM crops may still have novel traits in novel settings and thus give rise to novel hazards. Furthermore, if cisgenics are not regulated, it might trigger a public backlash, which could be more costly in the long run (Russell & Sparrow, 2008). TagMo-derived crops with genetic sequences from sexually compatible species should therefore still be considered for regulation. Additional clarity and consistency are needed with respect to how cisgenics are defined in US regulatory policy, regardless of whether they are generated by established methods or by TagMo. The USDA regulatory definition of a GM crop is vague, and the EPA has a broad categorical exemption in its rules for GM crops with sequences from sexually compatible species (EPA, 2001).

The deletion of DNA sequences by TagMo to knock out a target gene is potentially of great agronomic value, as it could remove undesirable traits. For instance, it could eliminate anti-nutrients such as trypsin inhibitors in soybean that prevent the use of soy proteins by animals, or compounds that limit the value of a crop as an industrial material, such as ricin, which contaminates castor oil. Many mutagenesis methods yield products similar to those of TagMo. However, most conventional mutagenesis methods, including DNA alkylating agents or radioactivity, provide no precision in terms of the DNA sequences modified, and probably cause considerable collateral damage to the genome.
It could be argued that TagMo is less likely to cause unpredicted genomic changes; however, additional research is required to better understand off-target effects—that is, unintended modification of other sites—by various TagMo platforms.

Generating targeted gene knockouts (Fig 1C) does not directly involve transfer of foreign DNA, and such plants might seem to warrant an unregulated status. However, most TagMos use reagents such as engineered nucleases, which are created by rDNA methods. The resulting product might therefore be classified as a GM crop under the existing USDA definition for genetic engineering (USDA, 1997), since most TagMos are created by introducing a target-specific nuclease gene into plant cells. It is also possible to deliver rDNA-derived nucleases to cells as RNA or protein, and so foreign DNA would not need to be introduced into plants to achieve the desired mutagenic outcome. In such cases, the rDNA molecule itself never encounters a plant cell. More direction is required from regulatory agencies to stipulate the way in which rDNA can be used in the process of generating crops before the regulated status is triggered.

TagMo-derived crops that introduce alien transgenes or knock out native genes are similar to traditional GM crops or conventionally mutagenized plants, respectively, but TagMo crops that alter the DNA sequence of the target gene (Fig 1D) are more difficult to classify. For example, a GM plant could have a single nucleotide change that distinguishes it from its parent and that confers a new trait such as herbicide resistance. If such a subtle genetic alteration were attained by traditional mutagenesis or by screening for natural variation, the resulting plants would not be regulated. As discussed above, if rDNA techniques are used to create the single-nucleotide TagMo, one could argue that it should be regulated. Regulation would then focus on the process rather than the product. If single-nucleotide changes were exempt, would there be a threshold in the number of bases that can be modified before concerns are raised or regulatory scrutiny is triggered? Or would there be a difference in regulation if the gene replacement involves a sexually compatible or an incompatible species?

Most of this discussion has focused on the use of engineered nucleases such as meganucleases or zinc-finger nucleases to create TagMos. Oligonucleotide-mediated mutagenesis (OMM), however, is also used to modify plant genes (Breyer et al, 2009). OMM uses chemically synthesized oligonucleotides that are homologous to the target gene, except for the nucleotides to be changed. Breyer et al (2009) argue that OMM "should not be considered as using recombinant nucleic acid molecules" and that "OMM should be considered as a form of mutagenesis, a technique which is excluded from the scope of the EU regulation." However, they admit that the resulting crops could be considered as GM organisms, according to EU regulatory definitions for biotechnology. They report that in the USA, OMM plants have been declared non-GM by the USDA, but it is unclear whether the non-GM distinction in the USA has regulatory implications.
OMM is already being used to develop crops with herbicide tolerance, and so regulatory guidelines need to be clarified before market release.

In turning to address how TagMo-related oversight should proceed, two questions are central: how are decisions made and who is involved in making them? The analysis above illustrates that many fundamental decisions need to be made concerning the way in which TagMo-derived products will be regulated and, more broadly, what constitutes a GM organism for regulatory purposes. These decisions are inherently values-based in that views on how to regulate TagMo products differ on the basis of understanding of and attitudes towards agriculture, risk, nature and technology. Neglecting the values-based assumptions underlying these decisions can lead to poor decision-making, through misunderstanding the issues at hand, and to public and stakeholder backlash resulting from disagreements over values.

Bozeman & Sarewitz (2005) consider this problem in a framework of ‘market failures’ and ‘public failures’. GM crops have exhibited both. Market failures are exemplified by the loss of trade with the EU owing to different regulatory standards and levels of caution (PIFB, 2006). Furthermore, there has been a decline in the number of GM crops approved for interstate movement in the USA since 2001. Public failures result from incongruence between actions by decision-makers and the values of the public. Public failures are exemplified by the anti-GM sentiment in the labelling of organic foods in the USA and court challenges to the biosafety review of GM crops by the USDA’s Animal and Plant Health Inspection Service (McHughen & Smyth, 2008). These lawsuits have delayed approval of genetically engineered alfalfa and sugar beet, thus blurring the distinction between public and market failures. Public failures will probably ensue if TagMo crops slip into the market under the radar without adequate oversight.

Anticipatory governance is a framework with principles that are well suited to guiding TagMo-related oversight and to helping to avoid public failures. It challenges an understanding of technology development that downplays the importance of societal factors—such as implications for stakeholders and the environment—and argues that societal factors should inform technology development and governance from the start (Macnaghten et al, 2005). Anticipatory governance uses three principles: foresight, integration of natural and social science research, and upstream public engagement (Karinen & Guston, 2010). The first two principles emphasize proactive engagement using interdisciplinary knowledge. Governance processes that use these principles include real-time technology assessment (Guston & Sarewitz, 2002) and upstream oversight assessment (Kuzma et al, 2008b). The third principle, upstream public engagement, involves stakeholders and the public in directing values-based assumptions within technology development and oversight (Wilsdon & Wills, 2004).
Justifications for upstream public engagement are substantive (stakeholders and the public can provide information that improves decisions), instrumental (including stakeholders and the public in the decision-making process leads to more trusted decisions) and normative (citizens have a right to influence decisions about issues that affect them).

TagMo crop developers seem to be arguing for a ‘process-based’ exclusion of TagMo crops from regulatory oversight, without public knowledge of their development or ongoing regulatory communication. We propose that the discussion about how to regulate TagMo crops should be open, use public engagement and respect several criteria of oversight (Kuzma et al, 2008a). These criteria should include not only biosafety, but also broader impacts on human and ecological health and well-being, distribution of health impacts, transparency, treatment of intellectual property and confidential business information, economic costs and benefits, as well as public confidence and values.

We also propose that the CFRB should be a starting point for TagMo oversight. The various categories of TagMo require an approach that can discern and address the risks associated with each application. The CFRB allows for such flexibility. At the same time, the CFRB should improve public engagement and transparency, post-market monitoring and some aspects of technical risk assessment.

As we have argued, TagMo is on the verge of being broadly implemented to create crop varieties with new traits, and this raises many oversight questions. First, the way in which TagMo technologies will be classified and handled within the US regulatory system has yet to be determined. As these decisions are complex, values-based and have far-reaching implications, they should be made in a transparent way that draws on insights from the natural and social sciences, and involves stakeholders and the public. Second, as products derived from TagMo technologies will soon reach the marketplace, it is important to begin predicting and addressing potential regulatory challenges, to ensure that oversight systems are in place. The possibility of public failures with TagMo crops highlights the benefits of an anticipatory governance-based approach, which would help to ensure that the technology meets societal needs.

So far, the EU has emphasized governance approaches and stakeholder involvement in the regulation of new technologies more than the USA. However, if the USA can agree on a regulatory system for TagMo crops that is the result of open and transparent discussions with the public and stakeholders, it could take the lead and act as a model for similar regulation in the EU and globally. Before this can happen, a shift in US approaches to regulatory policy would be needed.

Jennifer Kuzma & Adam Kokotovich

3.
The debate about GM crops in Europe holds valuable lessons about risk management and risk communication. These lessons will be helpful for the upcoming debate on GM animals.Biomedical research and biotechnology have grown enormously in the past decades, as nations have heavily invested time and money in these endeavours to reap the benefits of the so-called ‘bioeconomy''. Higher investments on research should increase knowledge, which is expected to translate into applied research and eventually give rise to new products and services that are of economic or social benefit. Many governments have developed ambitious strategies—both economic and political—to accelerate this process and fuel economic growth (http://www.oecd.org/futures/bioeconomy/2030). However, it turns out that social attitudes are a more important factor for translating scientific advances than previously realized; public resistance can effectively slow down or even halt technological progress, and some hoped-for developments have hit roadblocks. Addressing these difficulties has become a major challenge for policy-makers, who have to find the middle ground between promoting innovation and addressing ethical and cultural values.There are many examples of how scientific and technological advances raise broad societal concerns: research that uses human embryonic stem cells, nanotechnology, cloning and genetically modified (GM) organisms are perhaps the most contested ones. The prime example of a promising technology that has failed to reach its full potential owing to ethical, cultural and societal concerns is GM organisms (GMOs); specifically, GM crops. Intense lobbying and communication by ‘anti-GM'' groups, combined with poor public relations from industry and scientists, has turned consumers against GM crops and has largely hampered the application of this technology in most European countries. Despite this negative outcome, however, the decade-long debate has provided important lessons and insight for the management of other controversial technologies: in particular, the use of GM animals.During the early 1990s, ‘anti-GM'' non-governmental organizations (NGOs) and ‘pro-GM'' industry were the main culprits for the irreversible polarization of the GMO debate. Both groups lobbied policy-makers and politicians, but NGOs ultimately proved better at persuading the public, a crucial player in the debate. Nevertheless, the level of public outcry varied significantly, reaching its peak in the European Union (EU). In addition to the values of citizens and effective campaigning by NGOs, the structural organization of the EU had a crucial role in triggering the GMO crisis. Within the EU, the European Commission (EC) is an administrative body the decisions of which have a legal impact on the 27 Member States. The EC is well-aware of its unique position and has compensated its lack of democratic accountability by increasing transparency and making itself accessible to the third sector [1]. This strategy was an important factor in the GMO debate as the EC was willing to listen to the views of environmental groups and consumer organizations.…it turns out that social attitudes are a more important factor for translating scientific advances than previously realized…Environmental NGOs successfully exploited this gap between the European electorate and the EC, and assumed to speak as the vox populi in debates. At the same time, politicians in EU Member States were faced with aggressive anti-GMO campaigns and increasingly polarized debates. 
To avoid the lobbying pressure and alleviate public concerns, they chose to hide behind science: the result was a proliferation of ‘scientific committees’ charged with assessing the health and environmental risks of GM crops.

Scientists soon realized that their so-called ‘expert consultation’ was only a political smoke screen in most cases. Their reports and advice were used as arguments to justify policies—rather than tools for determining policy—that sometimes ignored the actual evidence and scientific results [2,3]. For example, in 2008, French President Nicolas Sarkozy announced that he would not authorize GM pest-resistant MON810 maize for cultivation in France if ‘the experts’ had any concerns over its safety. However, although the scientific committee appointed to assess MON810 concluded that the maize was safe for cultivation, the government’s version of the report eventually claimed that scientists had "serious doubts" about MON810 safety, which was then used as an argument to ban its cultivation. François Hollande’s government has adopted a similar strategy to maintain the ban on MON810 [4].

Such unilateral decisions by Member States challenged the EC’s authority to approve the cultivation of GM crops in the EU. After intense discussions, the EC and the Member States agreed on a centralized procedure for the approval of GMOs and the distribution of responsibilities for the three stages of the risk management process: risk assessment, risk management and risk communication (Fig 1). The European Food Safety Authority (EFSA) alone would be responsible for carrying out risk assessment, whilst the Member States would deal with risk management through the standard EU comitology procedure, by which policy-makers from Member States reach consensus on existing laws. Finally, both the EC and Member States committed to engage with European citizens in an attempt to gain credibility and promote transparency.

Figure 1. Risk assessment and risk management for GM crops in the EU. The new process for GM crop approval under Regulation (EC) No. 1829/2003, which defines the responsibilities for risk assessment and risk management. EC, European Community; EU, European Union; GM, genetically modified.

More than 20 years after this debate, the claims made both for and against GM crops have failed to materialize. GMOs have neither reduced world hunger, nor destroyed entire ecosystems or poisoned humankind, even after widespread cultivation. Most of the negative effects have occurred in international food trade [5], partly owing to a lack of harmonization in international governance. More importantly, given that the EU is the largest commodity market in the world, this is caused by the EU’s chronic resistance to GM crops. The agreed centralized procedure has not been implemented satisfactorily and the blame is laid at the door of risk management (http://ec.europa.eu/food/food/biotechnology/evaluation/index_en.htm). Indeed, the 27 Member States have never reached a consensus on GM crops, which is the only non-functional comitology procedure in the EU [2].
Moreover, even after a GM crop was approved, some member states refused to allow its cultivation, which prompted the USA, Canada and Argentina to file a dispute at the World Trade Organization (WTO) against the EU.The inability to reach agreement through the comitology procedure, has forced the EC to make the final decision for all GMO applications. Given that the EC is an administrative body with no scientific expertise, it has relied heavily on EFSA''s opinion. This has created a peculiar situation in which the EFSA performs both risk assessment and management. Anti-GM groups have therefore focused their efforts on discrediting the EFSA as an expert body. Faced with regular questions related to agricultural management or globalization, EFSA scientists are forced to respond to issues that are more linked to risk management than risk assessment [5]. By repeatedly mixing socio-economic and cultural values with scientific opinions, NGOs have questioned the expertise of EFSA scientists and portrayed them as having vested interests in GMOs.Nevertheless, there is no doubt that science has accumulated enormous knowledge on GM crops, which are the most studied crops in human history [6]. In the EU alone, about 270 million euros have been spent through the Framework Programme to study health and environmental risks [5]. Framework Programme funding is approved by Member State consensus and benefits have never been on the agenda of these studies. Despite this bias in funding, the results show that GM crops do not pose a greater threat to human health and the environment than traditional crops [5,6,7]. In addition, scientists have reached international consensus on the methodology to perform risk assessment of GMOs under the umbrella of the Codex Alimentarius [8]. One might therefore conclude that the scientific risk assessment is solid and, contrary to the views of NGOs, that science has done its homework. However, attention still remains fixed on risk assessment in an attempt to fix risk management. But what about the third stage? Have the EC and Member States done their homework on risk communication?It is generally accepted that risk management in food safety crucially depends on efficient risk communication [9]. However, risk communication has remained the stepchild of the three risk management stages [6]. A review of the GM Food/Feed Regulations noted that public communication by EU authorities had been sparse and sometimes inconsistent between the EC and Member States. Similarly, a review of the EC Directive for the release of GMOs to the environment described the information provided to the public as inadequate because it is highly technical and only published in English (http://ec.europa.eu/food/food/biotechnology/evaluation/index_en.htm). Accordingly, it is not surprising that EU citizens remain averse to GMOs. Moreover, a Eurobarometer poll lists GMOs as one of the top five environmental issues for which EU citizens feel they lack sufficient information [10]. Despite the overwhelming proliferation of scientific evidence, politicians and policy-makers have ignored the most important stakeholder: society. Indeed, the reviews mentioned above recommend that the EC and Member States should improve their risk communication activities.What have we learned from the experience? Is it prudent and realistic to gauge the public''s views on a new technology before it is put into use? Can we move towards a bioeconomy and continue to ignore society? 
To address these questions, we focus on GM animals, as these organisms are beginning to reach the market, raise many similar issues to GM plants and thus have the potential to re-open the GM debate. GM animals, if brought into use, will involve a similar range and distribution of stakeholders in the EU, with two significant differences: animal welfare organizations will probably take the lead over environmental NGOs in the anti-GM side, and the breeding industry is far more cautious in adopting GM animals than the plant seed industry was to adopt GM crops [11].It is generally accepted that risk management in food safety crucially depends on efficient risk communicationGloFish®—a GM fish that glows when illuminated with UV light and is being sold as a novelty pet—serves as an illustrative example. GloFish® was the first GM animal to reach the market and, more importantly, did so without any negative media coverage. It is also a controversial application of GM technology, as animal welfare organizations and scientists alike consider it a frivolous use of GM, describing it as “complete nonsense” [18]. The GloFish® is not allowed in the EU, but it is commercially available throughout the USA, except in California. One might imagine that consumers in general would not be that interested in GloFish®, as research indicates that consumer acceptance of a new product is usually higher when there are clear perceived benefits [13,14]. It is difficult to imagine the benefit of GloFish® beyond its novelty, and yet it has been found illegally in the Netherlands, Germany and the UK [15]. This highlights the futility of predicting the public''s views without consulting them.Consumer attitudes and behaviour—including in regard to GMOs—are complex and change over time [13,14]. During the past years, the perception from academia and governments of the public has moved away from portraying them as a ‘victim'' of industry towards recognizing consumers as an important factor for change. Still, such arguments put citizens at the end of the production chain where they can only exert their influence by choosing to buy or to ignore certain products. Indeed, one of the strongest arguments against GM crops has been that the public never asked for them in the first place.With GM animals, the use of recombinant DNA technologies in animal breeding would rekindle an old battle between animal welfare organizations and the meat industryWith GM animals, the use of recombinant DNA technologies in animal breeding would rekindle an old battle between animal welfare organizations and the meat industry. Animal welfare organizations claim that European consumers demand better treatment for farm animals, whilst industry maintains that price remains one of the most important factors for consumers [12]. Both sides have facts to support their claims: animal welfare issues take a prominent role in the political agenda and animal welfare organizations are growing in both number and influence; industry can demonstrate a competitive disadvantage over countries in which animal welfare regulations are more relaxed and prices are lower, such as Argentina. However, the public is absent in this debate.Consumers have been described as wearing two hats: one that supports animal welfare and one that looks at the price ticket at the supermarket [16]. This situation has an impact on the breeding of livestock and the meat industry, which sees consumer prices decreasing whilst production costs increase. 
This trend is believed to reflect the increasing detachment of consumers from the food production chain [17]. Higher demands on animal welfare standards, environmental protection and competing international meat producers all influence the final price of meat. To remain competitive, the meat industry has to increase production per unit; it can therefore be argued that one of the main impetuses to develop GM animals was created by the behaviour—not belief—of consumers. This second example illustrates once again that society cannot be ignored when discussing any strategy to move towards the bioeconomy.

In conclusion, we believe that functional risk management requires all three components, including risk communication. For applications of biotechnology, a disproportionate amount of emphasis has been placed on risk assessment. The result is that the GMO debate has been framed as black and white, as either safe or unsafe, leaving policy-makers with the difficult task of educating the public about the many shades of grey. However, there is a wide range of issues that a citizen will want to take into account when deciding about GM, and not all of them can be answered by science. Citizens might trust what scientists say, but "when scientists and politicians are brought together, we may well not trust that the quality of science will remain intact" [18]. Reducing the debate to scientific matters gives a free pass to the misuse of science and has a negative impact on science itself. Whilst scientists publishing pro-GM results have been attacked by NGOs, scientific publications that highlighted potential risks of GM crops came under disproportionate attacks from the scientific community [19].

Flexible governance and context need to work hand-in-hand if investments in biotechnology are ultimately to benefit society. The EU’s obsession with assessing risk and side-lining benefits has not facilitated an open dialogue. The GMO experience has also shown that science cannot provide all the answers. Democratically elected governments should therefore take the lead in communicating the risks and benefits of technological advances to their electorate, and should discuss what the bioeconomy really means and the role of new technologies, including GMOs. We need to move the spotlight away from the science alone to take in the bigger picture. Ultimately, do consumers feel that paying a few extra cents for a dozen eggs is worth it if they know the chicken is happy, whether it is so-called ‘natural’ or GM?

Núria Vàzquez-Salat & Louis-Marie Houdebine

4.
Elucidating the temporal order of silencing
Izaurralde E (2012) EMBO reports 13(8): 662–663

5.
6.
7.
8.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis and several dozen died after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9].

Developing countries are particularly susceptible to substandard and fake medicine. Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). "People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system," explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected combined with relatively low penalties has turned falsifying medicine into the "perfect crime" [2].

Figure 1. Women sell smuggled, counterfeit medicine on the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the past two years in Ivory Coast. Issouf Sanogo/AFP Photo/Getty Images.

There are two main categories of illegitimate drugs. ‘Substandard’ medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified’ medicine is made with clear criminal intent.
It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses it into pills [10]. Whilst falsified medicines do not typically contain any active ingredients, substandard medicine might contain subtherapeutic amounts. This is particularly problematic when it comes to anti-infectious drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13], and increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insuranceEven if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are interested in good-quality medicines, the different parties seem to have difficulties coming to terms with how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting quality of medicine [14,16].The concern that intellectual property (IP) interests threaten public health dates back to the ‘Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement'' of the World Trade Organization (WTO), adopted in 1994, to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine. Although it includes flexibility, such as the possibility for governments to grant compulsory licenses to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO''s Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multi-national pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. “We''re left with decisions being taken based on patents and trademarks that should be taken based on health,” commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA. 
“The health community is shooting themselves in the foot.”Conflating health care and IP issues are reflected in the unclear use of the term ‘counterfeit'' [2,14]. “Since the 1990s the World Health Organization (WHO) has used the term ‘counterfeit'' in the sense we now use ‘falsified'',” explained Hogerzeil. “The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit'' got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit''—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn''t make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but also the country seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to react is tragically ironic, as this stance hampers the growth of its own generic companies like Ranbaxy, Cipla or Piramal. “I certainly don''t believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because their products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company''s reputation and have a negative impact on its revenues when customers stop buying the product.The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine. It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multi-national drug companies and the possibility that issues on quality of medicines were conflated with the attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010. For example, it no longer hosts IMPACT''s secretariat at its headquarters in Geneva [2].‘Substandard'' medicines might result from poor quality ingredients, production errors and incorrect storage. ‘Falsified'' medicine is made with clear criminal intentIn 2010, the WHO''s member states established a working group to further investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper''s authors demand more action and propose a binding legal framework: a treaty. 
“Until we have stronger public health law, I don''t think that we are going to resolve this problem,” Bate, who is one of the authors of the paper, said.Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on understanding the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly—the governing body of the WHO—to develop a code of practice such as a “voluntary soft law” that countries can sign to express their will to do better. “At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic,” Hogerzeil, who is also on the IOM committee, commented. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: “The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don''t start negotiating one.”Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempts to safeguard medicines need to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act. To counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies do better and by improving quality control of drug regulatory authorities.Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multi-national companies benefit from economies of scale to cope with these problems. But smaller companies often struggle and compromise in quality [2,21]. “India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small,” Hogerzeil explained. “And only by being strict, can you force them to combine and to become bigger industries that can afford good-quality assurance systems.” Clamping down on drug quality will therefore lead to a consolidation of the industry, which is an essential step. “If you look at Europe and the US, there were hundreds of drug companies—now there are dozens. And if you look at the situation in India and China today, there are thousands and that will have to come down to dozens as well,” Bate explained.…innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting […] medicineIn addition to consolidating the market by applying stricter rules, the IOM has also suggested measures for supporting companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards. 
Another suggestion is to harmonize market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.Once the medicine leaves the manufacturer, controlling distribution systems becomes another major challenge in combatting falsified and substandard medicine. Global drug supply chains have grown increasingly complicated; drugs cross borders, are sold back and forth between wholesalers and distributers, and are often repackaged. Still, there is a main difference between developing and developed countries. In the latter case, relatively few companies dominate the market, whereas in poorer nations, the distribution system is often fragmented and uncontrolled with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improve drug quality. “And we can start in the US,” Hogerzeil commented.…India is a core player: not only does it have a large generics industry, but the country also seems to be, together with China, the biggest source of fake medical productsDistribution could be improved at different levels, starting with the import of medicine. “There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in,” Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. “Everybody dealing with medicine has to be licensed,” Hogerzeil said. “And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent that falsified medicines enter the legal supply chain.” The last level would be a track-and-trace system to identify authentic drugs [2]. Every single package of medicine should be identifiable through an individual marker, such as a 3D bar code. Once it is sold, it is ticked off in a central database, so the marker cannot be reused.According to Hogerzeil, equivalent measures at these different levels should be established in every country. “I don''t believe in double standards”, he said. “Don''t say to Uganda: ‘you can''t do that''. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation.”Nigeria, for instance, has demonstrated that with enough political will, it is possible to reduce the proliferation of falsified and substandard medicine. Nigeria had been a major source for falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili''s successor, is committed to continuing her work [10]. 
Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].China is also addressing its drug-quality problems. In a highly publicized event, the former head of China''s State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China''s fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see quality of medicine as a priority. But they should, and affluent countries should help. Not only because health is a human right, but also for economic reasons. A great deal of time and money is invested into testing the safety and efficacy of medicine during drug development, and these resources are wasted when drugs do not reach patients. Falsified and substandard medicines are a financial burden to health systems and the emergence of drug-resistant pathogens might make invaluable medications useless. Investing in the safety of medicine is therefore a humane and an economic imperative.  相似文献   

9.
Perry JN  Arpaia S  Bartsch D  Kiss J  Messéan A  Nuti M  Sweet JB  Tebbe CC 《EMBO reports》2012,13(6):481-2; author reply 482-3
The correspondents argue that “The anglerfish deception” contains omissions, errors, misunderstandings and misinterpretations.EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.71EMBO reports (2012) 13 2, 100–105; doi: 10.1038/embor.2011.254The commentary [1] on aspects of genetically modified organism (GMO) regulation, risk assessment and risk management in the EU contains omissions, errors, misunderstandings and misinterpretations. As background, environmental risk assessment (ERA) of genetically modified (GM) plants for cultivation in the EU is conducted by applicants following principles and data requirements described in the Guidance Document (ERA GD) established by the European Food Safety Authority (EFSA) [2], which follows the tenets of Directive 2001/18/EC. The ERA GD was not referenced in [1], which wrongly referred only to EFSA guidance that does not cover ERA. Applications for cultivation of a GM plant containing the ERA, submitted to the European Commission (EC), are checked by the EFSA to ensure they address all the requirements specified in its ERA GD [2]. A lead Member State (MS) is then appointed to conduct the initial evaluation of the application, requesting further information from the applicant if required. The MS evaluation is forwarded to the EC, EFSA and all other MSs. Meanwhile, all other MSs can comment on the application and raise concerns. The EFSA GMO Panel carefully considers the content of the application, the lead MS Opinion, other MSs'' concerns, all relevant data published in the scientific literature, and the applicant''s responses to its own requests for further information. The Panel then delivers its Opinion on the application, which covers all the potential environmental areas of risk listed in 2001/18/EC. This Opinion is sent to the EC, all MSs and the applicant and published in the EFSA journal (efsa.europa.eu). Panel Opinions on GM plants for cultivation consider whether environmental harm might be caused, and, if so, suggest possible management to mitigate these risks, and make recommendations for post-market environmental monitoring (PMEM). The final decision on whether to allow the cultivation of GM plants, and any specific conditions for management and monitoring, rests with the EC and MSs and is not within the remit of the EFSA.Against this background we respond to several comments in [1]. Regarding the Comparative Safety Assessment of GM plants and whether or not further questions are asked following this assessment, the Comparative Safety Assessment, described fully in [2], is not a ‘first step''. It is a general principle that forms a central part of the ERA process, as introduced in section 2.1 of [2]. Each ERA starts with problem formulation and identification, facilitating a structured approach to identifying potential risks and scientific uncertainties; following this critical first step many further questions must be asked and addressed. In [2] it is clearly stated that all nine specific areas of risk listed in 2001/18/EC must be addressed—persistence and invasiveness; vertical gene flow; horizontal gene flow; interactions with target organisms; interactions with non-target organisms; human health; animal health; biogeochemical processes; cultivation, management and harvesting techniques. 
Under the Comparative Safety Assessment, following problem formulation, each of these areas of risk must be assessed by using a six-step approach, involving hazard identification, hazard characterization, exposure assessment, risk characterization, risk management strategies and an overall risk evaluation and conclusion. Indeed, far from asking “no further questions” [1], the EFSA GMO Panel always sends a sequence of written questions to the applicant as part of the ERA process to achieve a complete set of data to support the ERA evaluation (on average about ten per application).The principle of comparative analysis in ERA—sometimes referred to as substantial equivalence in the risk assessment of food and feed—is not discredited. The comparative approach is supported by all of the world''s leading national science academies [for example, 3]; none has recommended an alternative. The principle is enshrined in risk assessment guidelines issued by all relevant major international bodies, including the World Health Organization, the Food and Agriculture Organization of the United Nations and the Organisation for Economic Co-operation and Development. Critics of this approach have failed to propose any credible alternative baseline to risk assess GMOs. The comparative analysis as described in [2] is not a substitute for a safety assessment, but is a tool within the ERA [4] through which comparisons are made with non-GM counterparts in order to identify hazards associated with the GM trait, the transformation process and the associated management systems, which are additional to those impacts associated with the non-GM plant itself. The severity and frequency of these hazards are then quantified in order to assess the levels of risks associated with the novel features of the GM plant and its cultivation.European Parliament (EP) communications include that “the characteristics of the receiving environments and the geographical areas in which GM plants may be cultivated should be duly taken into account”. We agree, and the ERA GD [2] recognizes explicitly that receiving environments differ across the EU, and that environmental impacts might differ regionally. Therefore, the ERA GD [2] demands that such differences be fully accounted for in cultivation applications and that receiving environments be assessed separately in each of the nine specific areas of risk (see section 2.3.2). Furthermore, [2] states in section 3.5 that the ERA should consider scenarios representative of the diversity of situations that might occur and assess their potential implications. The EP communications state that “the long-term environmental effects of GM crops, as well as their potential effects on non-target organisms, should be rigorously assessed”. This is covered explicitly in section 2.3.4 of [2], and developed in the recent guidance on PMEM [5].The EFSA is committed to openness, transparency and dialogue and meets regularly with a wide variety of stakeholders including non-governmental organizations (NGOs) [6] to discuss GMO topics. That the EFSA is neither a centralized nor a singular voice of science in the EU is clear, because the initial report on the ERA is delivered by a MS, not the EFSA; all MSs can comment on the ERA; and EFSA GMO Panel Opinions respond transparently to every concern raised by each MS. Following publication, the EFSA regularly attends the SCFCAH Committee (comprising MS representatives) to account for its Opinions. 
The involvement of all MSs in the evaluation process ensures that concerns relating to their environments are addressed in the ERA. Subsequently, MSs can contribute to decisions on the management and monitoring of GM plants in their territories if cultivation is approved.In recent years, several MSs have used the ‘safeguard clause'', Article 23 of 2001/18/EC, to attempt to ban the cultivation of specific GM plants in their territories, despite earlier EFSA Panel Opinions on those plants. But the claim that “the risk science of the EFSA''s GM Panel has been publicly disputed in Member State''s justifications of their Article 23 prohibitions” needs to be placed into context [1]. When a safeguard clause (SC) is issued by a MS, the EFSA GMO Panel is often asked by the EC to deliver an Opinion on the scientific basis of the SC. The criteria on which to judge the documentation accompanying a SC are whether: (i) it represents new scientific evidence—and is not just repetition of information previously assessed—that demonstrates a risk to human and animal health and the environment; and (from the guidance notes to Annex II of 2001/18/EC) (ii) it is proportionate to the level of risk and to the level of uncertainty. It is pertinent that on 8 September 2011, the EU Court of Justice ruled that ‘with a view to the adoption of emergency measures, Article 34 of Regulation (EC) No 1829/2003 requires Member States to establish, in addition to urgency, the existence of a situation which is likely to constitute a clear and serious risk to human health, animal health or the environment''. Scientific literature is monitored continually by the Panel and relevant new work is examined to determine whether it raises any new safety concern. In all cases where the EFSA was consulted by the EC, there has been no new scientific information presented that would invalidate the Panel''s previous assessment.Throughout [1] the text demonstrates a fundamental misunderstanding of the distinction between ERA and risk management. ERA is the responsibility of the EFSA, although it is asked for its opinion on risk management methodology by the EC. Risk management implementation is the responsibility of the EC and MSs. Hence, the setting of protection goals is an issue for risk managers and might vary between MSs. However, the ERA GD [2], through its six-step approach, makes it mandatory for applications to relate the results of any studies directly to limits of environmental concern that reflect protection goals and the level of change deemed acceptable. Indeed, the recent EFSA GMO Panel Opinions on Bt-maize events [for example, 7] have been written specifically to provide MSs and risk managers with the tools to adapt the results of the quantified ERA to their own local protection goals. This enables MSs to implement risk management and PMEM proportional to the risks identified in their territories.The EFSA GMO Panel comprises independent researchers, appointed for their expertise following an open call to the scientific community. The Panel receives able support from staff of the EFSA GMO Unit and numerous ad hoc members of its working groups. It has no agenda and is neither pro- or anti-GMOs; its paramount concern is the quality of the science underpinning its Guidance Documents and Opinions.  相似文献   

10.
The temptation to silence dissenters whose non-mainstream views negatively affect public policies is powerful. However, silencing dissent, no matter how scientifically unsound it might be, can cause the public to mistrust science in general. Dissent is crucial for the advancement of science. Disagreement is at the heart of peer review and is important for uncovering unjustified assumptions, flawed methodologies and problematic reasoning. Enabling and encouraging dissent also helps to generate alternative hypotheses, models and explanations. Yet, despite the importance of dissent in science, there is growing concern that dissenting voices have a negative effect on the public perception of science, on policy-making and public health. In some cases, dissenting views are deliberately used to derail certain policies. For example, dissenting positions on climate change, environmental toxins or the hazards of tobacco smoke [1,2] appear to laypeople as equally valid conflicting opinions and thereby create or increase uncertainty. Critics often use legitimate scientific disagreements about narrow claims to reinforce the impression of uncertainty about general and widely accepted truths; for instance, that a given substance is harmful [3,4]. This impression of uncertainty about the evidence is then used to question particular policies [1,2,5,6]. The negative effects of dissent on establishing public policies are present in cases in which the disagreements are scientifically well-grounded, but the significance of the dissent is misunderstood or blown out of proportion. A study showing that many factors affect the size of reef islands, to the effect that they will not necessarily be reduced in size as sea levels rise [7], was simplistically interpreted by the media as evidence that climate change will not have a negative impact on reef islands [8]. In other instances, dissenting voices affect the public perception of and motivation to follow public-health policies or recommendations. For example, the publication of a now debunked link between the measles, mumps and rubella vaccine and autism [9], as well as the claim that the mercury preservative thimerosal, which was used in childhood vaccines, was a possible risk factor for autism [10,11], created public doubts about the safety of vaccinating children. Although later studies showed no evidence for these claims, doubts led many parents to reject vaccinations for their children, risking herd immunity for diseases that had been largely eradicated from the industrialized world [12,13,14,15]. Many scientists have therefore come to regard dissent as problematic if it has the potential to affect public behaviour and policy-making. However, we argue that such concerns about dissent as an obstacle to public policy are both dangerous and misguided. Whether dissent is based on genuine scientific evidence or is unfounded, interested parties can use it to sow doubt, thwart public policies, promote problematic alternatives and lead the public to ignore sound advice. In response, scientists have adopted several strategies to limit these negative effects of dissent—masking dissent, silencing dissent and discrediting dissenters. The first strategy aims to present a united front to the public. Scientists mask existing disagreements among themselves by presenting only those claims or pieces of evidence about which they agree [16].
Although there is nearly universal agreement among scientists that average global temperatures are increasing, there are also legitimate disagreements about how much warming will occur, how quickly it will occur and the impact it might have [7,17,18,19]. As presenting these disagreements to the public probably creates more doubt and uncertainty than is warranted, scientists react by presenting only general claims [20]. A second strategy is to silence dissenting views that might have negative consequences. This can take the form of self-censorship when scientists are reluctant to publish or publicly discuss research that might—incorrectly—be used to question existing scientific knowledge. For example, there are genuine disagreements about how best to model cloud formation, water vapour feedback and aerosols in general circulation paradigms, all of which have significant effects on the magnitude of global climate change predictions [17,19]. Yet, some scientists are hesitant to make these disagreements public, for fear that they will be accused of being denialists, faulted for confusing the public and policy-makers, censured for abetting climate-change deniers, or criticized for undermining public policy [21,22,23,24]. Another strategy is to discredit dissenters, especially in cases in which the dissent seems to be ideologically motivated. This could involve publicizing the financial or political ties of the dissenters [2,6,25], which would call attention to their probable bias. In other cases, scientists might discredit the expertise of the dissenter. One such example concerns a 2007 study published in the Proceedings of the National Academy of Sciences USA, which claimed that caddis fly larvae consuming Bt maize pollen die at twice the rate of larvae feeding on non-Bt maize pollen [26]. Immediately after publication, both the authors and the study itself became the target of relentless and sometimes scathing attacks from a group of scientists who were concerned that anti-GMO (genetically modified organism) interest groups would seize on the study to advance their agenda [27]. The article was criticized for its methodology and its conclusions, the Proceedings of the National Academy of Sciences USA was criticized for publishing the article and the US National Science Foundation was criticized for funding the study in the first place.
Reinforcing this false assumption further incentivizes those who seek merely to create doubt to thwart particular policies. Not surprisingly, think-tanks, industry and other organizations are willing to manufacture dissent simply to derail policies that they find economically or ideologically undesirable.Another danger of targeting dissent is that it probably stifles legitimate crucial voices that are needed for both advancing science and informing sound policy decisions. Attacking dissent makes scientists reluctant to voice genuine doubts, especially if they believe that doing so might harm their reputations, damage their careers and undermine prevailing theories or policies needed. For instance, a panel of scientists for the US National Academy of Sciences, when presenting a risk assessment of radiation in 1956, omitted wildly different predictions about the potential genetic harm of radiation [16]. They did not include this wide range of predictions in their final report precisely because they thought the differences would undermine confidence in their recommendations. Yet, this information could have been relevant to policy-makers. As such, targeting dissent as an obstacle to public policy might simply reinforce self-censorship and stifle legitimate and scientifically informed debate. If this happens, scientific progress is hindered.Second, even if the public has mistaken beliefs about science or the state of the knowledge of the science in question, focusing on dissent is not an effective way to protect public policy from false claims. It fails to address the presumed cause of the problem—the apparent lack of understanding of the science by the public. A better alternative would be to promote the public''s scientific literacy. If the public were educated to better assess the quality of the dissent and thus disregard instances of ideological, unsupported or unsound dissent, dissenting voices would not have such a negative effect. Of course, one might argue that educating the public would be costly and difficult, and that therefore, the public should simply listen to scientists about which dissent to ignore and which to consider. This is, however, a paternalistic attitude that requires the public to remain ignorant ‘for their own good''; a position that seems unjustified on many levels as there are better alternatives for addressing the problem.Moreover, silencing dissent, rather than promoting scientific literacy, risks undermining public trust in science even if the dissent is invalid. This was exemplified by the 2009 case of hacked e-mails from a computer server at the University of East Anglia''s Climate Research Unit (CRU). After the selective leaking of the e-mails, climate scientists at the CRU came under fire because some of the quotes, which were taken out of context, seemed to suggest that they were fudging data or suppressing dissenting views [28,29,30,31]. The stolen e-mails gave further ammunition to those opposing policies to reduce greenhouse emissions as they could use accusations of data ‘cover up'' as proof that climate scientists were not being honest with the public [29,30,31]. It also allowed critics to present climate scientists as conspirators who were trying to push a political agenda [32]. 
As a result, although there was nothing scientifically inappropriate revealed in the ‘climategate’ e-mails, it had the consequence of undermining the public’s trust in climate science [33,34,35,36]. A significant amount of evidence shows that the ‘deficit model’ of public understanding of science, as described above, is too simplistic to account correctly for the public’s reluctance to accept particular policy decisions [37,38,39,40]. It ignores other important factors such as people’s attitudes towards science and technology, their social, political and ethical values, their past experiences and the public’s trust in governmental institutions [41,42,43,44]. The development of sound public policy depends not only on good science, but also on value judgements. One can agree with the scientific evidence for the safety of GMOs, for instance, but still disagree with the widespread use of GMOs because of social justice concerns about the developing world’s dependence on the interests of the global market. Similarly, one need not reject the scientific evidence about the harmful health effects of sugar to reject regulations on sugary drinks. One could rationally challenge such regulations on the grounds that informed citizens ought to be able to make free decisions about what they consume. Whether or not these value judgements are justified is an open question, but the focus on dissent hinders our ability to have that debate. As such, targeting dissent completely fails to address the real issues. The focus on dissent, and the threat that it seems to pose to public policy, misdiagnoses the problem as one of the public misunderstanding science, its quality and its authority. It assumes that scientific or technological knowledge is the only relevant factor in the development of policy and it ignores the role of other factors, such as value judgements about social benefits and harms, and institutional trust and reliability [45,46]. The emphasis on dissent, and thus on scientific knowledge, as the only or main factor in public policy decisions does not give due attention to these legitimate considerations. Furthermore, by misdiagnosing the problem, targeting dissent also impedes more effective solutions and prevents an informed debate about the values that should guide public policy. By framing policy debates solely as debates over scientific facts, the normative aspects of public policy are hidden and neglected. Relevant ethical, social and political values fail to be publicly acknowledged and openly discussed. Controversies over GMOs and climate policies have called attention to the negative effects of dissent in the scientific community. Based on the assumption that the public’s reluctance to support particular policies is the result of their inability to properly understand scientific evidence, scientists have tried to limit dissenting views that create doubt. However, as outlined above, targeting dissent as an obstacle to public policy probably does more harm than good. It fails to focus on the real problem at stake—that science is not the only relevant factor in sound policy-making. Of course, we do not deny that scientific evidence is important to the development of public policy and behavioural decisions.
Rather, our claim is that this role is misunderstood and often oversimplified in ways that actually contribute to problems in developing sound science-based policies. (Inmaculada de Melo-Martín, Kristen Intemann)  相似文献

11.
L Bornmann 《EMBO reports》2012,13(8):673-676
The global financial crisis has changed how nations and agencies prioritize research investment. There has been a push towards science with expected benefits for society, yet devising reliable tools to predict and measure the social impact of research remains a major challenge.Even before the Second World War, governments had begun to invest public funds into scientific research with the expectation that military, economic, medical and other benefits would ensue. This trend continued during the war and throughout the Cold War period, with increasing levels of public money being invested in science. Nuclear physics was the main benefactor, but other fields were also supported as their military or commercial potential became apparent. Moreover, research came to be seen as a valuable enterprise in and of itself, given the value of the knowledge generated, even if advances in understanding could not be applied immediately. Vannevar Bush, science advisor to President Franklin D. Roosevelt during the Second World War, established the inherent value of basic research in his report to the President, Science, the endless frontier, and it has become the underlying rationale for public support and funding of science.However, the growth of scientific research during the past decades has outpaced the public resources available to fund it. This has led to a problem for funding agencies and politicians: how can limited resources be most efficiently and effectively distributed among researchers and research projects? This challenge—to identify promising research—spawned both the development of measures to assess the quality of scientific research itself, and to determine the societal impact of research. Although the first set of measures have been relatively successful and are widely used to determine the quality of journals, research projects and research groups, it has been much harder to develop reliable and meaningful measures to assess the societal impact of research. The impact of applied research, such as drug development, IT or engineering, is obvious but the benefits of basic research are less so, harder to assess and have been under increasing scrutiny since the 1990s [1]. In fact, there is no direct link between the scientific quality of a research project and its societal value. As Paul Nightingale and Alister Scott of the University of Sussex''s Science and Technology Policy Research centre have pointed out: “research that is highly cited or published in top journals may be good for the academic discipline but not for society” [2]. Moreover, it might take years, or even decades, until a particular body of knowledge yields new products or services that affect society. By way of example, in an editorial on the topic in the British Medical Journal, editor Richard Smith cites the original research into apoptosis as work that is of high quality, but that has had “no measurable impact on health” [3]. He contrasts this with, for example, research into “the cost effectiveness of different incontinence pads”, which is certainly not seen as high value by the scientific community, but which has had an immediate and important societal impact.…the growth of scientific research during the past decades has outpaced the public resources available to fund itThe problem actually begins with defining the ‘societal impact of research''. 
A series of different concepts has been introduced: ‘third-stream activities'' [4], ‘societal benefits'' or ‘societal quality'' [5], ‘usefulness'' [6], ‘public values'' [7], ‘knowledge transfer'' [8] and ‘societal relevance'' [9, 10]. Yet, each of these concepts is ultimately concerned with measuring the social, cultural, environmental and economic returns from publicly funded research, be they products or ideas.In this context, ‘societal benefits'' refers to the contribution of research to the social capital of a nation, in stimulating new approaches to social issues, or in informing public debate and policy-making. ‘Cultural benefits'' are those that add to the cultural capital of a nation, for example, by giving insight into how we relate to other societies and cultures, by providing a better understanding of our history and by contributing to cultural preservation and enrichment. ‘Environmental benefits'' benefit the natural capital of a nation, by reducing waste and pollution, and by increasing natural preserves or biodiversity. Finally, ‘economic benefits'' increase the economic capital of a nation by enhancing its skills base and by improving its productivity [11].Given the variability and the complexity of evaluating the societal impact of research, Barend van der Meulen at the Rathenau Institute for research and debate on science and technology in the Netherlands, and Arie Rip at the School of Management and Governance of the University of Twente, the Netherlands, have noted that “it is not clear how to evaluate societal quality, especially for basic and strategic research” [5]. There is no accepted framework with adequate datasets comparable to,for example, Thomson Reuters'' Web of Science, which enables the calculation of bibliometric values such as the h index [12] or journal impact factor [13]. There are also no criteria or methods that can be applied to the evaluation of societal impact, whilst conventional research and development (R&D) indicators have given little insight, with the exception of patent data. In fact, in many studies, the societal impact of research has been postulated rather than demonstrated [14]. For Benoît Godin at the Institut National de la Recherche Scientifique (INRS) in Quebec, Canada, and co-author Christian Doré, “systematic measurements and indicators [of the] impact on the social, cultural, political, and organizational dimensions are almost totally absent from the literature” [15]. Furthermore, they note, most research in this field is primarily concerned with economic impact.A presentation by Ben Martin from the Science and Technology Policy Research Unit at Sussex University, UK, cites four common problems that arise in the context of societal impact measurements [16]. The first is the causality problem—it is not clear which impact can be attributed to which cause. The second is the attribution problem, which arises because impact can be diffuse or complex and contingent, and it is not clear what should be attributed to research or to other inputs. The third is the internationality problem that arises as a result of the international nature of R&D and innovation, which makes attribution virtually impossible. 
Finally, the timescale problem arises because the premature measurement of impact might result in policies that emphasize research that yields only short-term benefits, ignoring potential long-term impact. In addition, there are four other problems. First, it is hard to find experts to assess societal impact that is based on peer evaluation. As Robert Frodeman and James Britt Holbrook at the University of North Texas, USA, have noted, “[s]cientists generally dislike impacts considerations” and evaluating research in terms of its societal impact “takes scientists beyond the bounds of their disciplinary expertise” [10]. Second, given that the scientific work of an engineer has a different impact than the work of a sociologist or historian, it will hardly be possible to have a single assessment mechanism [4, 17]. Third, societal impact measurement should take into account that there is not just one model of a successful research institution. As such, assessment should be adapted to the institution’s specific strengths in teaching and research, the cultural context in which it exists and national standards. Finally, the societal impact of research is not always going to be desirable or positive. For example, Les Rymer, graduate education policy advisor to the Australian Group of Eight (Go8) network of university vice-chancellors, noted in a report for the Go8 that, “environmental research that leads to the closure of a fishery might have an immediate negative economic impact, even though in the much longer term it will preserve a resource that might again become available for use. The fishing industry and conservationists might have very different views as to the nature of the initial impact—some of which may depend on their view about the excellence of the research and its disinterested nature” [18]. Unlike scientific impact measurement, for which there are numerous established methods that are continually refined, research into societal impact is still in the early stages: there is no distinct community with its own series of conferences, journals or awards for special accomplishments. Even so, governments already conduct budget-relevant measurements, or plan to do so. The best-known national evaluation system is the UK Research Assessment Exercise (RAE), which has evaluated research in the UK since the 1980s. Efforts are under way to set up the Research Excellence Framework (REF), which is set to replace the RAE in 2014 “to support the desire of modern research policy for promoting problem-solving research” [21]. In order to develop the new arrangements for the assessment and funding of research in the REF, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches for evaluating the impact of research [20]. The recommendation from this consultation is that impact should be measured in a quantifiable way, and expert panels should review narrative evidence in case studies supported by appropriate indicators [19,21]. Many of the studies that have carried out societal impact measurement chose to do so on the basis of case studies. Although this method is labour-intensive and a craft rather than a quantitative activity, it seems to be the best way of measuring the complex phenomenon that is societal impact.
The HEFCE stipulates that “case studies may include any social, economic or cultural impact or benefit beyond academia that has taken place during the assessment period, and was underpinned by excellent research produced by the submitting institution within a given timeframe” [22]. Claire Donovan at Brunel University, London, UK, considers the preference for a case-study approach in the REF to be “the ‘state of the art'' [for providing] the necessary evidence-base for increased financial support of university research across all fields” [23]. According to Finn Hansson from the Department of Leadership, Policy and Philosophy at the Copenhagen Business School, Denmark, and co-author Erik Ernø-Kjølhede, the new REF is “a clear political signal that the traditional model for assessing research quality based on a discipline-oriented Mode 1 perception of research, first and foremost in the form of publication in international journals, was no longer considered sufficient by the policy-makers” [19]. ‘Mode 1'' describes research governed by the academic interests of a specific community, whereas ‘Mode 2'' is characterized by collaboration—both within the scientific realm and with other stakeholders—transdisciplinarity and basic research that is being conducted in the context of application [19].The new REF will also entail changes in budget allocations. The evaluation of a research unit for the purpose of allocations will determine 20% of the societal influence dimension [19]. The final REF guidance contains lists of examples for different types of societal impact [24].Societal impact is much harder to measure than scientific impact, and there are probably no indicators that can be used across all disciplines and institutions for collation in databases [17]. Societal impact often takes many years to become apparent, and “[t]he routes through which research can influence individual behaviour or inform social policy are often very diffuse” [18].Yet, the practitioners of societal impact measurement should not conduct this exercise alone; scientists should also take part. According to Steve Hanney at Brunel University, an expert in assessing payback or impacts from health research, and his co-authors, many scientists see societal impact measurement as a threat to their scientific freedom and often reject it [25]. If the allocation of funds is increasingly oriented towards societal impact issues, it challenges the long-standing reward system in science whereby scientists receive credits—not only citations and prizes but also funds—for their contributions to scientific advancement. However, given that societal impact measurement is already important for various national evaluations—and other countries will follow probably—scientists should become more concerned with this aspect of their research. In fact, scientists are often unaware that their research has a societal impact. “The case study at BRASS [Centre for Business Relationships, Accountability, Sustainability and Society] uncovered activities that were previously ‘under the radar'', that is, researchers have been involved in activities they realised now can be characterized as productive interactions” [26] between them and societal stakeholders. 
It is probable that research in many fields already has a direct societal impact, or induces productive interactions, but that it is not yet perceived as such by the scientists conducting the work. The involvement of scientists is also necessary in the development of mechanisms to collect accurate and comparable data [27]. Researchers in a particular discipline will be able to identify appropriate indicators to measure the impact of their kind of work. If the approach to establishing measurements is not sufficiently broad in scope, there is a danger that readily available indicators will be used for evaluations, even if they do not adequately measure societal impact [16]. There is also a risk that scientists might base their research projects and grant applications on readily available and ultimately misleading indicators. As Hansson and Ernø-Kjølhede point out, “the obvious danger is that researchers and universities intensify their efforts to participate in activities that can be directly documented rather than activities that are harder to document but in reality may be more useful to society” [19]. Numerous studies have documented that scientists already base their activities on the criteria and indicators that are applied in evaluations [19, 28, 29]. Until reliable and robust methods to assess impact are developed, it makes sense to use expert panels to qualitatively assess the societal relevance of research in the first instance. Rymer has noted that, “just as peer review can be useful in assessing the quality of academic work in an academic context, expert panels with relevant experience in different areas of potential impact can be useful in assessing the difference that research has made” [18]. Whether scientists like it or not, the societal impact of their research is an increasingly important factor in attracting public funding and support of basic research. This has always been the case, but new research into measures that can assess the societal impact of research would provide better qualitative and quantitative data on which funding agencies and politicians could base decisions. At the same time, such measurement should not come at the expense of basic, blue-sky research, given that it is and will remain near-impossible to predict the impact of certain research projects years or decades down the line.  相似文献

12.
13.
Robin Skinner  Steven McFaull 《CMAJ》2012,184(9):1029-1034

Background:

Suicide is the second leading cause of death for young Canadians (10–19 years of age) — a disturbing trend that has shown little improvement in recent years. Our objective was to examine suicide trends among Canadian children and adolescents.

Methods:

We conducted a retrospective analysis of standardized suicide rates using Statistics Canada mortality data for the period spanning from 1980 to 2008. We analyzed the data by sex and by suicide method over time for two age groups: 10–14 year olds (children) and 15–19 year olds (adolescents). We quantified annual trends by calculating the average annual percent change (AAPC).
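As a rough illustration of the trend measure named above, the sketch below fits a log-linear trend to yearly rates and converts the slope into an average annual percent change (AAPC). The yearly rates are invented for the example, and the authors' exact regression and confidence-interval method are not specified in this abstract.

```python
# Hypothetical illustration of an AAPC calculation: fit log(rate) = a + b*year,
# then AAPC = (exp(b) - 1) * 100. The rates below are invented for the example.
import numpy as np

years = np.arange(1980, 2009)                    # study period, 1980-2008
rates = 6.2 * np.exp(-0.01 * (years - 1980))     # illustrative rates per 100 000

slope, intercept = np.polyfit(years, np.log(rates), 1)
aapc = (np.exp(slope) - 1) * 100
print(f"AAPC = {aapc:.1f}% per year")            # about -1.0% for these numbers
```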

Results:

We found an average annual decrease of 1.0% (95% confidence interval [CI] −1.5 to −0.4) in the suicide rate for children and adolescents, but stratification by age and sex showed significant variation. We saw an increase in suicide by suffocation among female children (AAPC = 8.1%, 95% CI 6.0 to 10.4) and adolescents (AAPC = 8.0%, 95% CI 6.2 to 9.8). In addition, we noted a decrease in suicides involving poisoning and firearms during the study period.

Interpretation:

Our results show that suicide rates in Canada are increasing among female children and adolescents and decreasing among male children and adolescents. Limiting access to lethal means has some potential to mitigate risk. However, suffocation, which has become the predominant method for committing suicide for these age groups, is not amenable to this type of primary prevention.Suicide was ranked as the second leading cause of death among Canadians aged 10–34 years in 2008.1 It is recognized that suicidal behaviour and ideation is an important public health issue among children and adolescents; disturbingly, suicide is a leading cause of Canadian childhood mortality (i.e., among youths aged 10–19 years).2,3Between 1980 and 2008, there were substantial improvements in mortality attributable to unintentional injury among 10–19 year olds, with rates decreasing from 37.7 per 100 000 to 10.7 per 100 000; suicide rates, however, showed less improvement, with only a small reduction during the same period (from 6.2 per 100 000 in 1980 to 5.2 per 100 000 in 2008).1Previous studies that looked at suicides among Canadian adolescents and young adults (i.e., people aged 15–25 years) have reported rates as being generally stable over time, but with a marked increase in suicides by suffocation and a decrease in those involving firearms.2 There is limited literature on self-inflicted injuries among children 10–14 years of age in Canada and the United States, but there appears to be a trend toward younger children starting to self-harm.3,4 Furthermore, the trend of suicide by suffocation moving to younger ages may be partly due to cases of the “choking game” (self-strangulation without intent to cause permanent harm) that have been misclassified as suicides.57Risk factors for suicidal behaviour and ideation in young people include a psychiatric diagnosis (e.g., depression), substance abuse, past suicidal behaviour, family factors and other life stressors (e.g., relationships, bullying) that have complex interactions.8 A suicide attempt involves specific intent, plans and availability of lethal means, such as firearms,9 elevated structures10 or substances.11 The existence of “pro-suicide” sites on the Internet and in social media12 may further increase risk by providing details of various ways to commit suicide, as well as evaluations ranking these methods by effectiveness, amount of pain involved and length of time to produce death.1315Our primary objective was to present the patterns of suicide among children and adolescents (aged 10–19 years) in Canada.  相似文献   

14.

Background:

The gut microbiota is essential to human health throughout life, yet the acquisition and development of this microbial community during infancy remains poorly understood. Meanwhile, there is increasing concern over rising rates of cesarean delivery and insufficient exclusive breastfeeding of infants in developed countries. In this article, we characterize the gut microbiota of healthy Canadian infants and describe the influence of cesarean delivery and formula feeding.

Methods:

We included a subset of 24 term infants from the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort. Mode of delivery was obtained from medical records, and mothers were asked to report on infant diet and medication use. Fecal samples were collected at 4 months of age, and we characterized the microbiota composition using high-throughput DNA sequencing.

Results:

We observed high variability in the profiles of fecal microbiota among the infants. The profiles were generally dominated by Actinobacteria (mainly the genus Bifidobacterium) and Firmicutes (with diverse representation from numerous genera). Compared with breastfed infants, formula-fed infants had increased richness of species, with overrepresentation of Clostridium difficile. Escherichia–Shigella and Bacteroides species were underrepresented in infants born by cesarean delivery. Infants born by elective cesarean delivery had particularly low bacterial richness and diversity.
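For readers unfamiliar with the richness and diversity summaries mentioned above, the sketch below shows how such measures are computed from taxon counts. The genus names and counts are invented for illustration; the study's actual pipeline (sequence processing, rarefaction, choice of diversity index) is not described in this abstract.

```python
# Illustrative only: species richness and Shannon diversity from taxon counts.
# The counts below are hypothetical, not study data.
import math

counts = {"Bifidobacterium": 520, "Clostridium": 130,
          "Bacteroides": 40, "Escherichia": 10}

richness = sum(1 for c in counts.values() if c > 0)   # number of taxa observed
total = sum(counts.values())
shannon = -sum((c / total) * math.log(c / total)      # Shannon index H'
               for c in counts.values() if c > 0)

print(f"richness = {richness}, Shannon H' = {shannon:.2f}")
```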

Interpretation:

These findings advance our understanding of the gut microbiota in healthy infants. They also provide new evidence for the effects of delivery mode and infant diet as determinants of this essential microbial community in early life.The human body harbours trillions of microbes, known collectively as the “human microbiome.” By far the highest density of commensal bacteria is found in the digestive tract, where resident microbes outnumber host cells by at least 10 to 1. Gut bacteria play a fundamental role in human health by promoting intestinal homeostasis, stimulating development of the immune system, providing protection against pathogens, and contributing to the processing of nutrients and harvesting of energy.1,2 The disruption of the gut microbiota has been linked to an increasing number of diseases, including inflammatory bowel disease, necrotizing enterocolitis, diabetes, obesity, cancer, allergies and asthma.1 Despite this evidence and a growing appreciation for the integral role of the gut microbiota in lifelong health, relatively little is known about the acquisition and development of this complex microbial community during infancy.3Two of the best-studied determinants of the gut microbiota during infancy are mode of delivery and exposure to breast milk.4,5 Cesarean delivery perturbs normal colonization of the infant gut by preventing exposure to maternal microbes, whereas breastfeeding promotes a “healthy” gut microbiota by providing selective metabolic substrates for beneficial bacteria.3,5 Despite recommendations from the World Health Organization,6 the rate of cesarean delivery has continued to rise in developed countries and rates of breastfeeding decrease substantially within the first few months of life.7,8 In Canada, more than 1 in 4 newborns are born by cesarean delivery, and less than 15% of infants are exclusively breastfed for the recommended duration of 6 months.9,10 In some parts of the world, elective cesarean deliveries are performed by maternal request, often because of apprehension about pain during childbirth, and sometimes for patient–physician convenience.11The potential long-term consequences of decisions regarding mode of delivery and infant diet are not to be underestimated. Infants born by cesarean delivery are at increased risk of asthma, obesity and type 1 diabetes,12 whereas breastfeeding is variably protective against these and other disorders.13 These long-term health consequences may be partially attributable to disruption of the gut microbiota.12,14Historically, the gut microbiota has been studied with the use of culture-based methodologies to examine individual organisms. However, up to 80% of intestinal microbes cannot be grown in culture.3,15 New technology using culture-independent DNA sequencing enables comprehensive detection of intestinal microbes and permits simultaneous characterization of entire microbial communities. Multinational consortia have been established to characterize the “normal” adult microbiome using these exciting new methods;16 however, these methods have been underused in infant studies. 
Because early colonization may have long-lasting effects on health, infant studies are vital.3,4 Among the few studies of infant gut microbiota using DNA sequencing, most were conducted in restricted populations, such as infants delivered vaginally,17 infants born by cesarean delivery who were formula-fed18 or preterm infants with necrotizing enterocolitis.19Thus, the gut microbiota is essential to human health, yet the acquisition and development of this microbial community during infancy remains poorly understood.3 In the current study, we address this gap in knowledge using new sequencing technology and detailed exposure assessments20 of healthy Canadian infants selected from a national birth cohort to provide representative, comprehensive profiles of gut microbiota according to mode of delivery and infant diet.  相似文献   

15.
Schultz AS  Finegan B  Nykiforuk CI  Kvern MA 《CMAJ》2011,183(18):E1334-E1344

Background:

Many hospitals have adopted smoke-free policies on their property. We examined the consequences of such polices at two Canadian tertiary acute-care hospitals.

Methods:

We conducted a qualitative study using ethnographic techniques over a six-month period. Participants (n = 186) shared their perspectives on and experiences with tobacco dependence and managing the use of tobacco, as well as their impressions of the smoke-free policy. We interviewed inpatients individually from eight wards (n = 82), key policy-makers (n = 9) and support staff (n = 14) and held 16 focus groups with health care providers and ward staff (n = 81). We also reviewed ward documents relating to tobacco dependence and looked at smoking-related activities on hospital property.

Results:

Noncompliance with the policy and exposure to secondhand smoke were ongoing concerns. People’s impressions of the use of tobacco varied, including divergent opinions as to whether such use was a bad habit or an addiction. Treatment for tobacco dependence and the management of symptoms of withdrawal were offered inconsistently. Participants voiced concerns over patient safety and over patients leaving the ward to smoke.

Interpretation:

Policies mandating smoke-free hospital property have important consequences beyond noncompliance, including concerns over patient safety and disruptions to care. Without adequately available and accessible support for withdrawal from tobacco, patients will continue to face personal risk when they leave hospital property to smoke.Canadian cities and provinces have passed smoking bans with the goal of reducing people’s exposure to secondhand smoke in workplaces, public spaces and on the property adjacent to public buildings.1,2 In response, Canadian health authorities and hospitals began implementing policies mandating smoke-free hospital property, with the goals of reducing the exposure of workers, patients and visitors to tobacco smoke while delivering a public health message about the dangers of smoking.25 An additional anticipated outcome was the reduced use of tobacco among patients and staff. The impetuses for adopting smoke-free policies include public support for such legislation and the potential for litigation for exposure to second-hand smoke.2,4Tobacco use is a modifiable risk factor associated with a variety of cancers, cardiovascular diseases and respiratory conditions.611 Patients in hospital who use tobacco tend to have more surgical complications and exacerbations of acute and chronic health conditions than patients who do not use tobacco.611 Any policy aimed at reducing exposure to tobacco in hospitals is well supported by evidence, as is the integration of interventions targetting tobacco dependence.12 Unfortunately, most of the nearly five million Canadians who smoke will receive suboptimal treatment,13 as the routine provision of interventions for tobacco dependence in hospital settings is not a practice norm.1416 In smoke-free hospitals, two studies suggest minimal support is offered for withdrawal, 17,18 and one reports an increased use of nicotine-replacement therapy after the implementation of the smoke-free policy.19Assessments of the effectiveness of smoke-free policies for hospital property tend to focus on noncompliance and related issues of enforcement.17,20,21 Although evidence of noncompliance and litter on hospital property2,17,20 implies ongoing exposure to tobacco smoke, half of the participating hospital sites in one study reported less exposure to tobacco smoke within hospital buildings and on the property.18 In addition, there is evidence to suggest some decline in smoking among staff.18,19,21,22We sought to determine the consequences of policies mandating smoke-free hospital property in two Canadian acute-care hospitals by eliciting lived experiences of the people faced with enacting the policies: patients and health care providers. In addition, we elicited stories from hospital support staff and administrators regarding the policies.  相似文献   

16.
The safety of genetically modified crops remains a contested issue, given the potential risk for human health and the environment. To further reduce any risks and alleviate public concerns, terminator technology could be used both to tag and control genetically modified plants.Efforts to design genetically modified (GM) crops have focused on minimizing the amount of foreign DNA present in the genome. One reason for this development is to address consumer concerns about unforeseeable effects of either the transgenes or the technology used to introduce them into plant genomes. The latter risk has not been completely assessed, but the former can be dealt with both by minimizing the amount of the foreign DNA inserted and by taking precautionary steps in the selection and adaptation of the transgene itself. In fact, first-generation GM crops contain many unnecessary DNA sequences, such as antibiotic-resistance genes and T-DNA border sequences.These efforts, however, are not only in response to consumer concerns; they will also be helpful to GM engineers. To make maize tolerant to high sodium concentrations, for example, it only requires the introduction of a salt-tolerance gene. Other DNA sequences, such as antibiotic-resistance genes, are only needed to select transgenic cells. Afterwards, they become unnecessary at best and detrimental at worst, as they preclude the use of the same antibiotic to select for the introduction of further foreign DNA sequences. New methods are already being used to transfer transgenes and cisgenes, and to introduce specific mutations that do not require any marker genes [1]. Along with these efforts, discussions have begun about establishing a precise definition of GM [2,3]. The main argument is that if the plant does not contain any transgenes, it is not subject to GM regulation [3,4].But are these new endeavours sufficient to prevent potential harm to humans or the environment? The accident at the nuclear reactor in Fukushima, Japan, in March 2011, demonstrated that a disastrous event can overcome even supposedly safe design—the power plant was not built to withstand the double impact of an earthquake and a tsunami. Thus, rather than simply minimizing risk, we need to develop an emergency control that can shut down everything if the system gets out of control. In the case of GM organisms, no matter how they have been created, we need to be able to trace every individual plant and control its biological activity.In short, we need a tag that identifies a GM crop as such. It is usually possible to identify a GM plant or plant product by using PCR-based analysis to detect the transgene or selection markers. However, new techniques that enable site-specific mutagenesis or the introduction of cisgenes without selection markers would generate ‘stealth'' GM products that are unidentifiable. Furthermore, private companies do not necessarily share information about the nature of the transgene, the selection markers and the exact technique used to generate a certain plant line, which would also make detection impossible. Thus, an easily identifiable tag would help to identify GM crops no matter how and where they have been created.This should be a germination control or ‘terminator'' gene, such as that developed by Monsanto under the moniker ‘genetic use restriction technology'' (GURT), but which was abandoned for commercial use after severe protests [5,6]. 
One variety of GURT would make the viability of the transgene dependent on treatment with a specific chemical; it would be easy to design GM plants based on GURT that would stop the production of viable seeds if not regularly treated with the activator compound. As transgenic plants with the terminator gene would not grow without the reagent, the escape of transgenes into the wild would be highly unlikely. If a particular GM line were found to be harmful to human health or the environment after release, the only action necessary to eliminate it would be to withhold treatment. Natural organisms cannot be controlled as easily. Invasive species, such as Caulerpa taxifolia, have disastrous effects on the ecosystems into which they are introduced, and human efforts to keep them under control have largely failed. Thus, the most important aspect of artificial products is that they must be controllable. The nuclear disasters at Fukushima and Chernobyl happened because humans lost control over the reactors. As with reactors, GM crops are artificial constructs over which we must maintain control. Thus, the international community, including plant breeding companies, should discuss the possibility of tagging all GM crops with the terminator system. As the tag and the introduced gene must be inseparable, GM engineers must insert any designed transgene with a terminator tag in tandem. Such GM organisms could then be considered the ‘same in kind’ as non-GM plants with a similar phenotype after appropriate risk and safety assessment. Of course, possibilities other than the terminator system should be investigated; however, at the moment it is the best option. If the idea of a general tag for all GM plants were accepted, researchers could then improve the tag to make it more compact and safer than Monsanto’s terminator technology and give us even stricter control over GM organisms. The terminator is not a terminus, but a start.

17.

Background:

Acute kidney injury is a serious complication of elective major surgery. Acute dialysis is used to support life in the most severe cases. We examined whether rates and outcomes of acute dialysis after elective major surgery have changed over time.

Methods:

We used data from Ontario’s universal health care databases to study all consecutive patients who had elective major surgery at 118 hospitals between 1995 and 2009. Our primary outcomes were acute dialysis within 14 days of surgery, death within 90 days of surgery and chronic dialysis for patients who did not recover kidney function.

Results:

A total of 552 672 patients underwent elective major surgery during the study period, 2231 of whom received acute dialysis. The incidence of acute dialysis increased steadily from 0.2% in 1995 (95% confidence interval [CI] 0.15–0.2) to 0.6% in 2009 (95% CI 0.6–0.7). This increase was primarily in cardiac and vascular surgeries. Among patients who received acute dialysis, 937 died within 90 days of surgery (42.0%, 95% CI 40.0–44.1), with no change in 90-day survival over time. Among the 1294 patients who received acute dialysis and survived beyond 90 days, 352 required chronic dialysis (27.2%, 95% CI 24.8–29.7), with no change over time.
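The confidence intervals quoted above are consistent with a simple normal-approximation (Wald) interval for a binomial proportion; the abstract does not state which estimator was used, so the following Python sketch is illustrative only.

```python
from math import sqrt

def proportion_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and normal-approximation (Wald) 95% CI for a proportion."""
    p = events / n
    se = sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# 937 deaths within 90 days among 2231 patients who received acute dialysis
p, lo, hi = proportion_ci(937, 2231)
print(f"90-day mortality: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# prints roughly 42.0% (40.0% to 44.0%); compare the reported 42.0%, 95% CI 40.0-44.1
```

The same formula, applied to 352 of 1294 survivors requiring chronic dialysis, reproduces the reported 27.2% (95% CI 24.8–29.7) to within rounding.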

Interpretation:

The use of acute dialysis after cardiac and vascular surgery has increased substantially since 1995. Studies focusing on interventions to better prevent and treat perioperative acute kidney injury are needed. More than 230 million elective major surgeries are done annually worldwide.1 Acute kidney injury is a serious complication of major surgery. It represents a sudden loss of kidney function that affects morbidity, mortality and health care costs.2 Dialysis is used for the most severe forms of acute kidney injury. In the nonsurgical setting, the incidence of acute dialysis has steadily increased over the last 15 years, and patients are now more likely to survive to discharge from hospital.3–5 Similarly, in the surgical setting, the incidence of acute dialysis appears to be increasing over time,6–10 with declining in-hospital mortality.8,10,11 Although previous studies have improved our understanding of the epidemiology of acute dialysis in the surgical setting, several questions remain. Many previous studies were conducted at a single centre, thereby limiting their generalizability.6,12–14 Most multicentre studies were conducted in the nonsurgical setting and used diagnostic codes for acute kidney injury not requiring dialysis; however, these codes can be inaccurate.15,16 In contrast, a procedure such as dialysis is easily determined. The incidence of acute dialysis after elective surgery is of particular interest given the need for surgical consent, the severe nature of the event and the potential for mitigation. The need for chronic dialysis among patients who do not recover renal function after surgery has been poorly studied, yet this condition has a major effect on patient survival and quality of life.17 For these reasons, we studied secular trends in acute dialysis after elective major surgery, focusing on incidence, 90-day mortality and the need for chronic dialysis.

18.
The authors of “The anglerfish deception” respond to the criticism of their article. EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.70. EMBO reports (2012) 13(2), 100–105; doi: 10.1038/embor.2011.254. Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA’s environmental risk assessment (ERA) procedures. In our article for EMBO reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal’s false political inflation of science, which denies the normative commitments inevitable in risk assessment (RA). Unfortunately the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA’s central control over RA, they entirely miss the relevant point. This is the unacknowledged policy—normative commitments being made before, and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified. The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading—as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. They seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters, of RA policy, which should be established in a broadly deliberative manner “in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent” [1]. This was neither recognized nor permitted in the proposed EC reform—a central point that our respondents fail to recognize. In dismissing our criticism that comparative safety assessment appears as a ‘first step’ in defining ERA, according to the new EFSA ERA guidelines, which we correctly referred to in our text but incorrectly referenced in the bibliography [5], our respondents again ignore this widely accepted ‘framing’ or ‘problem formulation’ point for science. The choice of comparator has normative implications as it immediately commits to a definition of what is normal and, implicitly, acceptable. Therefore the specific form and purpose of the comparison(s) is part of the validity question. Their claim that we are against comparison as a scientific step is incorrect—of course comparison is necessary. This simply acts as a shield behind which to avoid our and others’ [6] challenge to their self-appointed discretion to define—or worse, allow applicants to define—what counts in the comparative frame. Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as ‘science’. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5]—in both the representational figure (p.11) and the text “the outcome of the comparative safety assessment allows the determination of those ‘identified’ characteristics that need to be assessed [...] and will further structure the ERA” (p.13).
Second, despite their claims to the contrary, ‘comparative safety assessment’, effectively a resurrection of substantial equivalence, is a concept taken from consumer health RA, controversially applied to the more open-ended processes of ERA, and one that has in fact been long discredited if used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10]. The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similar unaccountable RA steps introduced into the ERA Guidance, such as judgement of ‘biological relevance’, ‘ecological relevance’, or ‘familiarity’. We cannot address these here, but our basic point is that such endless ‘methodological’ elaborations of the kind that our EFSA colleagues perform only obscure the institutional changes needed to properly address the normative questions for policy-engaged science. Our respondents deny our claim concerning the singular form of science the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the ‘one door, one key’ policy framework for science, deriving from the Single Market logic, which forces such singularity. While this might be legitimate policy, it is not scientific. It is political economy. Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point that the EC-proposed legislative reform would only exacerbate their problem. Ignoring the normative dimensions of regulatory science and siphoning off scientific debate and its normative issues to a select expert panel—which despite claiming independence faces an EU Ombudsman challenge [12] and European Parliament refusal to discharge their 2010 budget, because of continuing questions over conflicts of interest [13,14]—will not achieve quality science. What is required are effective institutional mechanisms and cultural norms that identify, and deliberatively address, otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA’s sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.

19.

Background:

Studies suggest that Aboriginal people in Canada are over-represented among people using injection drugs. The factors associated with transitioning to the use of injection drugs among young Aboriginal people in Canada are not well understood.

Methods:

The Cedar Project is a prospective cohort study (2003–2007) involving young Aboriginal people in Vancouver and Prince George, British Columbia, who use illicit drugs. Participants’ venous blood samples were tested for antibodies to HIV and the hepatitis C virus, and drug use was confirmed using saliva screens. The primary outcomes were use of injection drugs at baseline and transition to injection drug use in the six months before each follow-up interview.

Results:

Of 605 participants, 335 (55.4%) reported using injection drugs at baseline. Young people who used injection drugs tended to be older than those who did not, female and in a relationship. Participants who injected drugs were also more likely than those who did not to have been denied shelter because of their drug use, to have been incarcerated, to have a mental illness and to have been involved in sex work. Transition to injection drug use occurred among 39 (14.4%) participants, yielding a crude incidence rate of 19.8% and an incidence density of 11.5 participants per 100 person-years. In unadjusted analysis, transition to injection drug use was associated with being female (odds ratio [OR] 1.98, 95% confidence interval (CI) 1.06–3.72), involved in sex work (OR 3.35, 95% CI 1.75–6.40), having a history of sexually transmitted infection (OR 2.01, 95% CI 1.07–3.78) and using drugs with sex-work clients (OR 2.51, 95% CI 1.19–5.32). In adjusted analysis, transition to injection drug use remained associated with involvement in sex work (adjusted OR 3.94, 95% CI 1.45–10.71).
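The unadjusted odds ratios and the incidence density reported above follow standard epidemiological formulas; a minimal Python sketch, using hypothetical 2×2 counts (the abstract does not reproduce the underlying tables) and an assumed person-time denominator:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Crude odds ratio with Woolf (log-based) 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases, c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - z * se_log), exp(log(or_) + z * se_log)

def incidence_density(events: int, person_years: float) -> float:
    """Events per 100 person-years of observed follow-up time."""
    return 100 * events / person_years

# Hypothetical counts for illustration only; the study's actual tables are not shown in the abstract.
print(odds_ratio_ci(25, 80, 14, 90))   # crude OR with its 95% CI
print(incidence_density(39, 339.0))    # ~11.5 per 100 person-years if ~339 person-years were observed
```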

Interpretation:

The initiation rate for injection drug use of 11.5 participants per 100 person-years among participants in the Cedar Project is distressing. Young Aboriginal women in our study were twice as likely to inject drugs as men, and participants who injected drugs at baseline were more than twice as likely as those who did not to be involved in sex work. Aboriginal leadership in Canada is deeply concerned about substance use, more specifically injection drug use and its association with the spread of HIV and the hepatitis C virus among Aboriginal young people.1,2 Recent studies in Canada suggest that Aboriginal people are over-represented among people who use injection drugs.3,4 For Aboriginal young people in Canada under the age of 24 years, injection drug use accounts for the majority of infections with the hepatitis C virus (70%–80%)5,6 and over half (59%) of HIV infections.7 Indigenous scholars have stated that research on substance use within Aboriginal communities must consider the context of colonization, including the intergenerational impacts of the residential school and child welfare systems.8–11 It is now well documented that Aboriginal children experienced extensive psychological, sexual, physical and emotional abuses within those systems.12,13 As former students of residential schools raise children and grandchildren, the intergenerational effects of abuse and familial fragmentation are evident in communities where interpersonal violence and drug dependence are pervasive.14–16 A priority for preventing infections with HIV and hepatitis C among young Aboriginal people is the development of programs and rights-based,18,19 youth-informed17 policies aimed at preventing the use of injection drugs. However, research to date has not provided sufficient evidence to inform such development.2,19 Concerns over this paucity of information led to the launch of a two-city cohort study in 2003 to address HIV-related vulnerabilities among young Aboriginal people in British Columbia — a unique study centred on at-risk youth and supported by and partnered with Aboriginal investigators and collaborators. We report here baseline and longitudinal data on the factors associated with injection drug use and the transition to injection drug use to inform the development of prevention programs and policies.

20.

Background:

Previous studies have suggested that the immunochemical fecal occult blood test has superior specificity for detecting bleeding in the lower gastrointestinal tract even if bleeding occurs in the upper tract. We conducted a large population-based study involving asymptomatic adults in Taiwan, a population with prevalent upper gastrointestinal lesions, to confirm this claim.

Methods:

We conducted a prospective cohort study involving asymptomatic people aged 18 years or more in Taiwan recruited to undergo an immunochemical fecal occult blood test, colonoscopy and esophagogastroduodenoscopy between August 2007 and July 2009. We compared the prevalence of lesions in the lower and upper gastrointestinal tracts between patients with positive and negative fecal test results. We also identified risk factors associated with a false-positive fecal test result.

Results:

Of the 2796 participants, 397 (14.2%) had a positive fecal test result. The sensitivity of the test for predicting lesions in the lower gastrointestinal tract was 24.3%, the specificity 89.0%, the positive predictive value 41.3%, the negative predictive value 78.7%, the positive likelihood ratio 2.22, the negative likelihood ratio 0.85 and the accuracy 73.4%. The prevalence of lesions in the lower gastrointestinal tract was higher among those with a positive fecal test result than among those with a negative result (41.3% v. 21.3%, p < 0.001). The prevalence of lesions in the upper gastrointestinal tract did not differ significantly between the two groups (20.7% v. 17.5%, p = 0.12). Almost all of the participants found to have colon cancer (27/28, 96.4%) had a positive fecal test result; in contrast, none of the three found to have esophageal or gastric cancer had a positive fecal test result (p < 0.001). Among those with a negative finding on colonoscopy, the risk factors associated with a false-positive fecal test result were use of antiplatelet drugs (adjusted odds ratio [OR] 2.46, 95% confidence interval [CI] 1.21–4.98) and a low hemoglobin concentration (adjusted OR 2.65, 95% CI 1.62–4.33).
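All of the performance figures above derive from a single 2×2 table of fecal test result against colonoscopy finding. A minimal Python sketch of the calculations, with counts back-calculated from the reported percentages (so they may differ slightly from the study's actual table because of rounding):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-performance measures from a 2x2 table of test result vs. reference finding."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),
        "lr_minus": (1 - sens) / spec,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts back-calculated from the abstract (397 positive and 2399 negative tests, PPV 41.3%, NPV 78.7%)
print(diagnostic_metrics(tp=164, fp=233, fn=511, tn=1888))
# roughly: sensitivity 0.243, specificity 0.890, PPV 0.413, NPV 0.787, LR+ 2.2, LR- 0.85, accuracy 0.734
```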

Interpretation:

The immunochemical fecal occult blood test was specific for predicting lesions in the lower gastrointestinal tract. However, the test did not adequately predict lesions in the upper gastrointestinal tract. The fecal occult blood test is a convenient tool to screen for asymptomatic gastrointestinal bleeding.1 When the test result is positive, colonoscopy is the strategy of choice to investigate the source of bleeding.2,3 However, 13%–42% of patients can have a positive test result but a negative colonoscopy,4 and it has not yet been determined whether asymptomatic patients should then undergo evaluation of the upper gastrointestinal tract. Previous studies showed that the frequency of lesions in the upper gastrointestinal tract was comparable to or even higher than that of colonic lesions5–9 and that the use of esophagogastroduodenoscopy may change clinical management.10,11 Some studies showed that evaluation of the upper gastrointestinal tract helped to identify important lesions in symptomatic patients and those with iron deficiency anemia;12,13 however, others concluded that esophagogastroduodenoscopy was unjustified because important findings in the upper gastrointestinal tract were rare14–17 and sometimes irrelevant to the results of fecal occult blood testing.18–21 This controversy is related to the heterogeneity of study populations and to the limitations of the formerly used guaiac-based fecal occult blood test,5–20 which was not able to distinguish bleeding in the lower gastrointestinal tract from that originating in the upper tract. The guaiac-based fecal occult blood test is increasingly being replaced by the immunochemical-based test. The latter is recommended for detecting bleeding in the lower gastrointestinal tract because it reacts with human globin, a protein that is digested by enzymes in the upper gastrointestinal tract.22 With this advantage, the occurrence of a positive fecal test result and a negative finding on colonoscopy is expected to decrease. We conducted a population-based study in Taiwan to verify the performance of the immunochemical fecal occult blood test in predicting lesions in the lower gastrointestinal tract and to confirm that results are not confounded by the presence of lesions in the upper tract. In Taiwan, the incidence of colorectal cancer is rapidly increasing, and Helicobacter pylori-related lesions in the upper gastrointestinal tract remain highly prevalent.23 Same-day bidirectional endoscopies are therefore commonly used for cancer screening.24 This screening strategy provides an opportunity to evaluate the performance of the immunochemical fecal occult blood test.
