Similar articles
20 similar articles found (search time: 866 ms)
1.
Wolinsky H. EMBO Reports 2012, 13(4): 308–312
Genomics has become a powerful tool for conservationists to track individual animals, analyse populations and inform conservation management. But as helpful as these techniques are, they are not a substitute for stricter measures to protect threatened species.
You might call him Queequeg. Like Herman Melville's character in the 1851 novel Moby Dick, Howard Rosenbaum plies the seas in search of whales following old whaling charts. Standing on the deck of a 12 m boat, he brandishes a crossbow with hollow-tipped darts to harpoon the flanks of the whales as they surface to breathe (Fig 1). “We liken it to a mosquito bite. Sometimes there's a reaction. Sometimes the whales are competing to mate with a female, so they don't even react to the dart,” explained Rosenbaum, a conservation biologist and geneticist, and Director of the New York City-based Wildlife Conservation Society's Ocean Giants programme. Rosenbaum and his colleagues use the darts to collect half-gram biopsy samples of whale epidermis and fat—about the size of a human fingernail—to extract DNA as part of international efforts to save the whales.
Figure 1: Howard Rosenbaum with a crossbow to obtain skin samples from whales. © Wildlife Conservation Society.
Like Rosenbaum, many conservation biologists and wildlife managers increasingly rely on DNA analysis tools to identify species, determine sex or analyse pedigrees. George Amato, Director of the Sackler Institute for Comparative Genomics at the American Museum of Natural History in New York, NY, USA, said that during his 25-year career, genetic tools have become increasingly important for conservation biology and related fields. Genetic information taken from individual animals to the extent of covering whole populations now plays a valuable part in making decisions about levels of protection for certain species or populations and managing conflicts between humans and conservation goals.
Moreover, Amato expects the use and importance of genetics to grow even more, given that conservation of biodiversity has become a global issue. “My office overlooks Central Park. And there are conservation issues in Central Park: how do you maintain the diversity of plants and animals? I live in suburban Connecticut, where we want the highest levels of diversity within a suburban environment,” he said. “Then, you take this all the way to Central Africa. There are conservation issues across the entire spectrum of landscapes. With global climate change, techniques in genetics and molecular biology are being used to look at issues and questions across that entire landscape.”
Rosenbaum commented, “The genomic revolution has certainly changed the way we think about conservation and the questions we can ask and the things we can do. It can be a forensic analysis.” The data translates “into a conservation value where governments, conservationists, and people who actively protect these species can use this information to better protect these animals in the wild.”
Rosenbaum and colleagues from the Wildlife Conservation Society, the American Museum of Natural History and other organizations used genomics for the largest study so far—based on more than 1,500 DNA samples—about the population dynamics of humpback whales in the Southern hemisphere [1].
The researchers analysed population structure and migration rates; they found the highest gene flow between whales that breed on either side of the African continent and a lower gene flow between whales on opposite sides of the Atlantic, from the Brazilian coast to southern Africa. The group also identified an isolated population of fewer than 200 humpbacks in the northern Indian Ocean off the Arabian Peninsula, which are only distantly related to the humpbacks breeding off the coast of Madagascar and the eastern coast of southern Africa. “This group is a conservation priority,” Rosenbaum noted.
He said the US National Oceanic and Atmospheric Administration is using this information to determine whether whale populations are recovering or endangered and what steps should be taken to protect them. Through wildlife management and protection, humpbacks have rebounded to 60,000 or more individuals from fewer than 5,000 in the 1960s. Rosenbaum's data will, among other things, help to verify whether the whales should be managed as one large group or divided into subgroups.
He has also been looking at DNA collected from dolphins caught in fishing nets off the coast of Argentina. Argentine officials will be using the data to make recommendations about managing these populations. “We've been able to demonstrate that it's not one continuous population in Argentina. There might be multiple populations that merit conservation protection,” Rosenbaum explained.
The sea turtle is another popular creature that is high on conservationists' lists. To get DNA samples from sea turtles, population geneticist and wildlife biologist Nancy FitzSimmons from the University of Canberra in Australia reverts to a simpler method than Rosenbaum's harpoon. “Ever hear of a turtle rodeo?” she asked. FitzSimmons goes out on a speed boat in the Great Barrier Reef with her colleagues, dives into the water and wrangles a turtle on board so it can be measured, tagged, have its reproductive system examined with a laparoscope and a skin tag removed with small scissors or a scalpel for DNA analysis (Fig 2).
Figure 2: Geneticist Stewart Pittard measuring a sea turtle. © Michael P. Jensen, NOAA.
Like Rosenbaum, she uses DNA as a forensic tool to characterize individuals and populations [2]. “That's been a really important part, to be able to tell people who are doing the management, ‘This population is different from that one, and you need to manage them appropriately,'” FitzSimmons explained. The researchers have characterized the turtle's feeding grounds around Australia to determine which populations are doing well and which are not. If they see that certain groups are being harmed through predation or being trapped in ‘ghost nets' abandoned by fishermen, conservation measures can be implemented.
FitzSimmons, who started her career studying the genetics of bighorn sheep, has recently been using DNA technology in other areas, including finding purebred crocodiles to reintroduce them into a wetland ecosystem at Cat Tien National Park in Vietnam. “DNA is invaluable. You can't reintroduce animals that aren't purebred,” she said, explaining the rationale for looking at purebreds. “It's been quite important to do genetic studies to make sure you're getting the right animals to the right places.”
Geneticist Hans Geir Eiken, senior researcher at the Norwegian Institute for Agricultural and Environmental Research in Svanvik, Norway, does not wrestle with the animals he is interested in.
He uses a non-intrusive method to collect DNA from brown bears (Fig 3). “We collect the hair that is on the vegetation, on the ground. We can manage with only a single hair to get a DNA profile,” he said. “We can even identify mother and cub in the den based on the hairs. We can collect hairs from at least two different individuals and separate them afterwards and identify them as separate entities. Of course we also study how they are related and try to separate the bears into pedigrees, but that's more research and it's only occasionally that we do that for [bear] management.”
Figure 3: Bear management in Scandinavia. (A) A brown bear in a forest in Northern Finland. © Alexander Kopatz, Norwegian Institute for Agricultural and Environmental Research. (B) Faecal sampling. Monitoring of bears in Norway is performed in a non-invasive way by sampling hair and faecal samples in the field followed by DNA profiling. © Hans Geir Eiken. (C) Brown-bear hair sample obtained by so-called systematic hair trapping. A scent lure is put in the middle of a small area surrounded by barbed wire. To investigate the smell, the bears have to cross the wire and some hair will be caught. © Hans Geir Eiken. (D) A female, 2.5-year-old bear that was shot at Svanvik in the Pasvik Valley in Norway in August 2008. She and her brother had started to eat from garbage cans after they left their mother and the authorities gave permission to shoot them. The male was shot one month later after appearing in a schoolyard. © Hans Geir Eiken.
Eiken said the Norwegian government does not invest a lot of money in helicopters or other surveillance methods, and does not want to bother the animals. “A lot of disturbing things were done to bears. They were trapped. They were radio-collared,” he said. “I think as a researcher we should replace those approaches with non-invasive genetic techniques. We don't disturb them. We just collect samples from them.”
Eiken said that the bears pose a threat to two million sheep that roam freely around Norway. “Bears can kill several tons of them every day. This is not the case in the other countries where they don't have free-ranging sheep. That's why it's a big economic issue for us in Norway.” Wildlife managers therefore have to balance the fact that brown bears are endangered against the economic interests of sheep owners; about 10% of the brown bears are killed each year because they have caused damage, or as part of a restricted ‘licensed' hunting programme. Eiken said that within two days of a sheep kill, DNA analysis can determine which species killed the sheep, and, if it is a bear, which individual. “We protect the females with cubs. Without the DNA profiles, it would be easy to kill the females, which also take sheep of course.”
It is not only wildlife management that interests Eiken; he was part of a group led by Axel Janke at the Biodiversity and Climate Research Centre in Frankfurt am Main, Germany, which completed sequencing of the brown bear genome last year. The genome will be compared with that of the polar bear in the hope of finding genes involved in environmental adaptation. “The reason why [the comparison is] so interesting between the polar bear and the brown bear is that if you look at their evolution, it's [maybe] less than one million years when they separated. In genetics that's not a very long time,” Eiken said.
“But there are a lot of other issues that we think are even more interesting. Brown bears stay in their caves for 6 months in northern Norway. We think we can identify genes that allow the bear to be in the den for so long without dying from it.”
Like bears, wolves have also been clashing with humans for centuries. Hunters exterminated the natural wolf population in the Scandinavian Peninsula in the late nineteenth century as governments protected reindeer farming in northern Scandinavia. After the Swedish government finally banned wolf hunting in the 1960s, three wolves from Finland and Russia immigrated in the 1980s, and the population rose to 250, along with some other wolves that joined the highly inbred population. Sweden now has a database of all individual wolves, their pedigrees and breeding territories to manage the population and resolve conflicts with farmers. “Wolves are very good at causing conflicts with people. If a wolf takes a sheep or cattle, or it is in a recreation area, it represents a potential conflict. If a wolf is identified as a problem, then the local authorities may issue a license to shoot that wolf,” said Staffan Bensch, a molecular ecologist and ornithologist at Lund University in Sweden.
Again, it is the application of genomics tools that informs conservation management for the Scandinavian wolf population. Bensch, who is best known for his work on population genetics and genomics of migratory songbirds, was called on to apply his knowledge of microsatellite analysis. The investigators collect saliva from the site where a predator has chewed or bitten the prey, and extract mitochondrial DNA to determine whether a wolf, a bear, a fox or a dog has killed the livestock. The genetic information can potentially serve as a death warrant if a wolf is linked with a kill, and can be used to determine compensation for livestock owners.
Yet, not all wolves are equal. “If it's shown to be a genetically valuable wolf, then somehow more damage can be tolerated, such as a wolf taking livestock for instance,” Bensch said. “In the management policy, there is genetic analysis of every wolf that has a question on whether it should be shot or saved. An inbred Scandinavian wolf has no valuable genes so it's more likely to be shot.” Moreover, Bensch said that DNA analysis showed that in at least half the cases, dogs were the predator. “There are so many more dogs than there are wolves,” he said. “Some farmers are prejudiced that it is the wolf that killed their sheep.”
According to Dirk Steinke, lead scientist at Marine Barcode of Life and an evolutionary biologist at the Biodiversity Institute of Ontario at the University of Guelph in Canada, DNA barcoding could also contribute to conservation efforts. The technique—usually based on comparing the sequence of the mitochondrial CO1 gene with a database—could help to address the growing trade in shark fins for wedding feasts in China and among the Chinese diaspora, for example. Shark fins confiscated by Australian authorities from Indonesian ships are often a mess of tissue; barcoding helps them to identify the exact species. “As it turns out, some of them are really in the high-threat categories on the IUCN Red List of Threatened Species, so it was pretty concerning,” Steinke said. “That is something where barcoding turns into a tool where wildlife management can be done—even if they only get fragments of an animal.
I am not sure if this can prevent people from hunting those animals, but you can at least give them the feedback on whether they did something illegal or not.”
Steinke commented that DNA tools are handy not only for megafauna, but also for the humbler creatures in the sea, “especially when it comes to marine invertebrates. The larval stages are the only ones where they are mobile. If you're looking at wildlife management from an invertebrate perspective in the sea, then these mobile life stages are very important. Their barcoding might become very handy because for some of those groups it's the only reliable way of knowing what you're looking at.” Yet, this does not necessarily translate into better conservation: “Enforcement reactions come much quicker when it's for the charismatic megafauna,” Steinke conceded.
Moreover, reliable identification of animal species could even improve human health. For instance, Amato and colleagues from the US Centers for Disease Control and Prevention demonstrated for the first time the presence of zoonotic viruses in non-human primates seized in American airports [3]. They identified retroviruses (simian foamy virus) and/or herpesviruses (cytomegalovirus and lymphocryptovirus), which potentially pose a threat to human health. Amato suggested that surveillance of the wildlife trade by using barcodes would help facilitate prevention of disease. Moreover, DNA barcoding could also show whether the meat itself is from monkeys or other wild animals to distinguish illegally hunted and traded bushmeat—the term used for meat from wild animals in Africa—from legal meats.
Amato's group also applied barcoding to bluefin tuna, commonly used in sushi, which he described as the “bushmeat of the developed world”, as the species is being driven to near extinction through overharvesting. Developing barcodes for tuna could help to distinguish bluefin from yellowfin or other tuna species and could assist measures to protect the bluefin. “It can be used sort of like ‘Wildlife CSI' (after the popular American TV series),” he said.
In fact, barcoding for law enforcement is growing. Mitchell Eaton, assistant unit leader at the US Geological Survey New York Cooperative Fish and Wildlife Research Unit in Ithaca, NY, USA, noted that the technique is being used by US government agencies such as the FDA and the US Fish & Wildlife Service, as well as African and South American governments, to monitor the illegal export of pets and bushmeat. It is also used as part of the United Nations' Convention on Biological Diversity for cataloguing the Earth's biodiversity, identifying pathogens and monitoring endangered species. He expects that more law enforcement agencies around the world will routinely apply these tools: “This is actually easy technology to use.”
In that way, barcoding as well as genetics and its related technologies help to address a major problem in conservation and protection measures: to monitor the size, distribution and migration of populations of animals and to analyse their genetic diversity. It gives biologists and conservationists a better picture of what needs extra protective measures, and gives enforcement agencies a new and reliable forensic tool to identify and track illegal hunting and trade of protected species.
As helpful as these technologies are, however, they are not sufficient to protect severely threatened species such as the bluefin tuna and are therefore not a substitute for more political action and stricter enforcement.
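The barcoding workflow Steinke and Amato describe reduces, at its core, to comparing a query sequence against a reference database. The sketch below is a minimal illustration of that idea, not code from the article: it assumes Biopython is installed and uses NCBI's online BLAST service against GenBank, and the query fragment and thresholds are placeholders. In practice, barcoding studies usually query curated barcode reference libraries such as BOLD rather than all of GenBank, but the principle is the same.

```python
# Minimal sketch of DNA barcoding: identify an unknown tissue sample by
# comparing its mitochondrial CO1 fragment against GenBank with BLAST.
# Assumes Biopython is installed; the query sequence below is a placeholder.
from Bio.Blast import NCBIWWW, NCBIXML

def identify_barcode(co1_fragment: str, max_hits: int = 5):
    """Return the top BLAST hits (description, % identity) for a CO1 fragment."""
    # Submit the query to NCBI's online BLAST service against the 'nt' database.
    result_handle = NCBIWWW.qblast("blastn", "nt", co1_fragment)
    record = NCBIXML.read(result_handle)

    hits = []
    for alignment in record.alignments[:max_hits]:
        hsp = alignment.hsps[0]  # best-scoring local alignment for this subject
        identity = 100.0 * hsp.identities / hsp.align_length
        hits.append((alignment.title, round(identity, 1)))
    return hits

if __name__ == "__main__":
    # Placeholder fragment; a real barcode query would be ~650 bp of CO1 sequence.
    query = "TTTATACTTTATTTTGGTGCATGAGCAGGAATAGTAGGAACAGCC"
    for title, pct in identify_barcode(query):
        print(f"{pct:5.1f}%  {title}")
```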

2.
The French government has ambitious goals to make France a leading nation for synthetic biology research, but it still needs to put its money where its mouth is and provide the field with dedicated funding and other support.Synthetic biology is one of the most rapidly growing fields in the biological sciences and is attracting an increasing amount of public and private funding. France has also seen a slow but steady development of this field: the establishment of a national network of synthetic biologists in 2005, the first participation of a French team at the International Genetically Engineered Machine competition in 2007, the creation of a Master''s curriculum, an institute dedicated to synthetic and systems biology at the University of Évry-Val-d''Essonne-CNRS-Genopole in 2009–2010, and an increasing number of conferences and debates. However, scientists have driven the field with little dedicated financial support from the government.Yet the French government has a strong self-perception of its strengths and has set ambitious goals for synthetic biology. The public are told about a “new generation of products, industries and markets” that will derive from synthetic biology, and that research in the field will result in “a substantial jump for biotechnology” and an “industrial revolution”[1,2]. Indeed, France wants to compete with the USA, the UK, Germany and the rest of Europe and aims “for a world position of second or third”[1]. However, in contrast with the activities of its competitors, the French government has no specific scheme for funding or otherwise supporting synthetic biology[3]. Although we read that “France disposes of strong competences” and “all the assets needed”[2], one wonders how France will achieve its ambitious goals without dedicated budgets or detailed roadmaps to set up such institutions.In fact, France has been a straggler: whereas the UK and the USA have published several reports on synthetic biology since 2007, and have set up dedicated governing networks and research institutions, the governance of synthetic biology in France has only recently become an official matter. The National Research and Innovation Strategy (SNRI) only defined synthetic biology as a “priority” challenge in 2009 and created a working group in 2010 to assess the field''s developments, potentialities and challenges; the report was published in 2011[1].At the same time, the French Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) began a review of the field “to establish a worldwide state of the art and the position of our country in terms of training, research and technology transfer”. Its 2012 report entitled The Challenges of Synthetic Biology[2] assessed the main ethical, legal, economic and social challenges of the field. It made several recommendations for a “controlled” and “transparent” development of synthetic biology. This is not a surprise given that the development of genetically modified organisms and nuclear power in France has been heavily criticized for lack of transparency, and that the government prefers to avoid similar future controversies. Indeed, the French government seems more cautious today: making efforts to assess potential dangers and public opinion before actually supporting the science itself.Both reports stress the necessity of a “real” and “transparent” dialogue between science and society and call for “serene […] peaceful and constructive” public discussion. 
The proposed strategy has three aims: to establish an observatory, to create a permanent forum for discussion and to broaden the debate to include citizens [4]. An Observatory for Synthetic Biology was set up in January 2012 to collect information, mobilize actors, follow debates, analyse the various positions and organize a public forum. Let us hope that this observatory—unlike so many other structures—will have a tangible and durable influence on policy-making, public opinion and scientific practice.
Many structural and organizational challenges persist, as neither the National Agency for Research nor the National Centre for Scientific Research has defined the field as a funding priority and public–private partnerships are rare in France. Moreover, strict boundaries between academic disciplines impede interdisciplinary work, and synthetic biology is often included in larger research programmes rather than supported as a research field in itself. Although both the SNRI and the OPECST reports make recommendations for future developments—including setting up funding policies and platforms—it is not clear whether these will materialize, or when, where and what size of investments will be made.
France has ambitious goals for synthetic biology, but it remains to be seen whether the government is willing to put ‘meat on the bones' in terms of financial and institutional support. If not, these goals might come to be seen as unrealistic and downgraded, or they will be replaced with another vision that sees synthetic biology as something that only needs discussion and deliberation but no further investment. One thing is already certain: the future development of synthetic biology in France is a political issue.

3.
4.
Geneticists and historians collaborated recently to identify the remains of King Richard III of England, found buried under a car park. Genetics has many more contributions to make to history, but scientists and historians must learn to speak each other's languages.
The remains of King Richard III (1452–1485), who was killed with sword in hand at the Battle of Bosworth Field at the end of the Wars of the Roses, had lain undiscovered for centuries. Earlier this year, molecular biologists, historians, archaeologists and other experts from the University of Leicester, UK, reported that they had finally found his last resting place. They compared ancient DNA extracted from a scoliotic skeleton discovered under a car park in Leicester—once the site of Greyfriars church, where Richard was rumoured to be buried, but the location of which had been lost to time—with that of a seventeenth-generation nephew of King Richard: it was a match. Richard has captured the public imagination for centuries: Tudor-friendly playwright William Shakespeare (1564–1616) portrayed Richard as an evil hunchback who killed his nephews in order to ascend to the throne, whilst in succeeding years others have leapt to his defence and backed an effort to find his remains.
Molecular biologist Turi King, who led the Leicester team that extracted the DNA and tracked down a descendant of Richard's older sister, said that Richard's case shows how multi-disciplinary teams can join forces to answer history's questions. “There is a lot of talk about what meaning does it have,” she said. “It tells us where Richard III was buried; that the story that he was buried in Greyfriars is true. I think there are some people who [will] try and say: ‘well, it's going to change our view of him' […] It won't, for example, tell us about his personality or if he was responsible for the killing of the Princes in the Tower.”
The discovery and identification of Richard's skeleton made headlines around the world, but he is not the main prize when it comes to collaborations between historians and molecular biologists. Although some of the work has focused on high-profile historic figures—such as Louis XVI (1754–1793), the only French king to be executed, and Vlad the Impaler, the Transylvanian royal whose patronymic name inspired Bram Stoker's Dracula (Fig 1)—many other projects involve population studies. Application of genetics to history is revealing much about the ancestry and movements of groups of humans, from the fall of the Roman Empire to ancient China.
Figure 1: The use of molecular genetics to untangle history. Even when the historical record is robust, molecular biology can contribute to our understanding of important figures and their legacies and provide revealing answers to questions about ancient princes and kings.
Medieval historian Michael McCormick of Harvard University, USA, commented that historians have traditionally relied on studying records written on paper, sheepskin and papyrus. However, he and other historians are now teaming up with geneticists to read the historical record written down in the human genome and expand their portfolio of evidence.
“What we're seeing happening now—because of the tremendous impact from the natural sciences and particularly the application of genomics; what some of us are calling genomic archaeology—is that we're working back from modern genomes to past events reported in our genomes,” McCormick explained. “The boundaries between history and pre-history are beginning to dissolve. It's a really very, very exciting time.”
McCormick partnered with Mark Thomas, an evolutionary geneticist at University College London, UK, to try to unravel the mystery of one million Romano-Celtic men who went missing in Britain after the fall of the Roman Empire. Between the fourth and seventh centuries, Germanic tribes of Angles, Saxons and Jutes began to settle in Britain, replacing the Romano-British culture and forcing some of the original inhabitants to migrate to other areas. “You can't explain the predominance of the Germanic Y chromosome in England based on the population unless you imagine (a) that they killed all the male Romano-Celts or (b) there was what Mark called ‘sexual apartheid' and the conquerors mated preferentially with the local women. [The latter] seems to be the best explanation that I can see,” McCormick said of the puzzle.
Ian Barnes, a molecular palaeobiologist at Royal Holloway University of London, commented that McCormick studies an unusual period, for which both archaeological and written records exist. “I think archaeologists and historians are used to having conflicting evidence between the documentary record and the archaeological record. If we bring in DNA, the goal is to work out how to pair all the information together into the most coherent story.”
Patrick Geary, Professor of Western Medieval History at the Institute for Advanced Study in Princeton, New Jersey, USA, studies the migration period of Europe: a time in the first millennium when Germanic tribes, including the Goths, Vandals, Huns and Longobards, moved across Europe as the Roman Empire was declining. “We do not have detailed written information about these migrations or invasions or whatever one wants to call them. Primarily what we have are accounts written later on, some generations later, from the contemporary record. What we tend to have are things like sermons bemoaning the faith of people because God's wrath has brought the barbarians on them. Hardly the kind of thing that gives us an idea of exactly what is going on—are these really invasions, are they migrations, are they small military groups entering the Empire? And what are these ‘peoples': biologically related ethnic groups, or ad hoc confederations?” he said.
Geary thinks that in the absence of written records, DNA and archaeological records could help fill in the gaps. He gives the example of jewellery, belt buckles and weapons found in ancient graves in Hungary and Northern and Southern Italy, which suggest migrations rather than invasions: “If you find this kind of jewellery in one area and then you find it in a cemetery in another, does it mean that somebody was selling jewellery in these two areas? Does this mean that people in Italy—possibly because of political change—want to identify themselves, dress themselves in a new style? This is hotly debated,” Geary explained. Material goods can suggest a relationship between people but the confirmation will be found in their DNA.
“These are the kinds of questions that nobody has been able to ask because until very recently, DNA analysis simply could not be done and there were so many problems with it that this was just hopeless,” he explained. Geary has already collected some ancient DNA samples and plans to collect more from burial sites north and south of the Alps dating from the sixth century, hoping to sort out kinship relations and genetic profiles of populations.
King said that working with ancient DNA is a tricky business. “There are two reasons that mitochondrial DNA (mtDNA) is the DNA we wished to be able to analyse in [King] Richard. In the first instance, we had a female line relative of Richard III and mtDNA is passed through the female line. Fortunately, it's also the most likely bit of DNA that we'd be able to retrieve from the skeletal remains, as there are so many copies of it in the cell. After death, our DNA degrades, so mtDNA is easier to retrieve simply due to the sheer number of copies in each cell.”
Geary contrasted the analysis of modern and ancient DNA. He called modern DNA analysis “[…] almost an industrial thing. You send it off to a lab, you get it back, it's very mechanical.” Meanwhile, he described ancient DNA work as artisanal, because of degeneration and contamination. “Everything that touched it, every living thing, every microbe, every worm, every archaeologist leaves DNA traces, so it's a real mess.” He said the success rate for extracting ancient mtDNA from teeth and dense bones is only 35%. The rate for nuclear DNA is only 10%. “Five years ago, the chances would have been zero of getting any, so 10% is a great step forward. And it's possible we would do even better because this is a field that is rapidly transforming.”
But the bottleneck is not only the technical challenge to extract and analyse ancient DNA. Historians and geneticists also need to understand each other better. “That's why historians have to learn what it is that geneticists do, what this data is, and the geneticists have to understand the kind of questions that [historians are] trying to ask, which are not the old nineteenth century questions about identity, but questions about population, about gender roles, about relationship,” Geary said.
DNA analysis can help to resolve historical questions and mysteries about our ancestors, but both historians and geneticists are becoming concerned about potential abuses and frivolous applications of DNA analysis in their fields. Thomas is particularly disturbed by studies based on single historical figures. “Unless it's a pretty damn advanced analysis, then studying individuals isn't particularly useful for history unless you want to say something like this person had blue eyes or whatever. Population level studies are best,” he said. He conceded that the genetic analysis of Richard III's remnants was a sound application but added that this often is not the case with other uses, which he referred to as “genetic astrology.” He was critical of researchers who come to unsubstantiated conclusions based on ancient DNA, and scientific journals that readily publish such papers.
Thomas said that it is reasonable to analyse a Y chromosome or mtDNA to estimate a certain genetic trait.
“But then to look at the distribution of those, note in the tree where those types are found, and informally, interpretively make inferences—‘Well this must have come from here and therefore when I find it somewhere else then that means that person must have ancestors from this original place'—[…] that's deeply flawed. It's the most widely used method for telling historical stories from genetic data. And yet is easily the one with the least credibility.” Thomas criticized such facile use of genetic data, which misleads the public and the media. “I suppose I can't blame these [broadcast] guys because it's their job to make the programme look interesting. If somebody comes along and says ‘well, I can tell you you're descended from some Viking warlord or some Celtic princess', then who are they to question.”
Similarly, the historians have reservations about making questionable historical claims on the basis of DNA analysis. Geary said the use of mtDNA to identify Richard III was valuable because it answered a specific, factual question. However, he is turned off by other research using DNA to look at individual figures, such as a case involving a princess who was a direct descendant of the woman who posed for Leonardo da Vinci's Mona Lisa. “There's some people running around trying to dig up famous people and prove the obvious. I think that's kind of silly. There are others that I think are quite appropriate, and while is not my kind of history, I think it is fine,” he said. “The Richard III case was in the tradition of forensics.”
Nicola Di Cosmo, a historian at the Institute for Advanced Study, who is researching the impact of climate change on the thirteenth-century Mongol empire, follows closely the advances in DNA and history research, but has not yet applied it to his own work. Nevertheless, he said that genetics could help to understand the period he studies because there are no historical documents, although monumental burials exist. “It is important to get a sense of where these people came from, and that's where genetics can help,” he said. He is also concerned about geneticists who publish results without involving historians and without examining other records. He cited a genetic study of a so-called ‘Eurasian male' in a prestige burial of the Asian Hun Xiongnu, a nomadic people who at the end of the third century B.C. formed a tribal league that dominated most of Central Asia for more than 500 years. “The conclusion the geneticists came to was that there was some sort of racial tolerance in this nomadic empire, but we have no way to even assume that they had any concept of race or tolerance.”
Di Cosmo commented that the cases in which historians and archaeologists work with molecular biologists are rare and remain disconnected in general from the mainstream of historical or archaeological research. “I believe that historians, especially those working in areas for which written records are non-existent, ought to be taking seriously the evidence churned out by genetic laboratories.
On the other hand, geneticists must realize that the effectiveness of their research is limited unless they access reliable historical information and understand how a historical argument may or may not explain the genetic data” [1].
Notwithstanding the difficulties in collaboration between the two fields, McCormick is excited about historians working with DNA. He said the intersection of history and genomics could create a new scientific discipline in the years ahead. “I don't know what we'd call it. It would be a sort of fusion science. It certainly has the potential to produce enormous amounts of enormously interesting new evidence about our human past.”

5.
Greener M. EMBO Reports 2008, 9(11): 1067–1069
A consensus definition of life remains elusive
In July this year, the Phoenix Lander robot—launched by NASA in 2007 as part of the Phoenix mission to Mars—provided the first irrefutable proof that water exists on the Red Planet. “We've seen evidence for this water ice before in observations by the Mars Odyssey orbiter and in disappearing chunks observed by Phoenix […], but this is the first time Martian water has been touched and tasted,” commented lead scientist William Boynton from the University of Arizona, USA (NASA, 2008). The robot's discovery of water in a scooped-up soil sample increases the probability that there is, or was, life on Mars.
Meanwhile, the Darwin project, under development by the European Space Agency (ESA; Paris, France; www.esa.int/science/darwin), envisages a flotilla of four or five free-flying spacecraft to search for the chemical signatures of life in 25 to 50 planetary systems. Yet, in the vastness of space, to paraphrase the British astrophysicist Arthur Eddington (1882–1944), life might be not only stranger than we imagine, but also stranger than we can imagine. The limits of our current definitions of life raise the possibility that we would not be able to recognize an extra-terrestrial organism.
Back on Earth, molecular biologists—whether deliberately or not—are empirically tackling the question of what is life. Researchers at the J Craig Venter Institute (Rockville, MD, USA), for example, have synthesized an artificial bacterial genome (Gibson et al, 2008). Others have worked on ‘minimal cells' with the aim of synthesizing a ‘bioreactor' that contains the minimum of components necessary to be self-sustaining, reproduce and evolve. Some biologists regard these features as the hallmarks of life (Luisi, 2007). However, to decide who is first in the ‘race to create life' requires a consensus definition of life itself. “A definition of the precise boundary between complex chemistry and life will be critical in deciding which group has succeeded in what might be regarded by the public as the world's first theology practical,” commented Jamie Davies, Professor of Experimental Anatomy at the University of Edinburgh, UK.
For most biologists, defining life is a fascinating, fundamental, but largely academic question. It is, however, crucial for exobiologists looking for extra-terrestrial life on Mars, Jupiter's moon Europa, Saturn's moon Titan and on planets outside our solar system.
In their search for life, exobiologists base their working hypothesis on the only example to hand: life on Earth. “At the moment, we can only assume that life elsewhere is based on the same principles as on Earth,” said Malcolm Fridlund, Secretary for the Exo-Planet Roadmap Advisory Team at the ESA's European Space Research and Technology Centre (Noordwijk, The Netherlands). “We should, however, always remember that the universe is a peculiar place and try to interpret unexpected results in terms of new physics and chemistry.”
The ESA's Darwin mission will, therefore, search for life-related gases such as carbon dioxide, water, methane and ozone in the atmospheres of other planets. On Earth, the emergence of life altered the balance of atmospheric gases: living organisms produced all of the Earth's oxygen, which now accounts for one-fifth of the atmosphere. “If all life on Earth was extinguished, the oxygen in our atmosphere would disappear in less than 4 million years, which is a very short time as planets go—the Earth is 4.5 billion years old,” Fridlund said.
He added that organisms present in the early phases of life on Earth produced methane, which alters atmospheric composition compared with a planet devoid of life.
Although the Darwin project will use a pragmatic and specific definition of life, biologists, philosophers and science-fiction authors have devised numerous other definitions—none of which are entirely satisfactory. Some are based on basic physiological characteristics: a living organism must feed, grow, metabolize, respond to stimuli and reproduce. Others invoke metabolic definitions that define a living organism as having a distinct boundary—such as a membrane—which facilitates interaction with the environment and transfers the raw materials needed to maintain its structure (Wharton, 2002). The minimal cell project, for example, defines cellular life as “the capability to display a concert of three main properties: self-maintenance (metabolism), reproduction and evolution. When these three properties are simultaneously present, we will have a full fledged cellular life” (Luisi, 2007). These concepts regard life as an emergent phenomenon arising from the interaction of non-living chemical components.
Cryptobiosis—hidden life, also known as anabiosis—and bacterial endospores challenge the physiological and metabolic elements of these definitions (Wharton, 2002). When the environment changes, certain organisms are able to undergo cryptobiosis—a state in which their metabolic activity either ceases reversibly or is barely discernible. Cryptobiosis allows the larvae of the African fly Polypedilum vanderplanki to survive desiccation for up to 17 years and temperatures ranging from −270 °C (liquid helium) to 106 °C (Watanabe et al, 2002). It also allows the cysts of the brine shrimp Artemia to survive desiccation, ultraviolet radiation, extremes of temperature (Wharton, 2002) and even toyshops, which sell the cysts as ‘sea monkeys'. Organisms in a cryptobiotic state show characteristics that vary markedly from what we normally consider to be life, although they are certainly not dead. “[C]ryptobiosis is a unique state of biological organization”, commented James Clegg, from the Bodega Marine Laboratory at the University of California (Davis, CA, USA), in an article in 2001 (Clegg, 2001). Bacterial endospores, which are the “hardiest known form of life on Earth” (Nicholson et al, 2000), are able to withstand almost any environment—perhaps even interplanetary space. Microbiologists isolated endospores of strict thermophiles from cold lake sediments and revived spores from samples some 100,000 years old (Nicholson et al, 2000).
Another problem with the definitions of life is that these can expand beyond biology. The minimal cell project, for example, in common with most modern definitions of life, encompasses the ability to undergo Darwinian evolution (Wharton, 2002). “To be considered alive, the organism needs to be able to undergo extensive genetic modification through natural selection,” said Professor Paul Freemont from Imperial College London, UK, whose research interests encompass synthetic biology. But the virtual ‘organisms' in computer simulations such as the Game of Life (www.bitstorm.org/gameoflife) and Tierra (http://life.ou.edu/tierra) also exhibit life-like characteristics, including growth, death and evolution—similar to robots and other artificial systems that attempt to mimic life (Guruprasad & Sekar, 2006).
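To make the point about such simulations concrete, here is a minimal sketch of Conway's Game of Life, added purely as an illustration (it is not code from the article or from the sites mentioned): two simple local rules are enough to produce patterns that appear to grow, die, move and reproduce.

```python
# Minimal sketch of Conway's Game of Life: a live cell survives with two or
# three live neighbours, and a dead cell becomes alive with exactly three.
# Patterns such as the 'glider' below persist and move, which is the kind of
# life-like behaviour the simulations mentioned above exhibit.
from collections import Counter

def step(live_cells: set) -> set:
    """Advance the set of live (x, y) cells by one generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

if __name__ == "__main__":
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(4):
        print(f"generation {generation}: {sorted(glider)}")
        glider = step(glider)
```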
“At the moment, we have some problems differentiating these approaches from something biologists consider [to be] alive,” Fridlund commented.
Both the genetic code and all computer-programming languages are means of communicating large quantities of codified information, which adds another element to a comprehensive definition of life. Guenther Witzany, an Austrian philosopher, has developed a “theory of communicative nature” that, he claims, differentiates biotic and abiotic life. “Life is distinguished from non-living matter by language and communication,” Witzany said. According to his theory, RNA and DNA use a ‘molecular syntax' to make sense of the genetic code in a manner similar to language. This paragraph, for example, could contain the same words in a random order; it would be meaningless without syntactic and semantic rules. “The RNA/DNA language follows syntactic, semantic and pragmatic rules which are absent in [a] random-like mixture of nucleic acids,” Witzany explained.
Yet, successful communication requires both a speaker using the rules and a listener who is aware of and can understand the syntax and semantics. For example, cells, tissues, organs and organisms communicate with each other to coordinate and organize their activities; in other words, they exchange signals that contain meaning. Noradrenaline binding to a β-adrenergic receptor in the bronchi communicates a signal that says ‘dilate'. “If communication processes are deformed, destroyed or otherwise incorrectly mediated, both coordination and organisation of cellular life is damaged or disturbed, which can lead to disease,” Witzany added. “Cellular life also interprets abiotic environmental circumstances—such as the availability of nutrients, temperature and so on—to generate appropriate behaviour.”
Nonetheless, even definitions of life that include all the elements mentioned so far might still be incomplete. “One can make a very complex definition that covers life on the Earth, but what if we find life elsewhere and it is different? My opinion, shared by many, is that we don't have a clue of how life arose on Earth, even if there are some hypotheses,” Fridlund said. “This underlies many of our problems defining life. Since we do not have a good minimum definition of life, it is hard or impossible to find out how life arose without observing the process. Nevertheless, I'm an optimist who believes the universe is understandable with some hard work and I think we will understand these issues one day.”
Both synthetic biology and research on organisms that live in extreme conditions allow biologists to explore biological boundaries, which might help them to reach a consensual minimum definition of life, and understand how it arose and evolved. Life is certainly able to flourish in some remarkably hostile environments. Thermus aquaticus, for example, is metabolically optimal in the springs of Yellowstone National Park at temperatures between 75 °C and 80 °C. Another extremophile, Deinococcus radiodurans, has evolved a highly efficient biphasic system to repair radiation-induced DNA breaks (Misra et al, 2006) and, as Fridlund noted, “is remarkably resistant to gamma radiation and even lives in the cooling ponds of nuclear reactors.”
In turn, synthetic biology allows for a detailed examination of the elements that define life, including the minimum set of genes required to create a living organism.
Researchers at the J Craig Venter Institute, for example, have synthesized a 582,970-base-pair Mycoplasma genitalium genome containing all the genes of the wild-type bacteria, except one that they disrupted to block pathogenicity and allow for selection. ‘Watermarks' at intergenic sites that tolerate transposon insertions identify the synthetic genome, which would otherwise be indistinguishable from the wild type (Gibson et al, 2008).
Yet, as Pier Luigi Luisi from the University of Roma in Italy remarked, even M. genitalium is relatively complex. “The question is whether such complexity is necessary for cellular life, or whether, instead, cellular life could, in principle, also be possible with a much lower number of molecular components”, he said. After all, life probably did not start with cells that already contained thousands of genes (Luisi, 2007).
To investigate further the minimum number of genes required for life, researchers are using minimal cell models: synthetic genomes that can be included in liposomes, which themselves show some life-like characteristics. Certain lipid vesicles are able to grow, divide and grow again, and can include polymerase enzymes to synthesize RNA from external substrates as well as functional translation apparatuses, including ribosomes (Deamer, 2005).
However, the requirement that an organism be subject to natural selection to be considered alive could prove to be a major hurdle for current attempts to create life. As Freemont commented: “Synthetic biologists could include the components that go into a cell and create an organism [that is] indistinguishable from one that evolved naturally and that can replicate […] We are beginning to get to grips with what makes the cell work. Including an element that undergoes natural selection is proving more intractable.”
John Dupré, Professor of Philosophy of Science and Director of the Economic and Social Research Council (ESRC) Centre for Genomics in Society at the University of Exeter, UK, commented that synthetic biologists still approach the construction of a minimal organism with certain preconceptions. “All synthetic biology research assumes certain things about life and what it is, and any claims to have ‘confirmed' certain intuitions—such as life is not a vital principle—aren't really adding empirical evidence for those intuitions. Anyone with the opposite intuition may simply refuse to admit that the objects in question are living,” he said. “To the extent that synthetic biology is able to draw a clear line between life and non-life, this is only possible in relation to defining concepts brought to the research. For example, synthetic biologists may be able to determine the number of genes required for minimal function. Nevertheless, ‘what counts as life' is unaffected by minimal genomics.”
Partly because of these preconceptions, Dan Nicholson, a former molecular biologist now working at the ESRC Centre, commented that synthetic biology adds little to the understanding of life already gained from molecular biology and biochemistry. Nevertheless, he said, synthetic biology might allow us to go boldly into the realms of biological possibility where evolution has not gone before.
An engineered synthetic organism could, for example, express novel amino acids, proteins, nucleic acids or vesicular forms.
A synthetic organism could use pyranosyl-RNA, which produces a stronger and more selective pairing system than the naturally occurring furanosyl-RNA (Bolli et al, 1997). Furthermore, the synthesis of proteins that do not exist in nature—so-called never-born proteins—could help scientists to understand why evolutionary pressures only selected certain structures.
As Luisi remarked, the ratio between the number of theoretically possible proteins containing 100 amino acids and the real number present in nature is close to the ratio between the space of the universe and the space of a single hydrogen atom, or the ratio between all the sand in the Sahara Desert and a single grain. Exploring never-born proteins could, therefore, allow synthetic biologists to determine whether particular physical, structural, catalytic, thermodynamic and other properties maximized the evolutionary fitness of natural proteins, or whether the current protein repertoire is predominantly the result of chance (Luisi, 2007).
“Synthetic biology also could conceivably help overcome the ‘n = 1 problem'—namely, that we base biological theorising on terrestrial life only,” Nicholson said. “In this way, synthetic biology could contribute to the development of a more general, broader understanding of what life is and how it might be defined.”
No matter the uncertainties, researchers will continue their attempts to create life in the test tube—it is, after all, one of the greatest scientific challenges. Whether or not they succeed will depend partly on the definition of life that they use, though in any case, the research should yield numerous insights that are beneficial to biologists generally. “The process of creating a living system from chemical components will undoubtedly offer many rich insights into biology,” Davies concluded. “However, the definition will, I fear, reflect politics more than biology. Any definition will, therefore, be subject to a lot of inter-lab political pressure. Definitions are also important for bioethical legislation and, as a result, reflect larger politics more than biology. In the final analysis, as with all science, deep understanding is more important than labelling with words.”

6.
7.
Rinaldi A. EMBO Reports 2012, 13(1): 24–27
Does the spin of an electron allow birds to see the Earth''s magnetic field? Andrea Rinaldi investigates the influence of quantum events in the biological world.The subatomic world is nothing like the world that biologists study. Physicists have struggled for almost a century to understand the wave–particle duality of matter and energy, but many questions remain unanswered. That biological systems ultimately obey the rules of quantum mechanics might be self-evident, but the idea that those rules are the very basis of certain biological functions has needed 80 years of thought, research and development for evidence to begin to emerge (Sidebar A).

Sidebar A | Putting things in their place

Although Erwin Schrödinger (1887–1961) is often credited as the ‘father' of quantum biology, owing to the publication of his famous 1944 book, What is Life?, the full picture is more complex. While other researchers were already moving towards these concepts in the 1920s, the German theoretical physicist Pascual Jordan (1902–1980) was actually one of the first to attempt to reconcile biological phenomena with the quantum revolution that Jordan himself, working with Max Born and Werner Heisenberg, largely ignited. “Pascual Jordan was one of many scientists at the time who were exploring biophysics in innovative ways. In some cases, his ideas have proven to be speculative or even fantastical. In others, however, his ideas have proven to be really ahead of their time,” explained Richard Beyler, a science historian at Portland State University, USA, who analysed Jordan's contribution to the rise of quantum biology (Beyler, 1996). “I think this applies to Jordan's work in quantum biology as well.”
Beyler also remarked that some of the well-known figures of molecular biology's past—Max Delbrück is a notable example—entered into their studies at least in part as a response or rejoinder to Jordan's work. “Schrödinger's book can also be read, on some level, as an indirect response to Jordan,” Beyler said.
Jordan was certainly a complex personality and his case is rendered more complicated by the fact that he explicitly hitched his already speculative scientific theories to various right-wing political philosophies. “During the Nazi regime, for example, he promoted the notion that quantum biology served as evidence for the naturalness of dictatorship and the prospective death of liberal democracy,” Beyler commented. “After 1945, Jordan became a staunch Cold Warrior and saw in quantum biology a challenge to philosophical and political materialism. Needless to say, not all of his scientific colleagues appreciated these propagandistic endeavors.”
Photo caption: Pascual Jordan [pictured above] and the dawn of quantum biology. From 1932, Jordan started to outline the new field's background in a series of essays that were published in journals such as Naturwissenschaften. An exposition of quantum biology is also encountered in his book Die Physik und das Geheimnis des organischen Lebens, published in 1941. Photo courtesy of Luca Turin.
Until very recently, it was not even possible to investigate whether quantum phenomena such as coherence and entanglement could play a significant role in the function of living organisms. As such, researchers were largely limited to computer simulations and theoretical experiments to explain their observations (see A quantum leap in biology, www.emboreports.org). Recently, however, quantum biologists have been making inroads into developing methodology to measure the degree of quantum entanglement in light-harvesting systems. Their breakthrough has turned once ephemeral theories into solid evidence, and has sparked the beginning of an entirely new discipline.
How widespread is the direct relevance of quantum effects in nature is hard to say and many scientists suspect that there are only a few cases in which quantum mechanics have a crucial role. However, interest in the field is growing and researchers are looking for more examples of quantum-dependent biological systems. In a way, quantum biology can be viewed as a natural evolution of biophysics, moving from the classical to the quantum, from the atomic to the subatomic.
Yet the discipline might prove to be an even more intimate and further-reaching marriage that could provide a deeper understanding of things such as protein energetics and dynamics, and all biological processes where electrons flow.

Recently […] quantum biologists have been making inroads into developing methodology to measure the degree of quantum entanglement in light-harvesting systems

Among the biological systems in which quantum effects are believed to have a crucial role is magnetoreception, although the nature of the receptors and the underlying biophysical mechanisms remain unknown. The possibility that organisms use a ferromagnetic material (magnetite) in some cases has received some confirmation, but support is growing for the explanation lying in a chemical detection mechanism with quantum mechanical properties. This explanation posits a chemical compass based on the light-triggered production of a radical pair—a pair of molecules each with an unpaired electron—the spins of which are entangled. If the products of the radical pair system are spin-dependent, then a magnetic field—like the geomagnetic one—that affects the direction of spin will alter the reaction products. The idea is that these reaction products affect the sensitivity of light sensors in the eye, thus allowing organisms to ‘see’ magnetic fields.

The research comes from a team led by Thorsten Ritz at the University of California Irvine, USA, and other groups, who have suggested that the radical pair reaction takes place in the molecule cryptochrome. Cryptochromes are flavoprotein photoreceptors first identified in the model plant Arabidopsis thaliana, in which they play key roles in growth and development. More recently, cryptochromes have been found to have a role in the circadian clock of fruit flies (Ritz et al, 2010) and are known to be present in migratory birds. Intriguingly, magnetic fields have been shown to have an effect on both Arabidopsis seedlings, which respond as though they have been exposed to higher levels of blue light, and Drosophila, in which the period length of the clock is lengthened, mimicking the effect of increased blue light signal intensity on cryptochromes (Ahmad et al, 2007; Yoshii et al, 2009).

“The study of quantum effects in biological systems is a rapidly broadening field of research in which intriguing phenomena are yet to be uncovered and understood”

Direct evidence that cryptochrome is the avian magnetic compass is currently lacking, but the molecule does have some features that make its candidacy possible. In a recent review (Ritz et al, 2010), Ritz and colleagues discussed the mechanism by which cryptochrome might form radical pairs. They argued that “Cryptochromes are bound to a light-absorbing flavin cofactor (FAD) which can exist in three interconvertable [sic] redox forms: (FAD, FADH•, FADH−),” and that the redox state of FAD is light-dependent. As such, both the oxidation and reduction of the flavin have radical species as intermediates. “Therefore both forward and reverse reactions may involve the formation of radical pairs” (Ritz et al, 2010). Although speculative, the idea is that a magnetic field could alter the spin of the free electrons in the radical pairs resulting in altered photoreceptor responses that could be perceived by the organism.
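The radical-pair proposal summarized above can be put in slightly more concrete terms. The equations below are a minimal, textbook-style sketch of the standard radical-pair model, added here for illustration; they are not taken from the article or from the Ritz et al (2010) review, and the symbols (the hyperfine tensors A_k, the singlet projector P_S and the recombination rate k) are generic notation.

% Minimal radical-pair compass model (illustrative, textbook notation)
\begin{align*}
  \hat{H} &= g\mu_{\mathrm{B}}\,\vec{B}\cdot\bigl(\hat{\vec{S}}_1 + \hat{\vec{S}}_2\bigr)
             \;+\; \sum_k \hat{\vec{S}}_1\cdot\mathbf{A}_k\cdot\hat{\vec{I}}_k , \\
  p_{\mathrm{S}}(t) &= \mathrm{Tr}\bigl[\hat{P}_{\mathrm{S}}\,\hat{\rho}(t)\bigr] ,
  \qquad
  \Phi_{\mathrm{S}} = k\int_{0}^{\infty} p_{\mathrm{S}}(t)\,e^{-kt}\,\mathrm{d}t .
\end{align*}

The first line is the spin Hamiltonian of the two unpaired electrons: a Zeeman term that depends on the orientation of the external field B, plus anisotropic hyperfine couplings to nearby nuclear spins I_k. Because the hyperfine tensors are fixed in the molecular frame, the singlet probability p_S(t), and with it the yield Φ_S of singlet recombination products, changes with the direction of B; in this picture it is that yield which modulates the photoreceptor signal. The effect is tiny: in the roughly 50 μT geomagnetic field the electron Zeeman splitting is only about 1.4 MHz, which is why the radical pair would have to remain spin-coherent for on the order of a microsecond for a compass response to be plausible.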
“Given the relatively short time from the first suggestion of cryptochrome as a magnetoreceptor in 2000, the amount of studies from different fields supporting the photo-magnetoreceptor and cryptochrome hypotheses […] is promising,” the authors concluded. “It suggests that we may be only one step away from a true smoking gun revealing the long-sought after molecular nature of receptors underlying the 6th sense and thus the solution of a great outstanding riddle of sensory biology.”Research into quantum effects in biology took off in 2007 with groundbreaking experiments from Graham Fleming''s group at the University of California, Berkeley, USA. Fleming''s team were able to develop tools that allowed them to excite the photosynthetic apparatus of the green sulphur bacterium Chlorobium tepidum with short laser pulses to demonstrate that wave-like energy transfer takes place through quantum coherence (Engel et al, 2007). Shortly after, Martin Plenio''s group at Ulm University in Germany and Alán Aspuru-Guzik''s team at Harvard University in the USA simultaneously provided evidence that it is a subtle interplay between quantum coherence and environmental noise that optimizes the performance of biological systems such as the photosynthetic machinery, adding further interest to the field (Plenio & Huelga, 2008; Rebentrost et al, 2009). “The recent Quantum Effects in Biological Systems (QuEBS) 2011 meeting in Ulm saw an increasing number of biological systems added to the group of biological processes in which quantum effects are suspected to play a crucial role,” commented Plenio, one of the workshop organizers; he mentioned the examples of avian magnetoreception and the role of phonon-assisted tunnelling to explain the function of the sense of smell (see below). “The study of quantum effects in biological systems is a rapidly broadening field of research in which intriguing phenomena are yet to be uncovered and understood,” he concluded.“The area of quantum effects in biology is very exciting because it is pushing the limits of quantum physics to a new scale,” Yasser Omar from the Technical University of Lisbon, Portugal commented. ”[W]e are finding that quantum coherence plays a significant role in the function of systems that we previously thought would be too large, too hot—working at physiological temperatures—and too complex to depend on quantum effects.”Another growing focus of quantum biologists is the sense of smell and odorant recognition. Mainstream researchers have always favoured a ‘lock-and-key'' mechanism to explain how organisms detect and distinguish different smells. In this case, the identification of odorant molecules relies on their specific shape to activate receptors on the surface of sensory neurons in the nasal epithelium. However, a small group of ‘heretics'' think that the smell of a molecule is actually determined by intramolecular vibrations, rather than by its shape. This, they say, explains why the shape theory has so far failed to explain why different molecules can have similar odours, while similar molecules can have dissimilar odours. 
It also goes some way to explaining how humans can manage with fewer than 400 smell receptors.…determining whether quantum effects have a role in odorant recognition has involved assessing the physical violations of such a mechanism […] and finding that, given certain biological parameters, there are noneA recent study in Proceedings of the National Academy of Sciences USA has now provided new grist for the mill for ‘vibrationists''. Researchers from the Biomedical Sciences Research Center “Alexander Fleming”, Vari, Greece—where the experiments were performed—and the Massachusetts Institute of Technology (MIT), USA, collaborated to replace hydrogen with deuterium in odorants such as acetophenone and 1-octanol, and asked whether Drosophila flies could distinguish the two isotopes, which are identically shaped but vibrate differently (Franco et al, 2011). Not only were the flies able to discriminate between the isotopic odorants, but when trained to discriminate against the normal or deuterated isotopes of a compound, they could also selectively avoid the corresponding isotope of a different odorant. The findings are inconsistent with a shape-only model for smell, the authors concluded, and suggest that flies can ‘smell molecular vibrations''.“The ability to detect heavy isotopes in a molecule by smell is a good test of shape and vibration theories: shape says it should be impossible, vibration says it should be doable,” explained Luca Turin from MIT, one of the study''s authors. Turin is a major proponent of the vibration theory and suggests that the transduction of molecular vibrations into receptor activation could be mediated by inelastic electron tunnelling (Fig 1; see also The scent of life, www.emboreports.org). “The results so far had been inconclusive and complicated by possible contamination of the test odorants with impurities,” Turin said. “Our work deals with impurities in a novel way, by asking flies whether the presence of deuterium isotope confers a common smell character to odorants, much in the way that the presence of -SH in a molecule makes it smell ‘sulphuraceous'', regardless of impurities. The flies'' answer seems to be ‘yes''.”Open in a separate windowFigure 1Diagram of a vibration-sensing receptor using an inelastic electron tunnelling mechanism. An odorant—here benzaldehyde—is depicted bound to a protein receptor that includes an electron donor site at the top left to which an electron—blue sphere—is bound. The electron can tunnel to an acceptor site at the bottom right while losing energy (vertical arrow) by exciting one or more vibrational modes of the benzaldehyde. When the electron reaches the acceptor, the signal is transduced via a G-protein mechanism, and the olfactory stimulus is triggered. Credit: Luca Turin.One of the study''s Greek co-authors, Efthimios Skoulakis, suggested that flies are better suited than humans at doing this experiment for a couple of reasons. “[The flies] seem to have better acuity than humans and they cannot anticipate the task they will be required to complete (as humans would), thus reducing bias in the outcome,” he said. 
“Drosophila does not need to detect deuterium per se to survive and be reproductively successful, so it is likely that detection of the vibrational difference between such a compound and its normal counterpart reflects a general property of olfactory systems.”

The question of whether quantum mechanics really plays a non-trivial role in biology is still hotly debated by physicists and biologists alike

Jennifer Brookes, a physicist at University College London, UK, explained that recent advances in determining whether quantum effects have a role in odorant recognition have involved assessing the physical violations of such a mechanism in the first instance, and finding that, given certain biological parameters, there are none. “The point being that if nature uses something like the quantized vibrations of molecules to ‘measure’ a smell then the idea is not—mathematically, physically and biologically—as eccentric as it at first seems,” she said. Moreover, there is the possibility that quantum mechanics could play a much broader role in biology than simply underpinning the sense of smell. “Odorants are not the only small molecules that interact unpredictably with large proteins; steroid hormones, anaesthetics and neurotransmitters, to name a few, are examples of ligands that interact specifically with special receptors to produce important biological processes,” Brookes wrote in a recent essay (Brookes, 2010).

The question of whether quantum mechanics really plays a non-trivial role in biology is still hotly debated by physicists and biologists alike. “[A] non-trivial quantum effect in biology is one that would convince a biologist that they needed to take an advanced quantum mechanics course and learn about Hilbert space and operators etc., so that they could understand the effect,” argued theoretical quantum physicists Howard Wiseman and Jens Eisert in their contribution to the book Quantum Aspects of Life (Wiseman & Eisert, 2008). In their rational challenge to the general enthusiasm for a quantum revolution in biology, Wiseman and Eisert point out that a number of “exotic” and “implausible” quantum effects—including a quantum life principle, quantum computing in the brain, quantum computing in genetics, and quantum consciousness—have been suggested and warn researchers to be cautious of “ideas that are more appealing at first sight than they are realistic” (Wiseman & Eisert, 2008).

“One could easily expect many more new exciting ideas and discoveries to emerge from the intersection of two major areas such as quantum physics and biology”

Keeping this warning in mind, the view of life from a quantum perspective can still provide a deeper insight into the mechanisms that allow living organisms to thrive without succumbing to the increasing entropy of their environment. But does quantum biology have practical applications? “The investigation of the role of quantum physics in biology is fascinating because it could help explain why evolution has favoured some biological designs, as well as inspire us to develop more efficient artificial devices,” Omar said. The most often quoted examples of such devices are solar collectors that would use efficient energy transport mechanisms inspired by the quantum proficiency of natural light-harvesting systems, and quantum computing. But there is much more ahead.
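A quantitative aside helps to put the deuterium experiments described earlier in perspective. The estimate below is a back-of-the-envelope harmonic-oscillator calculation added here for illustration; it is not part of the Franco et al (2011) study, and the wavenumbers quoted are standard reference values for C–H stretching vibrations.

% Harmonic estimate of the C-H -> C-D stretch shift on deuteration
\begin{align*}
  \mu_{\mathrm{CH}} &= \frac{m_{\mathrm{C}}\,m_{\mathrm{H}}}{m_{\mathrm{C}}+m_{\mathrm{H}}}
                     \approx \frac{12\times 1}{13}\,\mathrm{u} \approx 0.92\,\mathrm{u} ,
  \qquad
  \mu_{\mathrm{CD}} \approx \frac{12\times 2}{14}\,\mathrm{u} \approx 1.71\,\mathrm{u} , \\
  \frac{\tilde{\nu}_{\mathrm{CD}}}{\tilde{\nu}_{\mathrm{CH}}}
    &= \sqrt{\frac{\mu_{\mathrm{CH}}}{\mu_{\mathrm{CD}}}} \approx 0.73 ,
  \qquad
  \tilde{\nu}_{\mathrm{CH}} \approx 2900\ \mathrm{cm^{-1}}
    \;\Rightarrow\; \tilde{\nu}_{\mathrm{CD}} \approx 2100\ \mathrm{cm^{-1}} .
\end{align*}

Because the stretching frequency scales with the inverse square root of the reduced mass, swapping hydrogen for deuterium leaves the shape and chemistry of an odorant essentially unchanged while shifting its C–H stretch by roughly 800 cm⁻¹. A purely shape-based receptor should be blind to that change, whereas any detector that reads out vibrational energies, such as the proposed inelastic-tunnelling mechanism, would register it, which is why isotope discrimination is such a clean test of the two theories.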
In 2010, the Pentagon''s cutting-edge research branch, DARPA (Defense Advanced Research Projects Agency, USA), launched a solicitation for innovative proposals in the area of quantum effects in a biological environment. “Proposed research should establish beyond any doubt that manifestly quantum effects occur in biology, and demonstrate through simulation proof-of-concept experiments that devices that exploit these effects could be developed into biomimetic sensors,” states the synopsis (DARPA, 2010). This programme will thus look explicitly at photosynthesis, magnetic field sensing and odour detection to lay the foundations for novel sensor technologies for military applications.Clearly a number of civil needs could also be fulfilled by quantum-based biosensors. Take, for example, the much sought-after ‘electronic nose'' that could replace the use of dogs to find drugs or explosives, or could assess food quality and safety. Such a device could even be used to detect cancer, as suggested by a recent publication from a Swedish team of researchers who reported that ovarian carcinomas emit a different array of volatile signals to normal tissue (Horvath et al, 2010). “Our goal is to be able to screen blood samples from apparently healthy women and so detect ovarian cancer at an early stage when it can still be cured,” said the study''s leading author György Horvath in a press release (University of Gothenburg, 2010).Despite its already long incubation time, quantum biology is still in its infancy but with an intriguing adolescence ahead. “A new wave of scientists are finding that quantum physics has the appropriate language and methods to solve many problems in biology, observing phenomena from a different point of view and developing new concepts. The next important steps are experimental verification/falsification,” Brookes said. “One could easily expect many more new exciting ideas and discoveries to emerge from the intersection of two major areas such as quantum physics and biology,” Omar concluded.  相似文献   

8.
Hunter P 《EMBO reports》2011,12(6):504-507
New applications and technologies, and more rigorous safety measures could herald a new era for genetically modified crops with improved traits, for use in agriculture and the pharmaceutical industry.The imminent prospect of the first approval of a plant-made pharmaceutical (PMP) for human use could herald a new era for applied plant science, after a decade of public backlash against genetically modified crops, particularly in Europe. Yet, the general resistance to genetically modified organisms might have done plant biotechnology a favour in the long run, by forcing it to adopt more-rigorous procedures for efficacy and safety in line with the pharmaceutical industry. This could, in turn, lead to renewed vigour for plant science, with the promise of developing not only food crops that deliver benefits to consumers and producers, but also a wide range of new pharmaceuticals.This is certainly the view of David Aviezer, CEO of Protalix, an Israeli company that has developed what could become the first recombinant therapeutic protein from plants to treat Gaucher disease. The protein is called taliglucerase alpha; it is a recombinant human form of the enzyme glucocerebrosidase that is produced in genetically engineered carrot cells. This enzyme has a crucial role in the breakdown of glycolipids in the cell membrane and is either used to provide energy or for cellular recognition. Deficiency of this enzyme causes accumulation of lipids with a variety of effects including premature death.“My feeling is that there is a dramatic change in this area with a shift away from the direction where a decade ago biotech companies like Monsanto and Dow went with growing transgenic plants in an open field, and instead moving this process into a more regulatory well-defined process inside a clean room,” Aviezer said. “Now the process is taking place in confined conditions and is very highly regulated as in the pharmaceutical industry.”…resistance to genetically modified organisms might have done plant biotechnology a favour […] forcing it to adopt more-rigorous procedures for efficacy and safety…He argues that this is ushering in a new era for plant biotechnology that could lead to greater public acceptance, although he denies that the move to clean-room development has been driven purely by the environmental backlash against genetically modified organisms in the late 1990s and early 2000s. “That was one aspect, but I think the move has been coming more from an appreciation that biopharmaceuticals require a more regulatory defined setting than is achieved at the moment with transgenic plants.”Interest in deriving pharmaceuticals from plants, known colloquially as ‘pharming'', first took off in the 1990s after researchers showed that monoclonal antibodies could be made in tobacco plants (Hiatt et al, 1989). This led to genetic engineering of plants to produce vaccines, antibodies and proteins for therapeutics, but none gained regulatory approval, mostly because of safety concerns. Moreover, the plants were grown in open fields, therefore attracting the same criticisms as transgenic food crops. 
In fact, a recent study showed that the views of the public on pharming depended on the product and the means to produce it; the researchers found increasing acceptance if the plants were used to produce therapeutics against severe diseases and grown in containment (Pardo et al, 2009).However, it was the technical challenges involved in purification and the associated regulatory issues that really delayed the PMP field, according to George Lomonossoff, project leader in biological chemistry at the John Innes Centre for plant research in Norwich in the UK, part of the Biotechnology and Biological Sciences Research Council (BBSRC). “Extraction from plants required the development of systems which are not clogged by the large amounts of fibrous material, mainly cellulose, and the development of GMP [good manufacturing practice; quality and testing guidelines for pharmaceutical manufacture] compliant methods of purification which are distinct from those required from, say, mammalian cells,” said Lomonossoff. “All this is very time consuming.”“Secondly there was no regulatory framework in place to assess the risks associated with proteins produced in plants, and determining how equivalent they are to mammalian-cell-produced material and what kind of contaminants you might have to guard against,” Lomonossoff added. “Again, attempting to address all possible concerns is a lengthy and expensive process.” Yet recent work by Protalix and a few other companies, such as Dow Agrosciences, has given grounds for optimism that purification and GMP-compliant methods of production have finally been established, Lomonossoff added.…a recent study showed that the views of the public on pharming depended on the product and the means to produce it…The first important breakthrough for PMPs came in 2006, when Dow Agrosciences gained regulatory approval from the US Department of Agriculture for a vaccine against Newcastle disease, a contagious bird infection caused by paramyxovirus PMV-1. “Though the vaccine, produced in tobacco-suspension culture cells, was never deployed commercially, it showed that regulatory approval for a plant-made pharmaceutical can be obtained, albeit for veterinary use in this case,” Lomonossoff said.As approval is imminent for taliglucerase alpha for human use, it is natural to ask why plants, as opposed to micro-organisms and animals, are worth the effort as sources of vaccines, antibiotics or hormones. There are three reasons: first, plants can manufacture some existing drugs more cheaply; second, they can do it more quickly; and third, and perhaps most significantly, they will be able to manufacture more complex proteins that cannot be produced with sufficient yield in any other way.An important example in the first category is insulin, which is being manufactured in increasing quantities to treat type 1 diabetes and some cases of type 2 diabetes. Until the arrival of recombinant DNA technology, replacement insulin was derived from the pancreases of animals in abattoirs, mostly cattle and pigs, but it is now more often produced from transgenic Escherichia coli, or sometimes yeast. Recently, there has been growing interest in using plants rather than bacteria as sources of insulin (Davidson, 2004; Molony et al, 2005). 
SemBioSys, a plant biotechnology company based in Calgary, Canada, is now developing systems to produce insulin and other therapeutic proteins in the seeds of safflower, an oilseed crop (Boothe et al, 2009).

…plants can in principle be engineered to produce any protein, including animal ones…

“We have developed technology that combines the high-capacity, low-cost production of therapeutic proteins in seeds with a novel technology that simplifies downstream purification,” said Joseph Boothe, vice president of research and development at SemBioSys. “The target proteins are engineered to associate with small, oil-containing structures within the seed known as oilbodies,” Boothe explained. “When extracted from the seed these oilbodies and associated proteins can be separated from other components by simple centrifugation. As a result, much of the heavy lifting around the initial purification is accomplished without chromatography, providing for substantial cost savings.”

The second potential advantage of PMPs is their speed to market, which could prove most significant for the production of vaccines, either against emerging diseases or seasonal influenza, for which immunological changes in the virus mean that newly formulated vaccines are required each year. “In terms of a vaccine, I think influenza is very promising particularly as speed is of the essence in combating new strains,” Lomonossoff said. “Using transient expression methods, you can go from sequence to expressed protein in two weeks.” Transient gene expression involves the introduction of genes into a cell to produce a target protein, rather than permanently incorporating the gene into a host genome. This is emerging as a less technically difficult and faster alternative to developing stable cell lines for expressing bioengineered proteins. The process of introducing the desired gene into target cells, known as transfection, can be effected not only by viruses, but also by non-viral agents including various lipids, polyethylenimine and calcium phosphate.

The last of the three advantages of plants for pharmaceutical production—the ability to manufacture proteins not available by other means—is creating perhaps the greatest excitement. The Protalix taliglucerase alpha protein falls into this category, and is likely to be followed by other candidates for treating disorders that require enzymes or complex molecules beyond the scope of bacteria, according to Aviezer. “I would say that for simpler proteins, bacteria will still be the method of choice for a while,” Aviezer said. “But for more complex proteins currently made via mammalian cells, I think we can offer a very attractive alternative using plant cells.”

Indeed, plants can in principle be engineered to produce any protein, including animal ones, as Boothe pointed out. “In some cases this may require additional genetic engineering to enable the plant to perform certain types of protein modification that differ between plants and animals,” he said. “The classic example of this is glycosylation. With recent advances in the field it is now possible to engineer plants to glycosylate proteins in a manner similar to that of mammalian cells.” Glycosylation is a site-directed process that adds mono- or polysaccharides to organic molecules, and plays a vital role in folding and conferring stability on the finished molecule or macromolecule.
Although plants can be engineered to perform it, bacteria generally cannot, which is a major advantage of plant systems over micro-organisms for pharmaceutical manufacture, according to Aviezer. “This enables plant systems to do complex folding and so make proteins for enzyme replacement or antibodies,” Aviezer said.Genomic-assisted breeding is being used either as a substitute for, or a complement to, genetic-modification techniques…In addition to plants themselves, their viruses also have therapeutic potential, either to display epitopes—the protein, sugar or lipid components of antigens on the surface of an infectious agent—so as to trigger an immune response or, alternatively, to deliver a drug directly into a cell. However, as Lomonossoff pointed out, regulatory authorities remain reluctant to approve any compound containing foreign nucleic acids for human use because of the risk of infection as a side effect. “I hope the empty particle technology [viruses without DNA] we have recently developed will revive this aspect,” Lomonossoff said. “The empty particles can also be used as nano-containers for targeted drug delivery and we are actively exploring this.”As pharmaceutical production is emerging as a new field for plant biology, there is a small revolution going on in plant breeding, with the emergence of genomic techniques that allow simultaneous selection across several traits. Although genetic modification can, by importing a foreign gene, provide instant expression of a desired trait, such as drought tolerance, protein content or pesticide resistance, the new field of genomics-assisted breeding has just as great potential through selection of unique variants within the existing gene pool of a plant, according to Douwe de Boer, managing director of the Netherlands biotech group Genetwister. “With this technology it will be possible to breed faster and more efficiently, especially for complex traits that involve multiple genes,” he said. “By using markers it is possible to combine many different traits in one cultivar, variety, or line in a pre-planned manner and as such breed superior crops.”“The application of genomics technologies and next generation sequencing will surely revolutionize plant breeding and will eventually allow this to be achieved with clinical precision”Genomic-assisted breeding is being used either as a substitute for, or a complement to, genetic-modification techniques, both for food crops to bolt on traits such as nutrient value or drought resistance, and for pharmaceutical products, for example to increase the yield of a desired compound or reduce unwanted side effects. Yet, there is more research required to make genomic-assisted breeding as widely used as established genetic-modification techniques. “The challenge in our research is to find markers for each trait and as such we extensively make use of bio-informatics for data storage, analysis and visualization,” de Boer said.The rewards are potentially enormous, according to Alisdair Fernie, a group leader from the Max-Planck-Institute for Molecular Plant Physiology in Potsdam, Germany. “Smart breeding will certainly have a massive impact in the future,” Fernie said. 
“The application of genomics technologies and next generation sequencing will surely revolutionize plant breeding and will eventually allow this to be achieved with clinical precision.” The promise of such genomic technologies in plants extends beyond food and pharmaceuticals to energy and new materials or products such as lubricants; the potential of plants is that they are not just able to produce the desired compound, but can often do so more quickly, efficiently and cheaply than competing biotechnological methods.  相似文献   

9.
Last year''s Nobel Prizes for Carol Greider and Elizabeth Blackburn should be encouraging for all female scientists with childrenCarol Greider, a molecular biologist at Johns Hopkins University (Baltimore, MD, USA), recalled that when she received a phone call from the Nobel Foundation early in October last year, she was staring down a large pile of laundry. The caller informed her that she had won the 2009 Nobel Prize in Physiology or Medicine along with Elizabeth Blackburn, her mentor and co-discoverer of the enzyme telomerase, and Jack Szostak. The Prize was not only the ultimate reward for her own achievements, but it also highlighted a research field in biology that, unlike most others, is renowned for attracting a significant number of women.Indeed, the 2009 awards stood out in particular, as five women received Nobel prizes. In addition to the Prize for Greider and Blackburn, Ada E. Yonath received one in chemistry, Elinor Ostrom became the first female Prize-winner in economics, and Herta Müller won for literature (Fig 1).Open in a separate windowFigure 1The 2009 Nobel Laureates assembled for a photo during their visit to the Nobel Foundation on 12 December 2009. Back row, left to right: Nobel Laureates in Chemistry Ada E. Yonath and Venkatraman Ramakrishnan, Nobel Laureates in Physiology or Medicine Jack W. Szostak and Carol W. Greider, Nobel Laureate in Chemistry Thomas A. Steitz, Nobel Laureate in Physiology or Medicine Elizabeth H. Blackburn, and Nobel Laureate in Physics George E. Smith. Front row, left to right: Nobel Laureate in Physics Willard S. Boyle, Nobel Laureate in Economic Sciences Elinor Ostrom, Nobel Laureate in Literature Herta Müller, and Nobel Laureate in Economic Sciences Oliver E. Williamson. © The Nobel Foundation 2009. Photo: Orasis.Greider, the daughter of scientists, has overcome many obstacles during her career. She had dyslexia that placed her in remedial classes; “I thought I was stupid,” she told The New York Times (Dreifus, 2009). Yet, by far the biggest challenge she has tackled is being a woman with children in a man''s world. When she attended a press conference at Johns Hopkins to announce the Prize, she brought her children Gwendolyn and Charles with her (Fig 2). “How many men have won the Nobel in the last few years, and they have kids the same age as mine, and their kids aren''t in the picture? That''s a big difference, right? And that makes a statement,” she said.The Prize […] highlighted a research field in biology that, unlike most others, is renowned for attracting a significant number of womenOpen in a separate windowFigure 2Mother, scientist and Nobel Prize-winner: Carol Greider is greeted by her lab and her children. © Johns Hopkins Medicine 2009. Photo: Keith Weller.Marie Curie (1867–1934), the Polish–French physicist and chemist, was the first woman to win the Prize in 1903 for physics, together with her husband Pierre, and again for chemistry in 1911—the only woman to twice achieve such recognition. Curie''s daughter Irène Joliot-Curie (1897–1956), a French chemist, also won the Prize with her husband Frédéric in 1935. Since Curie''s 1911 prize, 347 Nobel Prizes in Physiology or Medicine and Chemistry (the fields in which biologists are recognized) have been awarded, but only 14—just 4%—have gone to women, with 9 of these awarded since 1979. That is a far cry from women holding up half the sky.Yet, despite the dominance of men in biology and the other natural sciences, telomere research has a reputation as a field dominated by women. 
Daniela Rhodes, a structural biologist and senior scientist at the MRC Laboratory of Molecular Biology (Cambridge, UK), recalls joining the field in 1993. “When I went to my first meeting, my world changed because I was used to being one of the few female speakers,” she said. “Most of the speakers there were female.” She estimated that 80% of the speakers at meetings at Cold Spring Harbor Laboratory in those early days were women, while the ratio in the audience was more balanced.

Since Curie’s 1911 prize, 347 Nobel Prizes in Physiology or Medicine and Chemistry […] have been awarded, but only 14—just 4%—have gone to women…

“There’s nothing particularly interesting about telomeres to women,” Rhodes explained. “[The] field covers some people like me who do structural biology, to cell biology, to people interested in cancers […] It could be any other field in biology. I think it’s [a result of] having women start it and [including] other women.” Greider comes to a similar conclusion: “I really see it as a founder effect. It started with Joe Gall [who originally recruited Blackburn to work in his lab].”

Gall, a cell biologist, […] welcomed women to his lab at a time when the overall situation for women in science was “reasonably glum”…

Gall, a cell biologist, earned a reputation for being gender neutral while working at Yale University in the 1950s and 1960s; he welcomed women to his lab at a time when the overall situation for women in science was “reasonably glum,” as he put it. “It wasn’t that women were not accepted into PhD programs. It’s just that the opportunities for them afterwards were pretty slim,” he explained.

“Very early on he was very supportive to a number of women who went on and then had their own labs and […] many of those women [went] out in the world [to] train other women,” Greider commented. “A whole tree that then grows up that in the end there are many more women in that particular field simply because of that historical event.”

Thomas Cech, who won the Nobel Prize for Chemistry in 1989 and who worked in Gall’s lab with Blackburn, agreed: “In biochemistry and metabolism, we talk about positive feedback loops. This was a positive feedback loop. Joe Gall’s lab at Yale was an environment that was free of bias against women, and it was scientifically supportive.”

Gall, now 81 and working at the Carnegie Institution of Washington (Baltimore, MD, USA), is somewhat dismissive about his positive role. “It never occurred to me that I was doing anything unusual. It literally, really did not. And it’s only been in the last 10 or 20 years that anyone made much of it,” he said. “If you look back, […] my laboratory [was] very close to [half] men and [half] women.”

During the 1970s and 1980s, “[w]hen I entered graduate school,” Greider recalled, “it was a time when the number of graduate students [who] were women was about 50%. And it wasn’t unusual at all.” What has changed, though, is the number of women choosing to pursue a scientific career further. According to the US National Science Foundation (Arlington, VA, USA), women received 51.8% of doctorates in the life sciences in 2006, compared with 43.8% in 1996, 34.6% in 1986, 20.7% in 1976 and 11.9% in 1966 (www.nsf.gov/statistics).

In fact, Gall suspects that biology tends to attract more women than the other sciences. “I think if you look in biology departments that you would find a higher percentage [of women] than you would in physics and chemistry,” he said.
“I think […] it''s hard to dissociate societal effects from specific effects, but probably fewer women are inclined to go into chemistry [or] physics. Certainly, there is no lack of women going into biology.” However, the representation of women falls off at each level, from postdoc to assistant professor and tenured professor. Cech estimated that only about 20% of the biology faculty in the USA are women.“[It] is a leaky pipeline,” Greider explained. “People exit the system. Women exit at a much higher proportion than do men. I don''t see it as a [supply] pipeline issue at all, getting the trainees in, because for 25 years there have been a great number of women trainees.“We all thought that with civil rights and affirmative action you''d open the doors and women would come in and everything would just follow. And it turned out that was not true.”Nancy Hopkins, a molecular biologist and long-time advocate on issues affecting women faculty members at the Massachusetts Institute of Technology (Cambridge, MA, USA), said that the situation in the USA has improved because of civil rights laws and affirmative action. “I was hired—almost every woman of my generation was hired—as a result of affirmative action. Without it, there wouldn''t have been any women on the faculty,” she said, but added that: “We all thought that with civil rights and affirmative action you''d open the doors and women would come in and everything would just follow. And it turned out that was not true.”Indeed, in a speech at an academic conference in 2005, Harvard President Lawrence Summers said that innate differences between males and females might be one reason why fewer women than men succeeded in science and mathematics. The economist, who served as Secretary of Treasury under President William Clinton, told The Boston Globe that “[r]esearch in behavioural genetics is showing that things people previously attributed to socialization weren''t [due to socialization after all]” (Bombardieri, 2005).Some attendees of the meeting were angered by Summers''s remarks that women do not have the same ‘innate ability'' as men in some fields. Hopkins said she left the meeting as a protest and in “a state of shock and rage”. “It isn''t a question of political correctness, it''s about making unscientific, unfounded and damaging comments. It''s what discrimination is,” she said, adding that Summers''s views reflect the problems women face in moving up the ladder in academia. “To have the president of Harvard say that the second most important reason for their not being equal was really their intrinsic genetic inferiority is so shocking that no matter how many times I think back to his comments, I''m still shocked. These women were not asking to be considered better or special. They were just asking to have their gender be invisible.”Nonetheless, women are making inroads into academia, despite lingering prejudice and discrimination. One field of biology that counts a relatively high number of successful women among its upper ranks is developmental biology. Christiane Nüsslein-Volhard, for example, is Director of the Max Planck Institute for Developmental Biology in Tübingen, Germany, and won the Nobel Prize for Physiology or Medicine in 1995 for her work on the development of Drosophila embryos. 
She estimated that about 30% of speakers at conferences in her field are women.…for many women, the recent Nobel Prize for Greider […] and Blackburn […] therefore comes as much needed reassurance that it is possible to combine family life and a career in scienceHowever, she also noted that women have never been the majority in her own lab owing to the social constraints of German society. She explained that in Germany, Switzerland and Austria, family issues pose barriers for many women who want to have children and advance professionally because the pressure for women to not use day care is extremely strong. As such, “[w]omen want to stay home because they want to be an ideal mother, and then at the same time they want to go to work and do an ideal job and somehow this is really very difficult,” she said. “I don''t know a single case where the husband stays at home and takes care of the kids and the household. This doesn''t happen. So women are now in an unequal situation because if they want to do the job, they cannot; they don''t have a chance to find someone to do the work for them. […] The wives need wives.” In response to this situation, Nüsslein-Volhard has established the CNV Foundation to financially support young women scientists with children in Germany, to help pay for assistance with household chores and child care.Rhodes, an Italian native who grew up in Sweden, agreed with Nüsslein-Volhard''s assessment of the situation for many European female scientists with children. “Some European countries are very old-fashioned. If you look at the Protestant countries like Holland, women still do not really go out and have a career. It tends to be the man,” she said. “What I find depressing is [that in] a country like Sweden where I grew up, which is a very liberated country, there has been equality between men and women for a couple of generations, and if you look at the percentage of female professors at the universities, it''s still only 10%.” In fact, studies both from Europe and the USA show that academic science is not a welcoming environment for women with children; less so than for childless women and fathers, who are more likely to succeed in academic research (Ledin et al, 2007; Martinez et al, 2007).For Hopkins, her divorce at the age of 30 made a choice between children or a career unavoidable. Offered a position at MIT, she recalled that she very deliberately chose science. She said that she thought to herself: “Okay, I''m going to take the job, not have children and not even get married again because I couldn''t imagine combining that career with any kind of decent family life.” As such, for many women, the recent Nobel Prize for Greider, who raised two children, and Blackburn (Fig 3), who raised one, therefore comes as much needed reassurance that it is possible to combine family life and a career in science. Hopkins said the appearance of Greider and her children at the press conference sent “the message to young women that they can do it, even though very few women in my generation could do it. The ways in which some women are managing to do it are going to become the role models for the women who follow them.”Open in a separate windowFigure 3Elizabeth Blackburn greets colleagues and the media at a reception held in Genentech Hall at UCSF Mission Bay to celebrate her award of the Nobel Prize in Physiology or Medicine. © University of California, San Francisco 2009. Photo: Susan Merrell.  相似文献   

10.
With the advent of molecular biology, genomics, and proteomics, the intersection between science and law has become increasingly significant. In addition to the ethical and legal concerns surrounding the collection, storage, and use of genomic data, patent disputes for new biotechnologies are quickly becoming part of mainstream business discussions. Under current patent law, new technologies cannot be patented if they are “obvious” changes to an existing patent. The definition of “obvious,” therefore, has a huge impact on determining whether a patent is granted. For example, are modifications to microarray protocols, popular in diagnostic medicine, considered “obvious” improvements of previous products? Also, inventions that are readily apparent now may not have been obvious when discovered. Polymerase chain reaction, or PCR, is now a common component of every biologist’s toolbox and seems like an obvious invention, though it clearly was not in 1983. Thus, there is also a temporal component that complicates the interpretation of an invention’s obviousness. The following article discusses how a recent Supreme Court decision has altered the definition of “obviousness” in patent disputes. By examining how the obviousness standard has changed, the article illuminates how legal definitions that seem wholly unrelated to biology or medicine could still potentially have enormous effects on these fieldsJust what is obvious or not is a question that has provoked substantial litigation in the Federal Circuit, the appellate court with special jurisdiction over patent law disputes. Under U.S. patent law, an inventor may not obtain a patent, which protects his invention from infringement by others, if the differences between the subject matter sought to be patented and the prior art are such that “the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill” in the patent’s subject matter area [1]. However, what was “obvious” at the time of invention to a person of ordinary skill is hardly clear and is, in effect, a legal fiction designed to approximate objectivity. As illustrated by Chief Justice John Roberts of the Supreme Court in a moment of levity, “Who do you get to ... tell you something’s not obvious … the least insightful person you can find?” [2] Despite the apparent objectivity provided by a “person of ordinary skill” obviousness standard, the difficulty lies in that such a standard is still susceptible to multiple interpretations, depending on the point of view and knowledge ascribed to the “ordinary person.” As such, how obviousness is defined and interpreted by the courts will have important implications on biotechnology patents and the biotechnology business.The issue of obviousness arose in April 2007 when the Supreme Court handed down its decision in KSR Int’l Co. v. Teleflex, Inc. [3] The facts of the case were anything but glamorous; in the suit, Teleflex, a manufacturer of adjustable pedal systems for automobiles, sued KSR, its rival, for infringement of its patent, which “describe[d] a mechanism for combining an electronic sensor with an adjustable automobile pedal so that the pedal’s position can be transmitted to a computer that controls the throttle in the vehicle’s engine.” [4] Teleflex believed that KSR’s new pedal design was too similar to its own patented design and therefore infringed upon it [5]. 
In defense, KSR argued that Teleflex’s patent was merely the obvious combination of two pre-existing elements and, thus, the patent, upon which Teleflex’s infringement claim was based, was invalid.Patent law relies on the concept of obviousness to distinguish whether new inventions are worthy of being protected by a patent. If a new invention is too obvious, it is not granted a patent and is therefore not a legally protected property interest. However, if an invention is deemed not obvious and has met the other patentability requirements, a patent will be granted, thereby conferring exclusive use of the invention to the patent holder. This exclusive right prohibits others from making, using, selling, offering to sell, or importing into the United States the patented invention [6]. Essentially, the definition of obviousness sets the balance between rewarding new inventions with exclusive property rights and respecting old inventions by not treating minor variations of existing patents as new patents. In this manner, the law seeks to provide economic incentives for the creation of new inventions by ensuring that the property right conferred by the patent will be protected against insignificant variations. The importance of where the line for obviousness is drawn and how clearly it is drawn is especially important in the biotechnology industry. Studies have shown that the development of a new pharmaceutical therapy can take up to 14 years with costs exceeding $800 million [7]. Such an enormous investment of time and money would not be practical if it did not predictably result in a legally enforceable property right.The standard for what constitutes a patentable discovery has evolved over the last 150 years. In 1851, the Supreme Court held in Hotchkiss v. Greenwood that a patentable discovery required a level of ingenuity above that possessed by an ordinary person [8]. Lower courts treated the Hotchkiss standard as a subjective standard, whereby courts sought to determine “what constitute[d] an invention” [9] and a “flash of creative genius” [10]. However, the attempts at imposing the Hotchkiss standard proved unworkable, and in 1952, Congress overrode the case law with the Patent Act, “mandat[ing] that patentability be governed by an objective nonobviousness standard.” [11] This new statutory standard moved the courts away from subjective determinations and toward a more workable, objective obviousness standard.While the Patent Act laid the foundation for the current obviousness standard, the Supreme Court in Graham v. John Deere Co. interpreted the statutory language in an attempt to provide greater clarity as to what exactly “obvious” meant [12]. The Supreme Court determined that the objective analysis would require “the scope and content of the prior art ... to be determined; differences between the prior art and the claims at issue ... to be ascertained; and the level of ordinary skill in the pertinent art resolved.” [13] In addition to analysis under this three-part framework, the Supreme Court called for several secondary considerations to be weighed, including “commercial success, long felt but unresolved needs, [and the] failure of others [to solve the problem addressed].” [13]Unsurprisingly, lower courts were unsatisfied with the Supreme Court’s attempts to clarify the obviousness standard and sought to provide “more uniformity and consistency” to their evaluation of obviousness than the Supreme Court’s jumble of factors provided [14]. 
In search of consistency, the Federal Circuit created the “teaching, suggestion, or motivation” test (TSM test) “under which a patent is only proved obvious if ‘some motivation or suggestion to combine prior art teachings’ can be found in the prior art, the nature of the problem, or the knowledge of a person having ordinary skill in the art.” [14] Through implementation of the TSM test, the Federal Circuit sought to maintain the flexibility envisioned by the Supreme Court in Graham, while at the same time providing more certainty and predictability to obviousness determinations.The issue before the Supreme Court in KSR Int’l Co. v. Teleflex, Inc. was whether the Federal Circuit’s elaboration on the statutory language of the Patent Act, the TSM test, was consistent with the terms of the Patent Act itself and the Supreme Court’s own analysis in Graham. The Supreme Court determined that while the TSM test was, on its terms, consistent with the framework set out in Graham, the rigid manner in which the Federal Circuit had taken to applying that standard was inconsistent with the flexible approach established by Graham [15]. More generally, it appears the Supreme Court was mainly interested in restoring a more rounded, thorough inquiry to the evaluation of obviousness: “Graham set forth a broad inquiry and invited courts, where appropriate, to look at any secondary considerations that would prove instructive.” [16] As stated by the Supreme Court, “[r]igid preventative rules that deny factfinders recourse to common sense, however, are neither necessary under our case law nor consistent with it.” [17] As such, the Supreme Court reversed the findings of the Federal Circuit, which had found the Teleflex patent valid, and remanded the case back to the lower court with directions to analyze, without rigid adherence to the TSM test, whether the Teleflex patent was obvious [18].The Supreme Court’s ruling in KSR Int’l Co. v. Teleflex, Inc. that the Federal Circuit apply its TSM test less rigidly may have implications for those seeking biotechnology patents in the future. As discussed above, the large investments necessary to develop a marketable biotechnology product demand that entrepreneurs making those investments be reasonably assured that they can predict any future legal hurdles in patenting their invention and in ultimately protecting their patent. As explained by the Biotechnology Industry Organization in its amicus curiae brief in KSR Int’l Co. v. Teleflex, Inc., “[i]nvestment thus is predicated on an expected return on investment in the form of products or services that are protected by patents whose validity can be fairly determined.” [19] Therefore, the Supreme Court’s insistence that the Federal Circuit no longer rigidly rely on the TSM test could increase uncertainty in the grant of future patents. However, the Supreme Court’s refusal to completely dismiss the TSM test, while in fact endorsing its continued use, albeit on a less rigid basis, has to be viewed as a profound victory for an industry with a significant stake in maintaining the status quo. Moreover, it is unclear how much the Supreme Court’s holding in KSR Int’l Co. v. Teleflex, Inc. will truly change the legal analysis of the lower courts, given the evidence that lower courts already were independently shifting away from rigid adherence to the TSM test before the Supreme Court’s ruling [20].More importantly, several aspects of the Supreme Court’s reasoning in KSR Int’l Co. v. Teleflex, Inc. 
seem to directly address relevant concerns of the biotechnology market in favorable ways. First, the Supreme Court made clear that though a product is the result of a combination of elements that were “obvious to try,” it is not necessarily “obvious” under the Patent Act. Retaining the possibility that “obvious to try” inventions still may be patentable is extremely important to the biotechnology industry in particular because “many patentable inventions in biotechnology spring from known components and methodologies found in [the] prior art.” [21] Rather than foreclosing all “obvious to try” inventions as being obvious, and therefore not patentable, the Supreme Court instead explained that where there is “a design need or market pressure to solve a problem and there are a finite number of identified, predictable solutions,” it is more likely that a person of ordinary skill would find it obvious to pursue “known options.” [22] Thus, the proper inquiry, as stated by the Supreme Court, is “whether the improvement is more than the predictable use of prior art elements according to their established functions.” [23] While this reasoning may prevent some “obvious to try” inventions from being patented, it is unlikely to have a substantial effect on inventions in the biotechnology market because “most advances in biotechnology are only won through great effort and expense, and with only a low probability of success in achieving the claimed invention at the outset.” [24] In other words, it would be hard to characterize the use of prior art in the biotechnology context as predictable based on the inherent unpredictability of obtaining favorable results. As such, most biotechnology inventions would presumably fall outside the Supreme Court’s “obvious to try” reasoning due to the very nature of the industry, meaning they would remain patentable under the Supreme Court’s KSR Int’l Co. v. Teleflex, Inc. decision.Second, the Supreme Court recognized the “distortion caused by hindsight bias” and the importance of avoiding “arguments reliant upon ex post reasoning,” though it lessened the Federal Circuit’s rigid protection against hindsight bias [24]. Hindsight bias requires that obviousness be viewed at the time the invention was made, because what may seem revolutionary at the time of invention may, upon the passage of time, seem “obvious.” Cognizance of hindsight bias is crucial for biotechnology patents because “there often is a long ‘passage of time between patent application filing and litigation with biotechnology inventions [that] can exacerbate the problem’ of hindsight bias.” [25] The problem is further exacerbated by the “significantly longer durations of commercial utility” biotechnology inventions enjoy as compared to those in other fields [25]. The more time between the filing of a patent and the subsequent litigation over its validity, the greater the risk that “reliable accounts of [the] context” in which the discovery is made will no longer exist [26]. As such, inventions that were not obvious when they were created will be inescapably colored by the passage of time and by new knowledge and discoveries; the likelihood of this occurrence is higher the further removed the litigation is from the patent filing date. Once again, however, it seems clear that despite the Supreme Court’s abandonment of the TSM test’s rigidity, strong protections against hindsight bias still were emphasized in the Supreme Court’s KSR Int’l Co. v. Teleflex, Inc. decision. 
In fact, lower courts applying KSR Int’l Co. v. Teleflex, Inc. acknowledge they are “cautious” to avoid “using hindsight” in biotechnology obviousness determinations [27].Finally, the Supreme Court seems to believe that the imposition of a more flexible approach will be more likely to benefit markets not directly at issue in KSR Int’l Co. v. Teleflex, Inc. The Supreme Court asserted, “[t]he diversity of inventive pursuits and of modern technology counsels against limiting the analysis” to the rigid TSM test of the Federal Circuit [28]. This language suggests that the Supreme Court expects lower courts to take into consideration the special considerations facing unique markets, such as the biotechnology market. As such, the specific concerns of the biotechnology market discussed above may receive more attention under the flexible framework asserted by the Supreme Court in KSR Int’l Co. v. Teleflex, Inc.Leading up to the oral argument in KSR Int’l Co. v. Teleflex, Inc., there was widespread speculation that the case could result in a watershed moment, significantly altering the definition of obviousness in patent law. For many, including those in the biotechnology industry, there was ample reason to be concerned. Any change in the definition of obviousness would effectively shift property rights from new patent holders to old, or vice versa. However, the Supreme Court acted with restraint. While the decision purports to make substantial changes by doing away with the Federal Circuit’s TSM test, the opinion seems more like a mild-mannered rebuke of lower courts that had become too complacent in the implementation of their beloved test. If anything, the Supreme Court’s insistence on a more flexible formula is simply a call for lower courts to employ common sense, in addition to considering the factors from Graham and the TSM test. Accordingly, the Supreme Court’s opinion in KSR Int’l Co. v. Teleflex, Inc. is unlikely to have a pronounced effect on the biotechnology market, despite the widespread concern generated before the actual decision was handed down.  相似文献   

11.
Samuel Caddick 《EMBO reports》2008,9(12):1174-1176
  相似文献   

12.
Zhang JY 《EMBO reports》2011,12(4):302-306
How can grass-roots movements evolve into a national research strategy? The bottom-up emergence of synthetic biology in China could give some pointers.
Given its potential to aid developments in renewable energy, biosensors, sustainable chemical industries, microbial drug factories and biomedical devices, synthetic biology has enormous implications for economic development. Many countries are therefore implementing strategies to promote progress in this field. Most notably, the USA is considered to be the leader in exploring the industrial potential of synthetic biology (Rodemeyer, 2009). Synthetic biology in Europe has benefited from several cross-border studies, such as the ‘New and Emerging Science and Technology’ programme (NEST, 2005) and the ‘Towards a European Strategy for Synthetic Biology’ project (TESSY; Gaisser et al, 2008). Yet, little is known in the West about Asia’s role in this ‘new industrial revolution’ (Kitney, 2009). In particular, China is investing heavily in scientific research for future developments, and is therefore likely to have an important role in the development of synthetic biology.
Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework…
In 2010, as part of a study of the international governance of synthetic biology, the author visited four leading research teams in three Chinese cities (Beijing, Tianjin and Hefei). The main aims of the visits were to understand perspectives in China on synthetic biology, to identify core themes among its scientific community, and to address questions such as ‘how did synthetic biology emerge in China?’, ‘what are the current funding conditions?’, ‘how is synthetic biology generally perceived?’ and ‘how is it regulated?’. Initial findings seem to indicate that the emergence of synthetic biology in China has been a bottom-up construction of a new scientific framework; one that is more dynamic and comprises more options than existing national or international research and development (R&D) strategies. Such findings might contribute to Western knowledge of Chinese R&D, but could also expose European and US policy-makers to alternative forms and patterns of research governance that have emerged from a grass-roots level.
…the process of developing a framework is at least as important to research governance as the big question it might eventually address
A dominant narrative among the scientists interviewed is the prospect of a ‘big-question’ strategy to promote synthetic-biology research in China. This framework is at a consultation stage and key questions are still being discussed. Yet, fieldwork indicates that the process of developing a framework is at least as important to research governance as the big question it might eventually address. According to several interviewees, this approach aims to organize dispersed national R&D resources into one grand project that is essential to the technical development of the field, preferably focusing on an industry-related theme that is economically appealing to the Chinese public.
Chinese scientists have a pragmatic vision for research; thinking of science in terms of its ‘instrumentality’ has long been regarded as characteristic of modern China (Schneider, 2003).
However, for a country in which the scientific community is sometimes described as an “uncoordinated ‘bunch of loose ends''” (Cyranoski, 2001) “with limited synergies between them” (OECD, 2007), the envisaged big-question approach implies profound structural and organizational changes. Structurally, the approach proposes that the foundational (industry-related) research questions branch out into various streams of supporting research and more specific short-term research topics. Within such a framework, a variety of Chinese universities and research institutions can be recruited and coordinated at different levels towards solving the big question.It is important to note that although this big-question strategy is at a consultation stage and supervised by the Ministry of Science and Technology (MOST), the idea itself has emerged in a bottom-up manner. One academic who is involved in the ongoing ministerial consultation recounted that, “It [the big-question approach] was initially conversations among we scientists over the past couple of years. We saw this as an alternative way to keep up with international development and possibly lead to some scientific breakthrough. But we are happy to see that the Ministry is excited and wants to support such an idea as well.” As many technicalities remain to be addressed, there is no clear time-frame yet for when the project will be launched. Yet, this nationwide cooperation among scientists with an emerging commitment from MOST seems to be largely welcomed by researchers. Some interviewees described the excitement it generated among the Chinese scientific community as comparable with the establishment of “a new ‘moon-landing'' project”.Of greater significance than the time-frame is the development process that led to this proposition. On the one hand, the emergence of synthetic biology in China has a cosmopolitan feel: cross-border initiatives such as international student competitions, transnational funding opportunities and social debates in Western countries—for instance, about biosafety—all have an important role. On the other hand, the development of synthetic biology in China has some national particularities. Factors including geographical proximity, language, collegial familiarity and shared interests in economic development have all attracted Chinese scientists to the national strategy, to keep up with their international peers. Thus, to some extent, the development of synthetic biology in China is an advance not only in the material synthesis of the ‘cosmos''—the physical world—but also in the social synthesis of aligning national R&D resources and actors with the global scientific community.To comprehend how Chinese scientists have used national particularities and global research trends as mutually constructive influences, and to identify the implications of this for governance, this essay examines the emergence of synthetic biology in China from three perspectives: its initial activities, the evolution of funding opportunities, and the ongoing debates about research governance.China''s involvement in synthetic biology was largely promoted by the participation of students in the International Genetically Engineered Machine (iGEM) competition, an international contest for undergraduates initiated by the Massachusetts Institute of Technology (MIT) in the USA. 
Before the iGEM training workshop that was hosted by Tianjin University in the Spring of 2007, there were no research records and only two literature reviews on synthetic biology in Chinese scientific databases (Zhao & Wang, 2007). According to Chunting Zhang of Tianjin University—a leading figure in the promotion of synthetic biology in China—it was during these workshops that Chinese research institutions joined their efforts for the first time (Zhang, 2008). From the outset, the organization of the workshop had a national focus, while it engaged with international networks. Synthetic biologists, including Drew Endy from MIT and Christina Smolke from Stanford University, USA, were invited. Later that year, another training camp designed for iGEM tutors was organized in Tianjin and included delegates from Australia and Japan (Zhang, 2008).Through years of organizing iGEM-related conferences and workshops, Chinese universities have strengthened their presence at this international competition; in 2007, four teams from China participated. During the 2010 competition, 11 teams from nine universities in six provinces/municipalities took part. Meanwhile, recruiting, training and supervising iGEM teams has become an important institutional programme at an increasing number of universities.…training for iGEM has grown beyond winning the student awards and become a key component of exchanges between Chinese researchers and the international communityIt might be easy to interpret the enthusiasm for the iGEM as a passion for winning gold medals, as is conventionally the case with other international scientific competitions. This could be one motive for participating. Yet, training for iGEM has grown beyond winning the student awards and has become a key component of exchanges between Chinese researchers and the international community (Ding, 2010). Many of the Chinese scientists interviewed recounted the way in which their initial involvement in synthetic biology overlapped with their tutoring of iGEM teams. One associate professor at Tianjin University, who wrote the first undergraduate textbook on synthetic biology in China, half-jokingly said, “I mainly learnt [synthetic biology] through tutoring new iGEM teams every year.”Participation in such contests has not only helped to popularize synthetic biology in China, but has also influenced local research culture. One example of this is that the iGEM competition uses standard biological parts (BioBricks), and new BioBricks are submitted to an open registry for future sharing. A corresponding celebration of open-source can also be traced to within the Chinese synthetic-biology community. In contrast to the conventional perception that the Chinese scientific sector consists of a “very large number of ‘innovative islands''” (OECD, 2007; Zhang, 2010), communication between domestic teams is quite active. In addition to the formally organized national training camps and conferences, students themselves organize a nationwide, student-only workshop at which to informally test their ideas.More interestingly, when the author asked one team whether there are any plans to set up a ‘national bank'' for hosting designs from Chinese iGEM teams, in order to benefit domestic teams, both the tutor and team members thought this proposal a bit “strange”. The team leader responded, “But why? There is no need. With BioBricks, we can get any parts we want quite easily. 
Plus, it directly connects us with all the data produced by iGEM teams around the world, let alone in China. A national bank would just be a small-scale duplicate.”From the beginning, interest in the development of synthetic biology in China has been focused on collective efforts within and across national borders. In contrast to conventional critiques on the Chinese scientific community''s “inclination toward competition and secrecy, rather than openness” (Solo & Pressberg, 2007; OECD, 2007; Zhang, 2010), there seems to be a new outlook emerging from the participation of Chinese universities in the iGEM contest. Of course, that is not to say that the BioBricks model is without problems (Rai & Boyle, 2007), or to exclude inputs from other institutional channels. Yet, continuous grass-roots exchanges, such as the undergraduate-level competition, might be as instrumental as formal protocols in shaping research culture. The indifference of Chinese scientists to a ‘national bank'' seems to suggest that the distinction between the ‘national'' and ‘international'' scientific communities has become blurred, if not insignificant.However, frequent cross-institutional exchanges and the domestic organization of iGEM workshops seem to have nurtured the development of a national synthetic-biology community in China, in which grass-roots scientists are comfortable relying on institutions with a cosmopolitan character—such as the BioBricks Foundation—to facilitate local research. To some extent, one could argue that in the eyes of Chinese scientists, national and international resources are one accessible global pool. This grass-roots interest in incorporating local and global advantages is not limited to student training and education, but also exhibited in evolving funding and regulatory debates.In the development of research funding for synthetic biology, a similar bottom-up consolidation of national and global resources can also be observed. As noted earlier, synthetic-biology research in China is in its infancy. A popular view is that China has the potential to lead this field, as it has strong support from related disciplines. In terms of genome sequencing, DNA synthesis, genetic engineering, systems biology and bioinformatics, China is “almost at the same level as developed countries” (Pan, 2008), but synthetic-biology research has only been carried out “sporadically” (Pan, 2008; Huang, 2009). There are few nationally funded projects and there is no discernible industrial involvement (Yang, 2010). Most existing synthetic-biology research is led by universities or institutions that are affiliated with the Chinese Academy of Science (CAS). As one CAS academic commented, “there are many Chinese scientists who are keen on conducting synthetic-biology research. But no substantial research has been launched nor has long-term investment been committed.”The initial undertaking of academic research on synthetic biology in China has therefore benefited from transnational initiatives. The first synthetic-biology project in China, launched in October 2006, was part of the ‘Programmable Bacteria Catalyzing Research'' (PROBACTYS) project, funded by the Sixth Framework Programme of the European Union (Yang, 2010). 
A year later, another cross-border collaborative effort led to the establishment of the first synthetic-biology centre in China: the Edinburgh University–Tianjing University Joint Research Centre for Systems Biology and Synthetic Biology (Zhang, 2008).There is also a comparable commitment to national research coordination. A year after China''s first participation in iGEM, the 2008 Xiangshan conference focused on domestic progress. From 2007 to 2009, only five projects in China received national funding, all of which came from the National Natural Science Foundation of China (NSFC). This funding totalled ¥1,330,000 (approximately £133,000; www.nsfc.org), which is low in comparison to the £891,000 funding that was given in the UK for seven Networks in Synthetic Biology in 2007 alone (www.bbsrc.ac.uk).One of the primary challenges in obtaining funding identified by the interviewees is that, as an emerging science, synthetic biology is not yet appreciated by Chinese funding agencies. After the Xiangshan conference, the CAS invited scientists to a series of conferences in late 2009. According to the interviewees, one of the main outcomes was the founding of a ‘China Synthetic Biology Coordination Group''; an informal association of around 30 conference delegates from various research institutions. This group formulated a ‘regulatory suggestion'' that they submitted to MOST, which stated the necessity and implications of supporting synthetic-biology research. In addition, leading scientists such as Chunting Zhang and Huanming Yang—President of the Beijing Genomic Institute (BGI), who co-chaired the Beijing Institutes of Life Science (BILS) conferences—have been active in communicating with government institutions. The initial results of this can be seen in the MOST 2010 Application Guidelines for the National Basic Research Program, in which synthetic biology was included for the first time, among ‘key supporting areas'' (MOST, 2010). Meanwhile, in 2010, NSFC allocated ¥1,500,000 (approximately £150,000) to synthetic-biology research, which is more than the total funding the area had received in the past three years.The search for funding further demonstrates the dynamics between national and transnational resources. Chinese R&D initiatives have to deal with the fact that scientific venture-capital and non-governmental research charities are underdeveloped in China. In contrast to the EU or the USA, government institutions in China, such as the NSFC and MOST, are the main and sometimes only domestic sources of funding. Yet, transnational funding opportunities facilitate the development of synthetic biology by alleviating local structural and financial constraints, and further integrate the Chinese scientific community into international research.This is not a linear ‘going-global'' process; it is important for Chinese scientists to secure and promote national and regional support. In addition, this alignment of national funding schemes with global research progress is similar to the iGEM experience, as it is being initiated through informal bottom-up associations between scientists, rather than by top-down institutional channels.As more institutions have joined iGEM training camps and participated in related conferences, a shared interest among the Chinese scientific community in developing synthetic biology has become visible. 
In late 2009, at the conference that founded the informal ‘coordination group’, the proposition of integrating national expertise through a big-question approach emerged. According to one professor in Beijing—who was a key participant in the discussion at the time—this proposition of a nationwide synergy was not so much about ‘national pride’ or an aim to develop a ‘Chinese’ synthetic biology as about research practicality. She explained, “synthetic biology is at the convergence of many disciplines, computer modelling, nano-technology, bioengineering, genomic research etc. Individual researchers like me can only operate on part of the production chain. But I myself would like to see where my findings would fit in a bigger picture as well. It just makes sense for a country the size of China to set up some collective and coordinated framework so as to seek scientific breakthrough.”
From the first participation in the iGEM contest to the later exploration of funding opportunities and collective research plans, scientists have been keen to invite and incorporate domestic and international resources, to keep up with global research. Yet, there are still regulatory challenges to be met.
…with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual’ manner
The reputation of “the ‘wild East’ of biology” (Dennis, 2002) is associated with China’s previous inattention to ethical concerns about the life sciences, especially in embryonic-stem-cell research. Similarly, synthetic biology creates few social concerns in China. Public debate is minimal and most media coverage has been positive. Synthetic biology is depicted as “a core in the fourth wave of scientific development” (Pan, 2008) or “another scientific revolution” (Huang, 2009). Whilst recognizing its possible risks, mainstream media believe that “more people would be attracted to doing good while making a profit than doing evil” (Fang & He, 2010). In addition, biosecurity and biosafety training in China are at an early stage, with few mandatory courses for students (Barr & Zhang, 2010). The four leading synthetic-biology teams I visited regarded the general biosafety regulations that apply to microbiology laboratories as sufficient for synthetic biology. In short, with little social discontent and no imminent public threat, synthetic biology in China could be carried out in a ‘research-as-usual’ manner.
Yet, fieldwork suggests that, in contrast to this previous insensitivity to global ethical concerns, the synthetic-biology community in China has taken a more proactive approach to engaging with international debates. It is important to note that there are still no synthetic-biology-specific administrative guidelines or professional codes of conduct in China. However, Chinese stakeholders participate in building a ‘mutual inclusiveness’ between global and domestic discussions.
One of the most recent examples of this is a national conference about the ethical and biosafety implications of synthetic biology, which was jointly hosted by the China Association for Science and Technology, the Chinese Society of Biotechnology and the Beijing Institutes of Life Science, CAS, in Suzhou in June 2010. The discussion was open to the mainstream media. The debate was not simply a recapitulation of Western worries, such as playing god, potential dual-use or ecological containment.
It also focused on the particular concerns of developing countries about how to avoid further widening the developmental gap with advanced countries (Liu, 2010).
In addition to general discussions, there are also sustained transnational communications. For example, one of the first three projects funded by the NSFC was a three-year collaboration on biosafety and risk-assessment frameworks between the Institute of Botany at CAS and the Austrian Organization for International Dialogue and Conflict Management (IDC).
Chinese scientists are also keen to increase their involvement in the formulation of international regulations. The CAS and the Chinese Academy of Engineering are engaged with their peer institutions in the UK and the USA to “design more robust frameworks for oversight, intellectual property and international cooperation” (Royal Society, 2009). It is too early to tell what influence China will achieve in this field. Yet, the changing image of the country from an unconcerned wild East to a partner in lively discussions signals a new dynamic in the global development of synthetic biology.
Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China
From self-organized participation in iGEM to bottom-up funding and governance initiatives, two features are repeatedly exhibited in the emergence of synthetic biology in China: global resources and international perspectives complement national interests; and the national and cosmopolitan research strengths are mostly instigated at the grass-roots level. During the process of introducing, developing and reflecting on synthetic biology, many formal or informal, provisional or long-term alliances have been established from the bottom up. Student contests, funding programmes, joint research centres and coordination groups are only a few of the means by which scientists can drive synthetic biology forward in China.
However, the inputs of different social actors have not led to disintegration of the field into an array of individualized pursuits, but have transformed it into collective synergies, or the big-question approach. Underlying the diverse efforts of Chinese scientists is a sense of ‘inclusiveness’, or the idea of bringing together previously detached research expertise. Thus, the big-question strategy cannot be interpreted as just another nationally organized agenda in response to global scientific advancements. Instead, it represents a more intricate development path corresponding to how contemporary research evolves on the ground.
In comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage
In comparison to the increasingly visible grass-roots efforts, the role of the Chinese government seems relatively small at this stage. Government input—such as the potential stewardship of the MOST in directing a big-question approach or long-term funding—remains important; the scientists who were interviewed expend a great deal of effort to attract governmental participation. Yet, China’s experience highlights that the key to comprehending regional scientific capacity lies not so much in what the government can do, but rather in what is taking place in laboratories. It is important to remember that Chinese iGEM victories, collaborative synthetic-biology projects and ethical discussions all took place before the government became involved.
Thus, to appreciate fully the dynamics of an emerging science, it might be necessary to focus on what is formulated from the bottom up.The experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary researchThe experience of China in synthetic biology demonstrates the power of grass-roots, cross-border engagement to promote contemporary research. More specifically, it is a result of the commitment of Chinese scientists to incorporating national and international resources, actors and social concerns. For practical reasons, the national organization of research, such as through the big-question approach, might still have an important role. However, synthetic biology might be not only a mosaic of national agendas, but also shaped by transnational activities and scientific resources. What Chinese scientists will collectively achieve remains to be seen. Yet, the emergence of synthetic biology in China might be indicative of a new paradigm for how research practices can be introduced, normalized and regulated.  相似文献   

13.
14.
Paige Brown 《EMBO reports》2012,13(11):964-967
Many scientists blame the media for sensationalising scientific findings, but new research suggests that things can go awry at all levels, from the scientific report to the press officer to the journalist.Everything gives you cancer, at least if you believe what you read in the news or see on TV. Fortunately, everything also cures cancer, from red wine to silver nanoparticles. Of course the truth lies somewhere in between, and scientists might point out that these claims are at worst dangerous sensationalism and at best misjudged journalism. These kinds of media story, which inflate the risks and benefits of research, have led to a mistrust of the press among some scientists. But are journalists solely at fault when science reporting goes wrong, as many scientists believe [1]? New research suggests it is time to lay to rest the myth that the press alone is to blame. The truth is far more nuanced and science reporting can go wrong at many stages, from the researchers to the press officers to the diverse producers of news.Many science communication researchers suggest that science in the media is not as distorted as scientists believe, although they do admit that science reporting tends to under-represent risks and over-emphasize benefits [2]. “I think there is a lot less of this [misreported science] than some scientists presume. I actually think that there is a bit of laziness in the narrative around science and the media,” said Fiona Fox, Director of the UK Science Media Centre (London, UK), an independent press office that serves as a liaison between scientists and journalists. “My bottom line is that, certainly in the UK, a vast majority of journalists report science accurately in a measured way, and it''s certainly not a terrible story. Having said that, lots of things do go wrong for a number of reasons.”Fox said that the centre sees everything from fantastic press releases to those that completely misrepresent and sensationalize scientific findings. They have applauded news stories that beautifully reported the caveats and limitations of a particular scientific study, but they have also cringed as a radio talk show pitted a massive and influential body of research against a single non-scientist sceptic.“You ask, is it the press releases, is it the universities, is it the journalists? The truth is that it''s all three,” Fox said. “But even admitting that is admitting more complexity. So anyone who says that scientists and university press officers deliver perfectly accurate science and the media misrepresent it […] that really is not the whole story.”Scientists and scientific institutions today invest more time and effort into communicating with the media than they did a decade ago, especially given the modern emphasis on communicating scientific results to the public [3]. Today, there are considerable pressures on scientists to reach out and even ‘sell their work'' to public relations officers and journalists. “For every story that a journalist has hyped and sensationalized, there will be another example of that coming directly from a press release that we [scientists] hyped and sensationalized,” Fox said. 
“And for every time that that was a science press officer, there will also be a science press officer who will tell you, ‘I did a much more nuanced press release, but the academic wanted me to over claim for it''.”Although science public relations has helped to put scientific issues on the public agenda, there are also dangers inherent in the process of translation from original research to press release to media story. Previous research in the area of science communication has focused on conflicting scientific and media values, and the effects of science media on audiences. However, studies have raised awareness of the role of press releases in distorting information from the lab bench to published news [4].In a 2011 study of genetic research claims made in press releases and mainstream print media, science communication researcher Jean Brechman, who works at the US advertising and marketing research firm Gallup & Robinson, found evidence that scientific knowledge gets distorted as it is “filtered and translated for mass communication” with “slippages and inconsistencies” occurring along the way, such that the end message does not accurately represent the original science [4]. Although Brechman and colleagues found a concerning point of distortion in the transition between press release and news article, they also observed a misrepresentation of the original science in a significant portion of the press releases themselves.In a previous study, Brechman and his colleagues had also concluded that “errors commonly attributed to science journalists, such as lack of qualifying details and use of oversimplified language, originate in press releases.” Even more worrisome, as Fox told a Nature commentary author in 2009, public relations departments are increasingly filling the need of the media for quick content [5].Fox believes that a common characteristic of misrepresented science in press releases and the media is the over-claiming of preliminary studies. As such, the growing prevalence of rapid, short-format publications that publicize early results might be exacerbating the problem. Research has also revealed that over-emphasis on the beneficial effects of experimental medical treatments seen in press releases and news coverage, often called ‘spin'', can stem from bias in the abstract of the original scientific article itself [6]. Such findings warrant a closer examination of the language used in scientific articles and abstracts, as the wording and ‘spin'' of conclusions drawn by researchers in their peer-reviewed publications might have significant impacts on subsequent media coverage.Of course, some stories about scientific discoveries are just not easy to tell owing to their complexity. They are “messy, complicated, open to interpretation and ripe for misreporting,” as Fox wrote in a post on her blog On Science and the Media (fionafox.blogspot.com). They do not fit the single-page blog post or the short press release. Some scientific experiments and the peer-reviewed articles and media stories that flow from them are inherently full of caveats, contexts and conflicting results and cannot be communicated in a short format [7].In a 2012 issue of Perspectives on Psychological Science, Marco Bertamini at the University of Liverpool (UK) and Marcus R. 
Munafo at the University of Bristol (UK) suggested that a shift toward “bite-size” publications in areas of science such as psychology might be promoting more single-study models of research, fewer efforts to replicate initial findings, curtailed detailing of previous relevant work and bias toward “false alarm” or false-positive results [7]. The authors pointed out that larger, multi-experiment studies are typically published in longer papers with larger sample sizes and tend to be more accurate. They also suggested that this culture of brief, single-study reports based on small data sets will lead to the contamination of the scientific literature with false-positive findings. Unfortunately, false science far more easily enters the literature than leaves it [8].One famous example is that of Andrew Wakefield, whose 1998 publication in The Lancet claimed to link autism with the combined measles, mumps and rubella (MMR) vaccination. It took years of work by many scientists, and the aid of an exposé by British investigative reporter Brian Deer, to finally force retraction of the paper. However, significant damage had already been done and many parents continue to avoid immunizing their children out of fear. Deer claims that scientific journals were a large part of the problem: “[D]uring the many years in which I investigated the MMR vaccine controversy, the worst and most inexcusable reporting on the subject, apart from the original Wakefield claims in the Lancet, was published in Nature and republished in Scientific American,” he said. “There is an enormous amount of hypocrisy among those who accuse the media of misreporting science.”What factors are promoting this shift to bite-size science? One is certainly the increasing pressure and competition to publish many papers in high-impact journals, which prefer short articles with new, ground-breaking findings.“Bibliometrics is playing a larger role in academia in deciding who gets a job and who gets promoted,” Bertamini said. “In general, if things are measured by citations, there is pressure to publish as much and as often as possible, and also to focus on what is surprising; thus, we can see how this may lead to an inflation in the number of papers but also an increase in publication bias.”Bertamini points to the real possibility that measured effects emerging from a group of small samples can be much larger than the real effect in the total population. “This variability is bad enough, but it is even worse when you consider that what is more likely to be written up and accepted for publication are exactly the larger differences,” he explained.Alongside the endless pressure to publish, the nature of the peer-reviewed publication process itself prioritizes exciting and statistically impressive results. Fluke scientific discoveries and surprising results are often considered newsworthy, even if they end up being false-positives. The bite-size article aggravates this problem in what Bertamini fears is a growing similarity between academic writing and media reporting: “The general media, including blogs and newspapers, will of course focus on what is curious, funny, controversial, and so on. Academic papers must not do the same, and the quality control system is there to prevent that.”The real danger is that, with more than one million scientific papers published every year, journalists can tend to rely on only a few influential journals such as Science and Nature for science news [3]. 
Although the influence and reliability of these prestigious journals is well established, the risk that journalists and other media producers might be propagating the exciting yet preliminary results published in their pages is undeniable.Fox has personal experience of the consequences of hype surrounding surprising but preliminary science. Her sister has chronic fatigue syndrome (CFS), a debilitating medical condition with no known test or cure. When Science published an article in 2009 linking CFS with a viral agent, Fox was naturally both curious and sceptical [9]. “I thought even if I knew that this was an incredibly significant finding, the fact that nobody had ever found a biological link before also meant that it would have to be replicated before patients could get excited,” Fox explained. “And of course what happened was all the UK journalists were desperate to splash it on the front page because it was so surprising and so significant and could completely revolutionize the approach to CFS, the treatment and potential cure.”Fox observed that while some journalists placed the caveats of the study deep within their stories, others left them out completely. “I gather in the USA it was massive, it was front page news and patients were going online to try and find a test for this particular virus. But in the end, nobody could replicate it, literally nobody. A Dutch group tried, Imperial College London, lots of groups, but nobody could replicate it. And in the end, the paper has been withdrawn from Science.”For Fox, the fact that the paper was withdrawn, incidentally due to a finding of contamination in the samples, was less interesting than the way that the paper was reported by journalists. “We would want any journal press officer to literally in the first paragraph be highlighting the fact that this was such a surprising result that it shouldn''t be splashed on the front page,” she said. Of course to the journalist, waiting for the study to be replicated is anathema in a culture that values exciting and new findings. “To the scientific community, the fact that it is surprising and new means that we should calm down and wait until it is proved,” Fox warned.So, the media must also take its share of the blame when it comes to distorting science news. Indeed, research analysing science coverage in the media has shown that stories tend to exaggerate preliminary findings, use sensationalist terms, avoid complex issues, fail to mention financial conflicts of interest, ignore statistical limitations and transform inherent uncertainties into controversy [3,10].One concerning development within journalism is the ‘balanced treatment'' of controversial science, also called ‘false balance'' by many science communicators. This balanced treatment has helped supporters of pseudoscientific notions gain equal ground with scientific experts in media stories on issues such as climate change and biotechnology [11].“Almost every time the issue of creationism or intelligent design comes up, many newspapers and other media feel that they need to present ‘both sides'', even though one is clearly nonsensical, and indeed harmful to public education,” commented Massimo Pigliucci, author of Nonsense on Stilts: How to Tell Science from Bunk [12].Fox also criticizes false balance on issues such as global climate change. “On that one you can''t blame the scientific community, you can''t blame science press officers,” she said. “That is a real clashing of values. 
One of the values that most journalists have bred into them is about balance and impartiality, balancing the views of one person with an opponent when it''s controversial. So on issues like climate change, where there is a big controversy, their instinct as a journalist will be to make sure that if they have a climate scientist on the radio or on TV or quoted in the newspaper, they pick up the phone and make sure that they have a climate skeptic.” However, balanced viewpoints should not threaten years of rigorous scientific research embodied in a peer-reviewed publication. “We are not saying generally that we [scientists] want special treatment from journalists,” Fox said, “but we are saying that this whole principle of balance, which applies quite well in politics, doesn''t cross over to science…”Bertamini believes the situation could be made worse if publication standards are relaxed in favour of promoting a more public and open review process. “If today you were to research the issue of human contribution to global warming you would find a consensus in the scientific literature. Yet you would find no such consensus in the general media. In part this is due to the existence of powerful and well-funded lobbies that fill the media with unfounded skepticism. Now imagine if these lobbies had more access to publish their views in the scientific literature, maybe in the form of post publication feedback. This would be a dangerous consequence of blurring the line that separates scientific writing and the broader media.”In an age in which the way science is presented in the news can have significant impacts for audiences, especially when it comes to health news, what can science communicators and journalists do to keep audiences reading without having to distort, hype, trivialize, dramatize or otherwise misrepresent science?Pigliucci believes that many different sources—press releases, blogs, newspapers and investigative science journalism pieces—can cross-check reported science and challenge its accuracy, if necessary. “There are examples of bloggers pointing out technical problems with published scientific papers,” Pigliucci said. “Unfortunately, as we all know, the game can be played the other way around too, with plenty of bloggers, ‘twitterers'' and others actually obfuscating and muddling things even more.” Pigliucci hopes to see a cultural change take place in science reporting, one that emphasizes “more reflective shouting, less shouting of talking points,” he said.Fox believes that journalists still need to cover scientific developments more responsibly, especially given that scientists are increasingly reaching out to press officers and the public. Journalists can inform, intrigue and entertain whilst maintaining accurate representations of the original science, but need to understand that preliminary results must be replicated and validated before being splashed on the front page. They should also strive to interview experts who do not have financial ties or competing interests in the research, and they should put scientific stories in the context of a broader process of nonlinear discovery. According to Pigliucci, journalists can and should be educating themselves on the research process and the science of logical conclusion-making, giving themselves the tools to provide critical and investigative coverage when needed. 
At the same time, scientists should undertake proper media training so that they are comfortable communicating their work to journalists or press officers.“I don''t think there is any fundamental flaw in how we communicate science, but there is a systemic flaw in the sense that we simply do not educate people about logical fallacies and cognitive biases,” Pigliucci said, advising that scientists and communicators alike should be intimately familiar with the subjects of philosophy and psychology. “As for bunk science, it has always been with us, and it probably always will be, because human beings are naturally prone to all sorts of biases and fallacious reasoning. As Carl Sagan once put it, science (and reason) is like a candle in the dark. It needs constant protection and a lot of thankless work to keep it alive.”  相似文献   

15.
Wolinsky H 《EMBO reports》2010,11(11):830-833
Sympatric speciation—the rise of new species in the absence of geographical barriers—remains a puzzle for evolutionary biologists. Though the evidence for sympatric speciation itself is mounting, an underlying genetic explanation remains elusive.For centuries, the greatest puzzle in biology was how to account for the sheer variety of life. In his 1859 landmark book, On the Origin of Species, Charles Darwin (1809–1882) finally supplied an answer: his grand theory of evolution explained how the process of natural selection, acting on the substrate of genetic mutations, could gradually produce new organisms that are better adapted to their environment. It is easy to see how adaptation to a given environment can differentiate organisms that are geographically separated; different environmental conditions exert different selective pressures on organisms and, over time, the selection of mutations creates different species—a process that is known as allopatric speciation.It is more difficult to explain how new and different species can arise within the same environment. Although Darwin never used the term sympatric speciation for this process, he did describe the formation of new species in the absence of geographical separation. “I can bring a considerable catalogue of facts,” he argued, “showing that within the same area, varieties of the same animal can long remain distinct, from haunting different stations, from breeding at slightly different seasons, or from varieties of the same kind preferring to pair together” (Darwin, 1859).It is more difficult to explain how new and different species can arise within the same environmentIn the 1920s and 1930s, however, allopatric speciation and the role of geographical isolation became the focus of speciation research. Among those leading the charge was Ernst Mayr (1904–2005), a young evolutionary biologist, who would go on to influence generations of biologists with his later work in the field. William Baker, head of palm research at the Royal Botanic Gardens, Kew in Richmond, UK, described Mayr as “one of the key figures to crush sympatric speciation.” Frank Sulloway, a Darwin Scholar at the Institute of Personality and Social Research at the University of California, Berkeley, USA, similarly asserted that Mayr''s scepticism about sympatry was central to his career.The debate about sympatric and allopatric speciation has livened up since Mayr''s death…Since Mayr''s death in 2005, however, several publications have challenged the notion that sympatric speciation is a rare exception to the rule of allopatry. These papers describe examples of both plants and animals that have undergone speciation in the same location, with no apparent geographical barriers to explain their separation. In these instances, a single ancestral population has diverged to the extent that the two new species cannot produce viable offspring, despite the fact that their ranges overlap. The debate about sympatric and allopatric speciation has livened up since Mayr''s death, as Mayr''s influence over the field has waned and as new tools and technologies in molecular biology have become available.Sulloway, who studied with Mayr at Harvard University, in the late 1960s and early 1970s, notes that Mayr''s background in natural history and years of fieldwork in New Guinea and the Solomon Islands contributed to his perception that the bulk of the data supported allopatry. “Ernst''s early career was in many ways built around that argument. 
It wasn’t the only important idea he had, but he was one of the strong proponents of it. When an intellectual stance exists where most people seem to have gotten it wrong, there is a tendency to sort of lay down the law,” Sulloway said.
Sulloway also explained that Mayr “felt that botanists had basically led Darwin astray because there is so much evidence of polyploidy in plants and Darwin turned in large part to the study of botany and geographical distribution in drawing evidence in The Origin.” Indeed, polyploidization is common in plants and can lead to ‘instantaneous’ speciation without geographical barriers.
In February 2006, the journal Nature simultaneously published two papers that described sympatric speciation in animals and plants, reopening the debate. Axel Meyer, a zoologist and evolutionary biologist at the University of Konstanz, Germany, demonstrated with his colleagues that sympatric speciation has occurred in cichlid fish in Lake Apoyo, Nicaragua (Barluenga et al, 2006). The researchers claimed that the ancestral fish only seeded the crater lake once; from this, new species have evolved that are distinct and reproductively isolated. Meyer’s paper was broadly supported, even by critics of sympatric speciation, perhaps because Mayr himself endorsed sympatric speciation among the cichlids in his 2001 book What Evolution Is. “[Mayr] told me that in the case of our crater lake cichlids, the onus of showing that it’s not sympatric speciation lies with the people who strongly believe in only allopatric speciation,” Meyer said.
…several scientists involved in the debate think that molecular biology could help to eventually resolve the issue
The other paper in Nature—by Vincent Savolainen, a molecular systematist at Imperial College, London, UK, and colleagues—described the sympatric speciation of Howea palms on Lord Howe Island (Fig 1), a minute Pacific island paradise (Savolainen et al, 2006a). Savolainen’s research had originally focused on plant diversity in the gesneriad family—the best known example of which is the African violet—while he was in Brazil for the Geneva Botanical Garden, Switzerland. However, he realized that he would never be able to prove the occurrence of sympatry within a continent. “It might happen on a continent,” he explained, “but people will always argue that maybe they were separated and got together after. […] I had to go to an isolated piece of the world and that’s why I started to look at islands.”
Figure 1: Lord Howe Island. Photo: Ian Hutton.
He eventually heard about Lord Howe Island, which is situated just off the east coast of Australia, has an area of 56 km² and is known for its abundance of endemic palms (Sidebar A). The palms, Savolainen said, were an ideal focus for sympatric research: “Palms are not the most diverse group of plants in the world, so we could make a phylogeny of all the related species of palms in the Indian Ocean, southeast Asia and so on.”
…the next challenges will be to determine which genes are responsible for speciation, and whether sympatric speciation is common

Sidebar A | Research in paradise

Alexander Papadopulos is no Tarzan of the Apes, but he has spent a couple of months over the past two years aloft in palm trees hugging rugged mountainsides on Lord Howe Island, a Pacific island paradise and UNESCO World Heritage site.
Papadopulos—who is finishing his doctorate at Imperial College London, UK—said the views are breathtaking, but the work is hard and a bit treacherous as he moves from branch to branch. “At times, it can be quite hairy. Often you’re looking over a 600-, 700-metre drop without a huge amount to hold onto,” he said. “There’s such dense vegetation on most of the steep parts of the island. You’re actually climbing between trees. There are times when you’re completely unsupported.”
Papadopulos typically spends around 10 hours a day in the field, carrying a backpack and utility belt with a digital camera, a trowel to collect soil samples, a first-aid kit, a field notebook, food and water, specimen bags, tags to label specimens, a GPS device and more. After several days in the field, he spends a day working in a well-equipped field lab and sleeping in the quarters that were built by the Lord Howe governing board to accommodate the scientists who visit the island on various projects. Papadopulos is studying Lord Howe’s flora, which includes more than 200 plant species, about half of which are indigenous.
Vincent Savolainen said it takes a lot of planning to get materials to Lord Howe: the two-hour flight from Sydney is on a small plane, with only about a dozen passengers on board and limited space for equipment. Extra gear—from gardening equipment to silica gel and wood for boxes in which to dry wet specimens—arrives via other flights or by boat, to serve the needs of the various scientists on the team, including botanists, evolutionary biologists and ecologists.
Savolainen praised the well-stocked researcher station for visiting scientists. It is run by the island board and situated near the palm nursery. It includes one room for the lab and another with bunks. “There is electricity and even email,” he said. Papadopulos said only in the past year has the internet service been adequate to accommodate video calls back home.
Ian Hutton, a Lord Howe-based naturalist and author, who has lived on the island since 1980, said the island authorities set limits on not only the number of residents—350—but also the number of visitors at one time—400—as well as banning cats, to protect birds such as the flightless wood hen. He praised the Imperial/Kew group: “They’re world leaders in their field. And they’re what I call ‘Gentlemen Botanists’. They’re very nice people, they engage the locals here. Sometimes researchers might come here, and they’re just interested in what they’re doing and they don’t want to share what they’re doing. Not so with these people.” Savolainen said his research helps the locals: “The genetics that we do on the island are not only useful to understand big questions about evolution, but we also always provide feedback to help in its conservation efforts.”
Yet, in Savolainen’s opinion, Mayr’s influential views made it difficult to obtain research funding. “Mayr was a powerful figure and he dismissed sympatric speciation in textbooks. People were not too keen to put money on this,” Savolainen explained. Eventually, the Leverhulme Trust (London, UK) gave Savolainen and Baker £70,000 between 2003 and 2005 to get the research moving.
“It was enough to do the basic genetics and to send a research assistant for six months to the island to do a lot of natural history work,” Savolainen said. Once the initial results had been processed, the project received a further £337,000 from the British Natural Environment Research Council in 2008, and €2.5 million from the European Research Council in 2009.
From the data collected on Lord Howe Island, Savolainen and his team constructed a dated phylogenetic tree showing that the two endemic species of the palm Howea (Arecaceae; Fig 2) are sister taxa. From their tree, the researchers were able to establish that the two species—one with a thatch of leaves and one with curly leaves—diverged long after the island was formed 6.9 million years ago. Even where they are found in close proximity, the two species cannot interbreed as they flower at different times.
Figure 2: The two species of Howea palm. (A) Howea fosteriana (Kentia palm). (B) Howea belmoreana. Photos: William Baker, Royal Botanic Gardens, Kew, Richmond, UK.
According to the researchers, the palm speciation probably occurred owing to the different soil types in which the plants grow. Baker explained that there are two soil types on Lord Howe—the older volcanic soil and the younger calcareous soils. The Kentia palm grows in both, whereas the curly variety is restricted to the volcanic soil. These soil types are closely intercalated—fingers and lenses of calcareous soils intrude into the volcanic soils in lowland Lord Howe Island. “You can step over a geological boundary and the palms in the forest can change completely, but they remain extremely close to each other,” Baker said. “What’s more, the palms are wind-pollinated, producing vast amounts of pollen that blows all over the place during the flowering season—people even get pollen allergies there because there is so much of the stuff.” According to Savolainen, that the two species have different flowering times is a “way of having isolation so that they don’t reproduce with each other […] this is a mechanism that evolved to allow other species to diverge in situ on a few square kilometres.”
According to Baker, a causative link between the different soils and the altered flowering times has not been demonstrated, “but we have suggested that at the time of speciation, perhaps when calcareous soils first appeared, an environmental effect may have altered the flowering time of palms colonising the new soil, potentially causing non-random mating and kicking off speciation. This is just a hypothesis—we need to do a lot more fieldwork to get to the bottom of this,” he said. What is clear is that this is not allopatric speciation, as “the micro-scale differentiation in geology and soil type cannot create geographical isolation”, said Baker.
…although molecular data will add to the debate, it will not settle it alone
The results of the palm research caused something of a splash in evolutionary biology, although the study was not without its critics. Tod Stuessy, Chair of the Department of Systematic and Evolutionary Botany at the University of Vienna, Austria, has dealt with similar issues of divergence on Chile’s Juan Fernández Islands—also known as the Robinson Crusoe Islands—in the South Pacific. From his research, he points out that on old islands, large ecological areas that once separated species—and caused allopatric speciation—could have since disappeared, diluting the argument for sympatry.
“There are a lot of cases [in the Juan Fernández Islands] where you have closely related species occurring in the same place on an island, even in the same valley. We never considered that they had sympatric origins because we were always impressed by how much the island had been modified through time,” Stuessy said. “What [the Lord Howe researchers] really didn''t consider was that Lord Howe Island could have changed a lot over time since the origins of the species in question.” It has also been argued that one of the palm species on Lord Howe Island might have evolved allopatrically on a now-sunken island in the same oceanic region.In their response to a letter from Stuessy, Savolainen and colleagues argued that erosion on the island has been mainly coastal and equal from all sides. “Consequently, Quaternary calcarenite deposits, which created divergent ecological selection pressures conducive to Howea species divergence, have formed evenly around the island; these are so closely intercalated with volcanic rocks that allopatric speciation due to ecogeographic isolation, as Stuessy proposes, is unrealistic” (Savolainen et al, 2006b). Their rebuttal has found support in the field. Evolutionary biologist Loren Rieseberg at the University of British Columbia in Vancouver, Canada, said: “Basically, you have two sister species found on a very small island in the middle of the ocean. It''s hard to see how one could argue anything other than they evolved there. To me, it would be hard to come up with a better case.”Whatever the reality, several scientists involved in the debate think that molecular biology could help to eventually resolve the issue. Savolainen said that the next challenges will be to determine which genes are responsible for speciation, and whether sympatric speciation is common. New sequencing techniques should enable the team to obtain a complete genomic sequence for the palms. Savolainen said that next-generation sequencing is “a total revolution.” By using sequencing, he explained that the team, “want to basically dissect exactly what genes are involved and what has happened […] Is it very special on Lord Howe and for this palm, or is [sympatric speciation] a more general phenomenon? This is a big question now. I think now we''ve found places like Lord Howe and [have] tools like the next-gen sequencing, we can actually get the answer.”Determining whether sympatric speciation occurs in animal species will prove equally challenging, according to Meyer. His own lab, among others, is already looking for ‘speciation genes'', but this remains a tricky challenge. “Genetic models […] argue that two traits (one for ecological specialisation and another for mate choice, based on those ecological differences) need to become tightly linked on one chromosome (so that they don''t get separated, often by segregation or crossing over). The problem is that the genetic basis for most ecologically relevant traits are not known, so it would be very hard to look for them,” Meyer explained. “But, that is about to change […] because of next-generation sequencing and genomics more generally.”Many researchers who knew Mayr personally think he would have enjoyed the challenge to his viewsOthers are more cautious. “In some situations, such as on isolated oceanic islands, or in crater lakes, molecular phylogenetic information can provide strong evidence of sympatric speciation. 
It also is possible, in theory, to use molecular data to estimate the timing of gene flow, which could help settle the debate,” Rieseberg said. However, he cautioned that although molecular data will add to the debate, it will not settle it alone. “We will still need information from historical biogeography, natural history, phylogeny, and theory, etc. to move things forward.”

Many researchers who knew Mayr personally think he would have enjoyed the challenge to his views. “I can only imagine that it would’ve been great fun to engage directly with him [on sympatry on Lord Howe],” Baker said. “It’s a shame that he wasn’t alive to comment on [our paper].” In fact, Mayr was not really as opposed to sympatric speciation as some think. “If one is of the opinion that Mayr opposed all forms of sympatric speciation, well then this looks like a big swing back the other way,” Sulloway commented. “But if one reads Mayr carefully, one sees that he was actually interested in potential exceptions and, as best he could, chronicled which ones he thought were the best candidates.”

Mayr’s opinions aside, many biologists today have stronger feelings against sympatric speciation than he did himself in his later years, Meyer added. “I think that Ernst was more open to the idea of sympatric speciation later in his life. He got ‘softer’ on this during the last two of his ten decades of life that I knew him. I was close to him personally and I think that he was much less dogmatic than he is often made out to be […] So, I don’t think that he is spinning in his grave.” Mayr once told Sulloway that he liked to take strong stances, precisely so that other researchers would be motivated to try to prove him wrong. “If they eventually succeeded in doing so, Mayr felt that science was all the better for it.”

Alex Papadopulos and Ian Hutton doing fieldwork on a very precarious ridge on top of Mt. Gower. Photo: William Baker, Royal Botanic Gardens, Kew, Richmond, UK.

16.
The complete sequencing of the human genome introduced a new knowledge base for decoding information structured in DNA sequence variation. My research is predicated on the supposition that the genome is the most sophisticated knowledge system known, as evidenced by the exquisite information it encodes on biochemical pathways and molecular processes underlying the biology of health and disease. Also, as a living legacy of human origins, migrations, adaptations, and identity, the genome communicates through the complexity of sequence variation expressed in population diversity. As a biomedical research scientist and academician, a question I am often asked is: “How is it that a black woman like you went to the University of Michigan for a PhD in Human Genetics?” As the ASCB 2012 E. E. Just Lecturer, I am honored and privileged to respond to this question in this essay on the science of the human genome and my career perspectives.
“Knowledge is power, but wisdom is supreme.”

17.
18.
Following the publication of the Origin of Species in 1859, many naturalists adopted the idea that living organisms were the historical outcome of gradual transformation of lifeless matter. These views soon merged with the developments of biochemistry and cell biology and led to proposals in which the origin of protoplasm was equated with the origin of life. The heterotrophic origin of life proposed by Oparin and Haldane in the 1920s was part of this tradition, which Oparin enriched by transforming the discussion of the emergence of the first cells into a workable multidisciplinary research program.

On the other hand, the scientific trend toward understanding biological phenomena at the molecular level led authors like Troland, Muller, and others to propose that single molecules or viruses represented primordial living systems. The contrast between these opposing views on the origin of life represents not only contrasting views of the nature of life itself, but also major ideological discussions that reached a surprising intensity in the years following Stanley Miller’s seminal result, which showed the ease with which organic compounds of biochemical significance could be synthesized under putative primitive conditions. In fact, during the years following the Miller experiment, attempts to understand the origin of life were strongly influenced by research on DNA replication and protein biosynthesis and, in socio-political terms, by the atmosphere created by Cold War tensions.

The catalytic versatility of RNA molecules clearly merits a critical reappraisal of Muller’s viewpoint. However, the discovery of ribozymes does not imply that autocatalytic nucleic acid molecules ready to be used as primordial genes were floating in the primitive oceans, or that the RNA world emerged completely assembled from simple precursors present in the prebiotic soup. The evidence supporting the presence of a wide range of organic molecules on the primitive Earth, including membrane-forming compounds, suggests that the evolution of membrane-bounded molecular systems preceded cellular life on our planet, and that life is the evolutionary outcome of a process, not of a single, fortuitous event.

It is generally assumed that early philosophers and naturalists appealed to spontaneous generation to explain the origin of life, but in fact the possibility of life emerging directly from nonliving matter was seen at first as a nonsexual reproductive mechanism. This changed with the transformist views developed by Erasmus Darwin, Georges Louis Leclerc de Buffon and, most importantly, by Jean-Baptiste de Lamarck, all of whom invoked spontaneous generation as the mechanism that led to the emergence of life, and not just its reproduction. “Nature, by means of heat, light, electricity and moisture”, wrote Lamarck in 1809, “forms direct or spontaneous generation at that extremity of each kingdom of living bodies, where the simplest of these bodies are found”.

Like his predecessors, Charles Darwin surmised that plants and animals arose naturally from some primordial nonliving matter. As early as 1837 he wrote in his Second Notebook that “the intimate relation of Life with laws of chemical combination, & the universality of latter render spontaneous generation not improbable.” However, Darwin included few statements about the origin of life in his books.
He avoided the issue in the Origin of Species, in which he only wrote “… I should infer from analogy that probably all organic beings which have ever lived on this Earth have descended from some one primordial form, into which life was first breathed” (Peretó et al. 2009). Darwin added few remarks on the origin of life in his books, and his reluctance surprised many of his friends and followers. In his monograph on the radiolaria, Haeckel wrote “The chief defect of the Darwinian theory is that it throws no light on the origin of the primitive organism—probably a simple cell—from which all the others have descended. When Darwin assumes a special creative act for this first species, he is not consistent, and, I think, not quite sincere …” (Haeckel 1862).

Twelve years after the first publication of the Origin of Species, Darwin wrote the now famous letter to his friend Hooker in which the idea of a “warm little pond” was included. Mailed on February 1st, 1871, it stated that “It is often said that all the conditions for the first production of a living organism are now present, which could ever have been present. But if (and Oh! what a big if!) we could conceive in some warm little pond with all sorts of ammonia and phosphoric salts—light, heat, electricity &c. present, that a proteine compound was chemically formed, ready to undergo still more complex changes, at the present day such matter wd be instantly devoured, or absorbed, which would not have been the case before living creatures were formed.” Although Darwin refrained from any further public statements on how life may have appeared, his views established the framework that would lead to a number of attempts to explain the origin of life by introducing principles of historical explanation (Peretó et al. 2009). Here I will describe this history, and how it is guiding current research into the question of life’s origins.

19.
G Wang, Y Rong, H Chen, C Pearson, C Du, R Simha, C Zeng. PLoS ONE 2012, 7(7):e40330
A common problem in molecular biology is to use experimental data, such as microarray data, to infer knowledge about the structure of interactions between important molecules in subsystems of the cell. By approximating the state of each molecule as “on” or “off”, it becomes possible to simplify the problem and exploit the tools of Boolean analysis for such inference. Amongst Boolean techniques, the process-driven approach has shown promise in identifying putative network structures, as well as stability and modularity properties. This paper examines the process-driven approach more formally, and makes four contributions about the computational complexity of the inference problem, under the “dominant inhibition” assumption of molecular interactions. The first is a proof that the feasibility problem (does there exist a network that explains the data?) can be solved in polynomial time. Second, the minimality problem (what is the smallest network that explains the data?) is shown to be NP-hard, and therefore unlikely to result in a polynomial-time algorithm. Third, a simple polynomial-time heuristic is shown to produce near-minimal solutions, as demonstrated by simulation. Fourth, the theoretical framework explains how multiplicity (the number of network solutions to realize a given biological process), which can take exponential time to compute, can instead be accurately estimated by a fast, polynomial-time heuristic.
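To make the inference problem concrete, the toy sketch below (in Python) shows one way a per-target feasibility check could look for binarized data under a dominant-inhibition update rule: any active inhibitor forces the target off, otherwise any active activator turns it on, otherwise the target holds its state. Everything here (the exact update rule, the data format, the gene names and the brute-force search over small regulator sets) is an illustrative assumption; it is not the authors' polynomial-time algorithm, and the exhaustive search shown scales exponentially with the allowed number of regulators.

# Toy sketch, not the paper's algorithm: per-target feasibility check for a
# Boolean network under an assumed "dominant inhibition" update rule.
from itertools import product

def next_state(target_prev, activator_on, inhibitor_on):
    """Dominant-inhibition update for a single target node."""
    if inhibitor_on:
        return 0
    if activator_on:
        return 1
    return target_prev

def explains(series, target, activators, inhibitors):
    """True if the candidate regulator sets reproduce every observed transition."""
    for prev, nxt in zip(series, series[1:]):
        act_on = any(prev[g] for g in activators)
        inh_on = any(prev[g] for g in inhibitors)
        if next_state(prev[target], act_on, inh_on) != nxt[target]:
            return False
    return True

def feasible(series, target, genes, max_regulators=2):
    """Brute-force search over small regulator sets; returns one consistent
    (activators, inhibitors) pair or None. Exponential in max_regulators,
    so purely for demonstration."""
    others = [g for g in genes if g != target]
    for k in range(max_regulators + 1):
        for combo in product(others, repeat=k):
            regs = list(dict.fromkeys(combo))  # drop duplicates, keep order
            for labels in product(("act", "inh"), repeat=len(regs)):
                acts = [g for g, lab in zip(regs, labels) if lab == "act"]
                inhs = [g for g, lab in zip(regs, labels) if lab == "inh"]
                if explains(series, target, acts, inhs):
                    return acts, inhs
    return None

# Usage with made-up, binarized expression data for three genes:
series = [
    {"A": 1, "B": 0, "C": 0},
    {"A": 1, "B": 0, "C": 1},
    {"A": 1, "B": 1, "C": 1},
    {"A": 0, "B": 1, "C": 0},
]
print(feasible(series, "C", ["A", "B", "C"]))  # -> (['A'], ['B'])

For this made-up series, the search reports A as an activator and B as a dominant inhibitor of C; a real implementation would replace the exhaustive search with the polynomial-time feasibility test and near-minimal heuristic described in the abstract.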

20.
Does the Golgi self-organize or does it form around an instructive template? Evidence on both sides is piling up, but a definitive conclusion is proving elusive.

In the battle to define the Golgi, discussions easily spiral into what can appear like nitpicking. In a contentious poster session, an entire worldview rests on whether you think a particular mutant is arrested with vesicles that are close to but distinct from the ER or almost budded from but still attached to the ER.

Sometimes obscured by these details are the larger issues. This debate “gets to the fundamental issue of how you think of the Golgi,” says Ben Glick of the University of Chicago (Chicago, IL). “The dogma has been that you need a template to build an organelle. But in the secretory system it’s possible in principle that you could get de novo organization of structure. That’s the issue that stirs people emotionally and intellectually.”

Then there are the collateral issues. There is an ongoing controversy about the nature of forward transport through the Golgi—it may occur via forward movement of small vesicles, or by gradual maturation of one cisterna to form the next. The cisternal maturation model “argues for a Golgi that can be made and consumed,” says Graham Warren (Yale University, New Haven, CT)—a situation that is more difficult to reconcile with Warren’s template-determined universe.

Even more confusing is the situation in mitosis. Accounts vary wildly on how much of the Golgi disappears into the ER during mitosis. The answer would determine to what extent the cell has to rebuild the Golgi after mitosis, and what method it might use to do so.

Several laboratories have made major contributions to address these issues. But none define them so clearly as those of Warren and Jennifer Lippincott-Schwartz (National Institutes of Health, Bethesda, MD). At almost every turn, on almost every issue, it seems that Warren and Lippincott-Schwartz reach opposite conclusions, sometimes based on similar or identical data.

And yet, at least in public, there is a remarkable lack of rancor. “These are not easy experiments for us to do,” says Warren. “It’s all cutting-edge research and we are pushing the technology to the limit. Part of that is that you push your own interpretation.” For her part, Lippincott-Schwartz approaches a lengthy poster-session debate with Warren with something approaching glee. This is not triumphal glee, however. Rather, Lippincott-Schwartz seems to relish the opportunity to exchange ideas, and on this point Warren agrees. “Complacency is the worst thing to have in a field,” he says. The debate “has made all of us think a lot harder.”

