Similar documents
20 similar documents found (search time: 78 ms)
1.
2.
3.
4.
The diffusion of ‘modern’ contraceptives—as a proxy for the spread of low-fertility norms—has long interested researchers wishing to understand global fertility decline. A fundamental question is how local cultural norms and other people’s behaviour influence the probability of contraceptive use, independent of women’s socioeconomic and life-history characteristics. However, few studies have combined data at individual, social network and community levels to simultaneously capture multiple levels of influence. Fewer still have tested if the same predictors matter for different contraceptive types. Here, we use new data from 22 high-fertility communities in Poland to compare predictors of the use of (i) any contraceptives—a proxy for the decision to control fertility—with those of (ii) ‘artificial’ contraceptives—a subset of more culturally taboo methods. We find that the contraceptive behaviour of friends and family is more influential than are women’s own characteristics and that community level characteristics additionally influence contraceptive use. Highly educated neighbours accelerate women’s contraceptive use overall, but not their artificial method use. Highly religious neighbours slow women’s artificial method use, but not their contraceptive use overall. Our results highlight different dimensions of sociocultural influence on contraceptive diffusion and suggest that these may be more influential than are individual characteristics. A comparative multilevel framework is needed to understand these dynamics.
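The ‘comparative multilevel framework’ called for here can be made concrete as a random-intercept logistic regression. The formulation below is only an illustrative sketch—the grouping of predictors into individual, network and community blocks is an assumption for exposition, not the authors’ exact specification:

    \operatorname{logit}\Pr(y_{ij}=1) = \beta_0 + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + \mathbf{n}_{ij}^{\top}\boldsymbol{\gamma} + \mathbf{z}_{j}^{\top}\boldsymbol{\delta} + u_j, \qquad u_j \sim N(0,\sigma_u^2)

where y_ij indicates contraceptive use by woman i in community j, x_ij collects her own socioeconomic and life-history characteristics, n_ij summarizes the behaviour of her friends and family, z_j holds community-level traits (for example, neighbours’ education or religiosity) and u_j absorbs residual between-community variation. Fitting the same model first with ‘any method’ and then with ‘artificial method’ as the outcome is one way to compare the two sets of predictors directly.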

5.
6.
The scientific process requires a critical attitude towards existing hypotheses and obvious explanations. Teaching this mindset to students is both important and challenging.People who read about scientific discoveries might get the misleading impression that scientific research produces a few rare breakthroughs—once or twice per century—and a large body of ‘merely incremental'' studies. In reality, however, breakthrough discoveries are reported on a weekly basis, and one can cite many fields just in biology—brain imaging, non-coding RNAs and stem cell biology, to name a few—that have undergone paradigm shifts within the past decade.The truly surprising thing about discovery is not just that it happens at a regular pace, but that most significant discoveries occurred only after the scientific community had already accepted another explanation. It is not merely the accrual of new data that leads to a breakthrough, but a willingness to acknowledge that a problem that is already ‘solved'' might require an entirely different explanation. In the case of breakthroughs or paradigm shifts, this new explanation might seem far-fetched or nonsensical and not even worthy of serious consideration. It is as if new ideas are sitting right in front of everyone, but in their blind spots so that only those who use their peripheral vision can see them.Scientists do not all share any single method or way of working. Yet they tend to share certain prevalent attitudes: they accept ‘facts'' and ‘obvious'' explanations only provisionally, at arm''s length, as it were; they not only imagine alternatives, but—almost as a reflex—ask themselves what alternative explanations are possible.When teaching students, it is a challenge to convey this critical attitude towards seemingly obvious explanations. In the spring semester of 2009, I offered a seminar entitled The Process of Scientific Discovery to Honours undergraduate students at the University of Illinois-Chicago in the USA. I originally planned to cover aspects of discovery such as the impact of funding agencies, the importance of mentoring and hypothesis-driven as opposed to data-driven research. As the semester progressed, however, my sessions moved towards ‘teaching moments'' drawn from everyday life, which forced the students to look at familiar things in unfamiliar ways. These served as metaphors for certain aspects of the process by which scientists discover new paradigms.For the first seven weeks of the spring semester, the class read Everyday Practice of Science by Frederick Grinnell [1]. During the discussion of the first chapter, one of the students noted that Grinnell referred to a scientist generically as ‘she'' rather than ‘he'' or the neutral ‘he or she''. This use is unusual and made her vaguely uneasy: she wondered whether the author was making a sexist point. Before considering her hypothesis, I asked the class to make a list of assumptions that they took for granted when reading the chapter, together with the possible explanations for the use of ‘she'' in the first chapter, no matter how far-fetched or unlikely they might seem.For example, one might assume that Frederick Grinnell or ‘Fred'' is from a culture similar to our own. How would we interpret his behaviour and outlook if we knew that Fred came from an exotic foreign land? Another assumption is that Fred is male; how would we view the remark if we discover that Frederick is short for Fredericka? We have equally assumed that Fred, as with most humans, wants us to like him. 
Instead, perhaps he is being intentionally provocative in order to get our attention or move us out of our comfort zone. Perhaps he planted ‘she'' as a deliberate example for us to discuss, as he does later in the second chapter, in which he deliberately hides a strange item in plain sight within one of the illustrations in order to make a point about observing anomalies. Perhaps the book was written not by Fred but by a ghost writer? Perhaps the ‘she'' was a typo?The truly surprising thing about discovery is […] that most significant discoveries occurred only after the scientific community had already accepted another explanationLooking for patterns throughout the book, and in Fred''s other writing, might persuade us to discard some of the possible explanations: does ‘she'' appear just once? Does Fred use other unusual or provocative turns of phrase? Does Fred discuss gender bias or sexism explicitly? Has anyone written or complained about him? Of course, one could ask Fred directly what he meant, although without knowing him personally, it would be difficult to know how to interpret his answer or whether to take his remarks at face value. Notwithstanding the answer, the exercise is an important lesson about considering and weighing all possible explanations.Arguably, the most prominent term used in science studies is the notion of a ‘paradigm''. I use this term with reluctance, as it is extraordinarily ambiguous. For example, it could simply refer to a specific type of experimental design: a randomized, placebo-controlled clinical trial could be considered a paradigm. In the context of science studies, however, it most often refers to the idea of large-scale leaps in scientific world views, as promoted by Thomas Kuhn in The Structure of Scientific Revolutions [2]. Kuhn''s notion of a paradigm can lead one to believe—erroneously in my opinion—that paradigm shifts are the opposite of practical, everyday scientific problem-solving.A paradigm is recognized by the set of assumptions that an observer might not realize he or she is making…Instead, I propose here a definition of ‘paradigm'' that emphasizes not the nature of the problem, the type of discovery or the scope of its implications, but rather the psychology of the scientist. A scientist viewing a problem or phenomenon resides within a paradigm when he or she does not notice, and cannot imagine, that an alternative way of looking at things needs to be considered seriously. Importantly, a paradigm is not a viewpoint, model, interpretation, hypothesis or conclusion. A paradigm is not the object that is viewed but the lenses through which it is viewed. A paradigm is recognized by the set of assumptions that an observer might not realize he or she is making, but which imply many automatic expectations and simultaneously prevent the observer from seeing the issue in any other fashion.For example, the teacher–student paradigm feels natural and obvious, yet it is merely set up by habit and tradition. It implies lectures, assignments, grades, ways of addressing the professor and so on, all of which could be done differently, if we had merely thought to consider alternatives. What feels most natural in a paradigm is often the most arbitrary. When we have a birthday, we expect to have a cake with candles, yet there is no natural relationship at all between birthdays, cakes and candles. 
In fact, when something is arbitrary or conventional yet feels entirely natural, that is an important clue that a paradigm is present.It is certainly natural for people to colour their observations according to their expectations: “To a man with a hammer, everything looks like a nail,” as Mark Twain put it. However, this is a pitfall that scientists (and doctors) must try hard to avoid. When I was a first-year medical student at Albert Einstein College of Medicine in New York City, we took a class on how to approach patients. As part of this course, we attended a session in which a psychiatrist interviewed a ‘normal, healthy old person'' in order to understand better the lives and perspectives of the elderly.A man came in, and the psychiatrist began to ask him some benign questions. After about 10 minutes, however, the man began to pause before answering; then his answers became terse; then he said he did not feel well, excused himself and abruptly left the room. The psychiatrist continued to lecture to the students for another half-hour, analysing and interpreting the halting responses in terms of the emotional conflicts that the man was experiencing. ‘Repression'', ‘emotional blocks'', and ‘reaction formation'' were some of the terms bandied about.However, unbeknown to the class, the man had collapsed just on the other side of the classroom door. Two cardiologists happened to be walking by and instantly realized the man was having an acute heart attack. They instituted CPR on the spot, but the man died within a few minutes.The psychiatrist had been told that the man was healthy, and thus interpreted everything that he saw in psychological terms. It never entered his mind that the man might have been dying in front of his eyes. The cardiologists saw a man having a heart attack, and it never entered their minds that the man might have had psychological issues.The movie The Sixth Sense [3] resonated particularly well with my students and served as a platform for discussing attitudes that are helpful for scientific investigation, such as “keep an open mind”, “reality is much stranger than you can imagine” and “our conclusions are always provisional at best”. Best of all, The Sixth Sense demonstrates the tension that exists between different scientific paradigms in a clear and beautiful way. When Haley Joel Osment says, “I see dead people,” does he actually see ghosts? Or is he hallucinating?…when scientists reach a conclusion, it is merely a place to pause and rest for a moment, not a final destinationIt is important to emphasize that these are not merely different viewpoints, or different ways of defining terms. If we argued about which mountain is higher, Everest or K2, we might disagree about which kind of evidence is more reliable, but we would fundamentally agree on the notion of measurement. By contrast, in The Sixth Sense, the same evidence used by one paradigm to support its assertion is used with equal strength by the other paradigm as evidence in its favour. In the movie, Bruce Willis plays a psychologist who assumes that Osment must be a troubled youth. However, the fact that he says he sees ghosts is also evidence in favour of the existence of ghosts, if you do not reject out of hand the possibility of their existence. These two explanations are incommensurate. One cannot simply weigh all of the evidence because each side rejects the type of evidence that the other side accepts, and regards the alternative explanation not merely as wrong but as ridiculous or nonsensical. 
It is in this sense that a paradigm represents a failure of imagination—each side cannot imagine that the other explanation could possibly be true, or at least, plausible enough to warrant serious consideration.The failure of imagination means that each side fails to notice or to seek ‘objective'' evidence that would favour one explanation over the other. For example, during the episodes when Osment saw ghosts, the thermostat in the room fell precipitously and he could see his own breath. This certainly would seem to constitute objective evidence to favour the ghost explanation, and the fact that his mother had noticed that the heating in her apartment was erratic suggests that the temperature change was not simply another imagined symptom. But the mother assumed that the problem was in the heating system and did not even conceive that this might be linked to ghosts—so the ‘objective'' evidence certainly was not compelling or even suggestive on its own.Osment did succeed eventually in convincing his mother that he saw ghosts, and he did it in the same way that any scientist would convince his colleagues: namely, he produced evidence that made perfect sense in the context of one, and only one, explanation. First, he told his mother a secret that he said her dead mother had told him. This secret was about an incident that had occurred before he was born, and presumably she had never spoken of it, so there was no obvious way that he could have learned about it. Next, he told her that the grandmother had heard her say “every day” when standing near her grave. Again, the mother had presumably visited the grave alone and had not told anyone about the visit or about what was said. So, the mother was eventually convinced that Osment must have spoken with the dead grandmother after all. No other explanation seemed to fit all the facts.Is this the end of the story? We, the audience, realize that it is possible that Osment had merely guessed about the incidents, heard them second-hand from another relative or (as with professional psychics) might have retold his anecdotes whilst looking for validation from his mother. The evidence seems compelling only because these alternatives seem even less likely. It is in this same sense that when scientists reach a conclusion, it is merely a place to pause and rest for a moment, not a final destination.Near the end of the course, I gave a pop-quiz asking each student to give a ‘yes'' or ‘no'' answer, plus a short one-sentence explanation, to the following question: Donald Trump seems to be a wealthy businessman. He dresses like one, he has a TV show in which he acts like one, he gives seminars on wealth building and so on. Everything we know about him says that he is wealthy as a direct result of his business activities. On the basis of this evidence, are we justified in concluding that he is, in fact, a wealthy businessman?About half the class said that yes, if all the evidence points in one direction, that suffices. About half the class said ‘no'', the stated evidence is circumstantial and we do not know, for example, what his bank balance is or whether he has more debt than equity. All the evidence we know about points in one direction, but we might not know all the facts.Even when looked at carefully, not every anomaly is attractive enough or ‘ripe'' enough to be pursued when first noticedHow do we know whether or not we know all the facts? Again, it is a matter of imagination. Let us review a few possible alternatives. 
Maybe his wealth comes from inheritance rather than business acumen; or from silent partners; or from drug running. Maybe he is dangerously over-extended and living on borrowed money; maybe his wealth is more apparent than real. Maybe Trump Casinos made up the role of Donald Trump as its symbol, the way McDonald''s made up the role of Ronald McDonald?Several students complained that this was a ridiculous question. Yet I had posed this just after Bernard Madoff''s arrest was blanketing the news. Madoff was known as a billionaire investor genius for decades and had even served as the head of the Securities and Exchange Commission. As it turned out, his money was obtained by a massive Ponzi scheme. Why was Madoff able to succeed for so long? Because it was inconceivable that such a famous public figure could be a common con man and the people around him could not imagine the possibility that his livelihood needed to be scrutinized.To this point, I have emphasized the benefits of paying attention to anomalous, strange or unwelcome observations. Yet paradoxically, scientists often make progress by (provisionally) putting aside anomalous or apparently negative findings that seem to invalidate or distract from their hypothesis. When Rita Levi-Montalcini was assaying the neurite-promoting effects of tumour tissue, she had predicted that this was a property of tumours and was devastated to find that normal tissue had the same effects. Only by ‘ignoring'' this apparent failure could she move forward to characterize nerve growth factor and eventually understand its biology [4].Another classic example is Huntington disease—a genetic disorder in which an inherited alteration in the gene that encodes a protein, huntingtin, leads to toxicity within certain types of neuron and causes a progressive movement disorder associated with cognitive decline and psychiatric symptoms. Clinicians observed that the offspring of Huntington disease patients sometimes showed symptoms at an earlier age than their parents, and this phenomenon, called ‘genetic anticipation'', could affect successive generations at earlier and earlier ages of onset. This observation was met with scepticism and sometimes ridicule, as everything that was known about genetics at the time indicated that genes do not change across generations. Ascertainment bias was suggested as a much more probable explanation; in other words, once a patient is diagnosed with Huntington disease, their doctors will look at their offspring much more closely and will thus tend to identify the onset of symptoms at an earlier age. Eventually, once the detailed genetics of the disease were understood at the molecular level, it was shown that the structure of the altered huntingtin gene does change. Genetic anticipation is now an accepted phenomenon.…in fact, schools teach a lot about how to test hypotheses but little about how to find good hypotheses in the first placeWhat does this teach us about discovery? Even when looked at carefully, not every anomaly is attractive enough or ‘ripe'' enough to be pursued when first noticed. The biologists who identified the structure of the abnormal huntingtin gene did eventually explain genetic anticipation, although they set aside the puzzling clinical observations and proceeded pragmatically according to their (wrong) initial best-guess as to the genetics. The important thing is to move forward.Finally, let us consider the case of Grigori Perelman, an outstanding mathematician who solved the Poincaré Conjecture a few years ago. 
He did not tell anyone he was working on the problem, lest their ‘helpful advice’ discourage him; he posted his historic proof online, bypassing peer-reviewed journals altogether; he turned down both the Fields Medal and a million dollar prize; and he has refused professorial posts at prestigious universities. Having made a deliberate decision to eschew the external incentives associated with science as a career, his choices have been written off as examples of eccentric anti-social behaviour. I suggest, however, that he might have simply recognized that the usual rules for success and the usual reward structure of the scientific community can create roadblocks, which had to be avoided if he was to solve a supposedly unsolvable problem.

If we cannot imagine new paradigms, then how can they ever be perceived, much less tested? It should be clear by now that the ‘process of scientific discovery’ can proceed by many different paths. However, here is one cognitive exercise that can be applied to almost any situation. (i) Notice a phenomenon, even if (especially if) it is familiar and regarded as a solved problem; regard it as if it is new and strange. In particular, look hard for anomalous and strange aspects of the phenomenon that are ignored by scientists in the field. (ii) Look for the hidden assumptions that guide scientists’ thinking about the phenomenon, and ask what kinds of explanation would be possible if the assumptions were false (or reversed). (iii) Make a list of possible alternative explanations, no matter how unlikely they seem to be. (iv) Ask if one of these explanations has particular appeal (for example, if it is the most elegant theoretically; if it can generalize to new domains; and if it would have great practical impact). (v) Ask what kind of evidence would allow one to favour that hypothesis over the others, and carry out experiments to test the hypothesis.

The process just outlined is not something that is taught in graduate school; in fact, schools teach a lot about how to test hypotheses but little about how to find good hypotheses in the first place. Consequently, this cognitive exercise is not often carried out within the brain of an individual scientist. Yet this creative tension happens naturally when investigators from two different fields, who have different assumptions, methods and ways of working, meet to discuss a particular problem. This is one reason why new paradigms so often emerge in the cross-fertilization of different disciplines.

There are of course other, more systematic ways of searching for hypotheses by bringing together seemingly unrelated evidence. The Arrowsmith two-node search strategy [5], for instance, is based on distinct searches of the biomedical literature to retrieve articles on two different areas of science that have not been studied in relation to each other, but that the investigator suspects might be related in some fashion. The software identifies common words or phrases, which might point to meaningful links between them. This is but one example of ‘literature-based discovery’ as a heuristic technique [6], and in turn, is part of the larger data-driven approach of ‘text mining’ or ‘data mining’, which looks for unusual, new or unexpected patterns within large amounts of observational data. Regardless of whether one follows hypothesis-driven or data-driven models of investigation, let us teach our students to repeat the mantra: ‘odd is good’!

Neil R Smalheiser
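The two-node strategy is simple enough to sketch in a few lines of code. The toy script below is not the Arrowsmith software and uses only raw word counts; the example inputs and every function name in it are hypothetical, chosen purely to illustrate the heuristic of ranking terms shared by two otherwise disjoint sets of articles.

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "for", "with", "that", "this", "are", "was",
             "from", "have", "been", "which", "their", "these", "into"}

def term_counts(texts):
    """Count informative lowercase word tokens (4+ letters, not stopwords)."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]{4,}", text.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts

def bridging_terms(literature_a, literature_c, top_n=10):
    """Rank terms occurring in both literatures as candidate linking B-terms."""
    counts_a = term_counts(literature_a)
    counts_c = term_counts(literature_c)
    shared = set(counts_a) & set(counts_c)
    return sorted(shared, key=lambda t: counts_a[t] + counts_c[t], reverse=True)[:top_n]

# Hypothetical inputs: abstracts about one disease (A) and one substance (C).
literature_a = ["Migraine attacks have been linked to cortical spreading depression and vascular spasm."]
literature_c = ["Magnesium deficiency raises neuronal excitability and promotes vascular spasm."]
print(bridging_terms(literature_a, literature_c))
```

A real literature-based discovery system would add phrase extraction, controlled vocabularies and statistical filtering, but the skeleton is the same: search two literatures separately, then look for the terms that connect them.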

7.
8.
Humans and beetles both have a species-specific Umwelt circumscribed by their sensory equipment. However, Ladislav Kováč argues that humans, unlike beetles, have invented scientific instruments that are able to reach beyond the conceptual borders of our Umwelt.You may have seen the film Microcosmos, produced in 1996 by the French biologists Claude Nuridsany and Marie Perrenou. It does not star humans, but much smaller creatures, mostly insects. The filmmakers'' magnifying camera transposes the viewer into the world of these organisms. For me, Microcosmos is not an ordinary naturalist documentary; it is an exercise in metaphysics.One sequence in the film shows a dung beetle—with the ‘philosophical'' generic name Sisyphus—rolling a ball of horse manure twice its size that becomes stuck on a twig. As the creature struggles to free the dung, it gives the impression that it is both worried and obstinate. As we humans know, the ball represents a most valuable treasure for the beetle: it will lay its eggs into the manure that will later feed its offspring. The behaviour of the beetle is biologically meaningful; it serves its Darwinian fitness.Yet, the dung beetle knows nothing of the function of manure, nor of the horse that dropped the excrement, nor of the human who owned the horse. Sisyphus lives in a world that is circumscribed by its somatic sensors—a species-specific world that the German biologist and philosopher Jakob von Uexküll would have called the dung beetle''s ‘Umwelt''. The horse, too, has its own Umwelt, as does the human. Yet, the world of the horse, just like the world of the man, does not exist for the beetle.If a ‘scholar'' among dung beetles attempted to visualize the world ‘out there'', what would be the dung-beetles'' metaphysics—their image of a part of the world about which they have no data furnished by their sensors? What would be their religions, their truths, or the Truth—revealed, and thus indisputable?Beetles are most successful animals; one animal in every four is a beetle, leading the biologist J.B.S. Haldane to quip that the Creator must have “had an inordinate fondness for beetles”. Are we humans so different from dung beetles? By birth we are similar: inter faeces et urinas nascimur—we are born between faeces and urine—as Aurelius Augustine remarked 1,600 years ago. Humans also have a species-specific Umwelt that has been shaped by biological evolution. A richer one than is the Umwelt of beetles, as we have more sensors than have they. Relative to the body size, we also possess a much larger brain and with it the capacity to make versatile movements with our hands and to finely manipulate with our fingers.This manual dexterity has enabled humans to fabricate artefacts that are, in a sense, extensions and refinements of the human hand. The simplest one, a coarse-chipped stone, represents the evolutionary origins of artefacts. Step-by-step, by a ratchet-like process, artefacts have become ever more complicated: as an example, a Boeing 777 is assembled from more than three million parts. At each step, humans have just added a tiny improvement to the previously achieved state. Over time, the evolution of artefacts has become less dependent on human intention and may soon result in artefacts with the capacity for self-improvement and self-reproduction. In fact, it is by artefacts that humans transcend their biology; artefacts make humans different from beetles. 
Here is the essence of the difference: humans roll their artefactual balls, no less worried and obstinate than beetles, but, in contrast to the latter, humans often do it even if the action is biologically meaningless, at the expense of their Darwinian fitness. Humans are biologically less rational than are beetles.

Artefacts have immensely enriched the human Umwelt. From among them, scientific instruments should be singled out, as they function as novel, extrasomatic sensors of the human species. They have substantially fine-grained human knowledge of the Umwelt. But they are also reaching out—both to a distance and at a rate that is exponentially increasing—behind the boundary of the human Umwelt, behind its conceptual confines that we call Kant’s barriers. Into the world that has long been a subject of human ‘dung-beetle-like’ metaphysics. Nevertheless, our theories about this world could now be substantiated by data coming from the extrasomatic sensors. These instruments, fumbling in the unknown, supply reliable and reproducible data such that their messages must be true. They supersede our arbitrary guesses and fancies, but their truth seems to be out of our conceptual grasp. Conceptually, our mind confines us to our species-specific Umwelt.

We continue to share the common fate of our fellow dung beetles: There is undeniably a world outside the confinements of our species-specific Umwelt, but if the world of humans is too complex for the neural ganglia of beetles, the world beyond Kant’s barriers may similarly exceed the capacity of the human brain. The physicist Richard Feynman (1965) stated, perhaps resignedly, “I can safely say that nobody today understands quantum mechanics.” Frank Gannon (2007) likewise commented that biological research, similarly to research in quantum mechanics, might be approaching a state “too complex to comprehend”. New models of the human brain itself may turn out to be “true and effective—and beyond comprehension” (Kováč, 2009).

The advances of science notwithstanding, the knowledge of the universe that we have gained on the planet Earth might yet be in its infancy. However, in contrast to the limited capacity of humans, the continuing evolution of artefacts may mean that they face no limits in their explorative potential. They might soon dispense with our conceptual assistance exploring the realms that will remain closed to the human mind forever.

9.
Samuel Caddick, EMBO reports (2008) 9(12): 1174–1176

10.
Cassi D, EMBO reports (2011) 12(3): 191–196
Molecular cuisine, despite popular opinion, is not science that is performed in the kitchen. It is the application of scientific understanding to the development of new cooking techniques and traditions.In January 2009, I participated in a round-table discussion, “Does ‘Molecular Cuisine'' Exist?”, at Madrid Fusion, the largest gastronomy conference in the world. It was the most popular event at that conference, which is impressive considering that, until 20 years ago, the adjective molecular was never used in conjunction with the words gastronomy, cooking or cuisine. Indeed, when the poster for the first “International Workshop on Molecular and Physical Gastronomy”, held in Erice, Italy, appeared in 1992, many people at universities around the world thought it was a joke. Actually, its original title was simply “Science and Gastronomy”, but it had to be changed to sound less ‘frivolous'' and more academic for the printed announcement of the workshop. The term molecular was chosen as molecular biology was the hot scientific field at the time (McGee, 2008).The interactions between science and cooking are as old as science itself…The participants in the first Erice workshop included not only scientists, but also chefs and writers. The goal of the meeting was to explore four points: “to what extent is the science underlying these [cooking] processes understood; whether the existing cooking methods could be improved by a better understanding of their scientific bases; whether new methods or ingredients could improve the quality of the end-products or lead to innovations; whether processes developed for food processing and large scale catering could be adapted to domestic or restaurant kitchens.” As such, the novelty of the workshop with respect to other food-science meetings was the emphasis on gastronomy and real kitchens, rather than industrial processes and products.The interactions between science and cooking are as old as science itself: the French physicist Denis Papin invented the pressure cooker in 1679 and described it in a book that can be considered the first modern text on ‘science and cooking'' (Papin, 1681). However, at the end of the twentieth century, cooking was increasingly considered a frivolous and unimportant subject for scientists, and science itself had become detached from people''s everyday lives. Nevertheless, the recent, impressive advances in biochemistry and soft-matter physics have helped scientists to analyse and comprehend culinary processes in a way that would have been unthinkable a few years ago. One of the first indications that the scientific analysis of culinary phenomena could be improved was the publication of the now classic book On Food and Cooking: the Science and Lore of the Kitchen by Harold McGee (McGee, 1984), which is still a reference for cooks around the world.However, at the end of the twentieth century, cooking was increasingly considered a frivolous and unimportant subject for scientists…Meanwhile, the young Spanish chef Ferran Adrià started the greatest culinary revolution of the century by using the siphon—originally designed to make whipped cream—to produce mousses and foams with unusual ingredients, such as vegetables, fruits, fish and meat. Adrià was looking for novelty in every area of cooking, and he started to experiment with new techniques and new ingredients, but did not interact with science or scientists. 
In parallel, the Erice workshop took place five more times in 1995, 1997, 1999, 2001 and 2004, and was mainly devoted to exploring the more scientific aspects of traditional cooking, namely understanding the science underlying cooking processes and ways to improve existing techniques by applying this knowledge.True collaborations between chefs and scientists only started at the beginning of the past decade: in France, chef Pierre Gagnaire teamed up with Hervé This; Heston Blumenthal in England with Peter Barham; in Spain, Andoni Luis Aduriz and later Dani Garcia with Raimundo Garcia del Moral, and Ferran Adrià with Pere Castells. In Italy, I started collaborating with Ettore Bocchia and, in 2002, we presented an experimental menu of innovative Italian cuisine that was based on scientific investigation. We declared that it was inspired by molecular gastronomy, but a newspaper article introduced a new expression: molecular cuisine (Paltrinieri, 2002).This term was unusual, but we decided to use it nonetheless because ‘cuisine'' sounded more practical and realistic than ‘gastronomy'', and it was better suited to describing our work (Cassi, 2004). In the following years, the term was unexpectedly successful, and people began to use it to describe any type of cuisine arising from collaborations between chefs and scientists. It goes without saying that each of the chef–scientist pairs mentioned above produced different types of cuisine. To more accurately define our style, we therefore decided to call it “Italian molecular cuisine”, and we published the Manifesto of Italian Molecular Cuisine (Cassi & Bocchia, 2005a,b; Sidebar A).

Sidebar A | The manifesto of Italian molecular cuisine

Italian molecular cuisine aims to develop new techniques for cooking and to create new dishes, remaining firmly loyal to the following principles.
  1. Every innovation must expand, not destroy, the Italian gastronomic tradition.
  2. The new techniques and the new dishes must enhance the natural ingredients and the high-quality raw materials.
  3. It will be a cuisine attentive to the nutritional values of food and to the well-being of those who eat it, not only to aesthetic and sensory aspects.
  4. It must meet its goals by creating new textures with ingredients chosen according to the above criteria. It will create new textures by studying the physical and chemical properties of the ingredients and planning, from these, new microscopic architectures.
These first examples of collaborations seemed to fulfil the third point of the goals of the Erice meeting—to apply new methods and ingredients to improve the quality of food and create new dishes—but eventually they also fulfilled the final point: to bring food processing techniques to domestic and restaurant kitchens. From 2003 to 2005, the European Union funded a project called INICON (Introduction of Innovative Technologies in Modern Gastronomy for Modernisation of Cooking), which helped to transfer ingredients and techniques from industrial food technology to restaurant kitchens. The most relevant result of this project was the introduction and popularization of food additives—mainly texturizers—to the haute-cuisine world and ordinary restaurants. It also created the first problems for molecular cuisine; as the greatest chefs used these additives in a creative way, an increasing number of other cooks misused them, simply for special effects.Soon, the media associated molecular cuisine with food additives, making no distinction between the great chefs and their bad imitators. As a consequence, many top chefs dissociated themselves from molecular gastronomy and molecular cuisine. At the end of 2006, Ferran Adrià, Heston Blumenthal and Thomas Keller, together with Harold McGee, published a statement on the ‘new cookery'': “The fashionable term ‘molecular gastronomy'' was introduced relatively recently, in 1992, to name a particular academic workshop for scientists and chefs on the basic food chemistry of traditional dishes. That workshop did not influence our approach, and the term ‘molecular gastronomy'' does not describe our cooking, or indeed any style of cooking” (Adrià et al, 2006).…the media associated molecular cuisine with food additives, making no distinction between the great chefs and their bad imitatorsSoon after came the first attacks on molecular cuisine, on the basis of allegations that additives dangerous to health were being used. In 2008, the Spanish chef Santi Santamaria published a book called La Cocina al Desnudo (‘The Bare Kitchen''; Santamaria, 2008) and, in 2009, the German journalist Jörg Zipprick published, in Spain, an even more explicit book, the translated title of which is I Don''t Want to Go Back to the Restaurant! How the Molecular Cuisine Serves us Wallpaper Paste and Fire Extinguisher Powder (Zipprick, 2009). In the same year, a satirical Italian television programme started an aggressive campaign against molecular cuisine and the use of additives in restaurants, which even prompted the health ministry to issue an order restricting their use. Although all the additives used in restaurants are authorized by the European Union for human consumption and are no different to the additives we eat every day in industrial products, those campaigns had a great effect on public opinion, and many people became aware of molecular cuisine only through these attacks.The round table discussion in Madrid in 2009 was organized to discuss this situation. The participants—myself, Ferran Adrià, Heston Blumenthal, Andoni Luis Aduriz and Harold McGee—agreed on two basic points: the term ‘molecular cuisine'' does not indicate a specific style of cooking, as the chefs labelled as ‘molecular'' have very different styles; and the role of science in cooking is usually limited to the development of a new technique or a new recipe and there is very little ‘science'' in the final preparation of a dish. 
In other words, one can learn a new technique that is the result of scientific experimentation and apply it without knowing the science, just as we can use a computer without knowing anything about the electronics inside. It is therefore necessary to distinguish between the scientific phase—or ‘scientific cooking'', in which we explore new techniques and dishes—and the practical phase, in which we realize that dish in a kitchen.It is undeniable that during the past decade, a scientific approach to cooking has produced a huge number of new techniques and recipes—more than in any other period of history—and introduced new ingredients and devices. These techniques and dishes are what the media and commentators on the internet commonly call ‘molecular cuisine''.It is undeniable that during the past decade, a scientific approach to cooking has produced a huge number of new techniques and recipes—more than in any other period of history…Many of these inventions are probably short-lived fads, but it is certain that many others will come to be commonly used in restaurants and in home kitchens, and become part of the culinary tradition. In fact, culinary tradition is not a fixed and unchanging list of old recipes, it is a structured set of ingredients, dishes, techniques and rituals, united by a common spirit, that evolves continuously to adapt to present needs.During the past few decades, it has become apparent that we need to change our diet for several reasons. First, our lifestyles have dramatically and rapidly changed, but our diet has not. Second, scientific inquiries and epidemiological data have shown that some elements of our diet—notably fats and carbohydrates—should be reduced, whereas other should be consumed in larger amounts, to meet nutritional requirements. In addition, new ingredients have become available and others are now more difficult to find in markets and supermarkets. Lastly, our tastes and our way of viewing food are changing continuously. All this takes place at an increasing rate, fostered by the greater ease of international travel and the fast dissemination of information through the media and the internet.…culinary tradition is not a fixed and unchanging list of old recipes…To better understand the need for change and adaptation with regard to food and the role that science can play, it is illuminating to consider what Auguste Escoffier, the father of modern French cuisine, wrote more than a century ago: “If everything is changing, it would be absurd to claim to fix the destiny of an art based, in many respects, on fashion, and as unstable as it. If taste is becoming more refined, the culinary art too has to conform to it. To contrast the effects of modern super activity, cooking will become more scientific and precise” (Escoffier, 1903).Even if it is impossible to determine which innovations will become an integral part of culinary tradition, we can make some predictions. The relationship between the world of haute cuisine—in which most innovations have been developed—and that of common cooking, is similar to the relationship between Formula 1 racing and the consumer car market; inventions only enter into common use if they meet certain basic requirements. Specifically, they have to be sufficiently simple to use, widely applicable, easily available and affordable, and in line with the main trends of the consumer market. Of course, trends tend to change and evolve over time, but general trends have a much longer lifespan than mere fashions. 
For several years, these trends have been a nutritional-dietetic trend (food for health), a natural-biological trend (no ‘chemistry'', no synthetic ingredients), and an aesthetic trend. Taking into account these requirements, we can now discuss the main innovations introduced by molecular cuisine, and evaluate which ones are most likely to survive.Texturizers are generally easy to use and allow the chef to, for example, simply and quickly transform a liquid into a gel or foamInnovations can be broadly grouped into three classes: ingredients, tools and devices, and processing techniques, even with usual ingredients. New ingredients are generally food additives—which is the main focus of the criticism levelled at molecular cuisine. However, the definition of a food additive is not scientific, but legal: the European Union defines these as any substance not normally consumed as a food in itself—even if it has nutritional value—and not normally used as a characteristic ingredient in food, but which is added for a technological purpose in the manufacture, processing, preparation, treatment or packaging. This definition also does not say anything about the origin or possible health risks of these substances, which can be very different from each other.These new ingredients are mostly texturizers—that is, substances that give food a desired texture—and they are usually sold as powders. It is not difficult to understand the reason for their success among cooks. To add taste, flavour or colour to a dish, we just add a pinch of a powder or a few drops of a liquid. Creating textures is considerably more complex: texture depends on the microscopic arrangement of molecules, and altering it can require both the addition of ingredients and the use of specific procedures. Texturizers are generally easy to use and allow the chef to, for example, simply and quickly transform a liquid into a gel or foam. The main categories of texturisers used in molecular cuisine are gelling agents, emulsifiers and thickeners. If they are used well, chefs can obtain results that are not possible with traditional ingredients (Sidebars B, C).

Sidebar B | Guggenheim Bilbao (Quique Dacosta)

Ingredients (serves 4)
Shellfish stock: 400 g cockles; 200 g barnacles; 25 g shallots; 3 oysters; half a clove of garlic with skin; 1 l mineral water; 25 g aloe vera.
Base of the plate: 0.5 dl shellfish stock; 0.3 g agar; 2 drops lemon juice; 0.2 g silver powder; 2 ml aloe vera juice.
Silver and titanium veil: 100 g shellfish stock; 0.7 g agar; 2 g gelatine; 5 ml centrifuged aloe vera juice; 0.2 g silver powder; 0.2 ml liquid titanium.
Silver and aloe vera sheet: 200 g shellfish stock; 35 g tapioca; 1 g silver powder; 35 g aloe vera.
Oysters: 4 large oysters; juniper ember.

Preparation
Shellfish stock. Clean all ingredients, cover with water and bring to the boil. Skim and simmer for 1 h over a low heat, without boiling. Let the stock stand for 2 h, then strain.
Base of the plate. Boil the shellfish stock with agar, then add the juice of centrifuged aloe vera, cool to 40 °C and add the silver powder with the lemon juice. Pour 12 g of this preparation on to the bottom of the plate and let it solidify.
Silver and titanium veil. Boil the shellfish stock with agar and gelatine. Remove from the heat and let it cool to 40 °C, then add silver and titanium. Pour it into a pan, to form a 1 mm-thick layer. Let it stand until a gel forms that can be handled and heated under the grill.
Silver and aloe vera sheet. Bring the stock to the boil, add tapioca and aloe vera juice and cook for 15 min. Blend, strain and add the silver, stirring with a whip to get a thick paste. Roll it up on a sheet of parchment paper. Bake at 60 °C and let it dry until you get a crispy, thin and brittle layer.
Oysters. Shuck the oysters and heat them on the grill for 30 s, using juniper ember to flavour them.

Final preparation. Arrange the heated oysters on the plate, cover with the veil, heat under the grill for 30 s, let thicken and decorate with the silver and aloe vera sheet (Meldolesi & Noto, 2006).

Sidebar C | Encapsulated olive oil with virtual Iberian bacon (Ferran Adrià)

Ingredients (serves 4)
For the solution of sodium alginate: 0.5 l water; 3 g sodium alginate.
For the solution of calcium chloride: 1 l water; 10 g calcium chloride.
For the olive oil capsules: 500 g solution of sodium alginate; 1 kg solution of calcium chloride; 60 g olive oil.
For the ham consommé: 250 g scraps of Iberian ham; 0.5 l water.
For the melted ham fat: 100 g Iberian ham fat.
For the hot ham jelly: 2.5 dl Iberian ham broth; 4.5 g agar; Maldon salt to taste.

Preparation
For the solution of sodium alginate. Mix water and sodium alginate in a blender until the sodium alginate is completely dissolved and store in the refrigerator for 24 h.
For the solution of calcium chloride. Dissolve the calcium chloride in water and set aside.
For the olive oil capsules. Encapsulate the olive oil with an encapsulator, producing spherical capsules of 4 mm diameter. Prepare 15 g of capsules per person and store in the refrigerator.
For the ham consommé. Cut the ham into small pieces and cover with water. Boil over medium heat for 15 min, skimming constantly. Filter and degrease the broth.
For the melted ham fat. Remove the lean part from the ham fat. Cook on a low heat for 20 min. Pour and store the liquid fat.
For the hot ham jelly. Dilute the agar in the ham consommé at room temperature and bring to the boil, stirring with a whisk. Remove from the heat and skim. Pour the gelatine on a flat plate and roll it up to get sheets 1 mm thick. Let it solidify in the fridge for 2 h.

Final preparation. Melt the fat of Iberian ham and brush the sheets of jelly with the consommé. Place 15 g of oil capsules on the bases of four oval gold trays and sprinkle with Maldon salt. Heat the jelly under the grill and place 8 pieces of gelatine of approximately 2.5 cm above the capsules, to simulate the appearance of bacon. Heat under the grill and serve (Meldolesi & Noto, 2006).

Until a few years ago, the only gelling agents used in the kitchen were gelatine and pectin for jams. Gelatine produces pleasant gels such as aspic, but it melts at 35 °C and therefore does not allow the creation of hot gels. When Ferran Adrià realized that agar, a common ingredient in the Far East, melts at 85 °C, he began to use it for a new class of preparations that were unusual for Western cuisine. Since then, other gelling agents with specific properties have been introduced into the kitchen: the most popular ones are carrageenans, gellan gum, methylcellulose and sodium alginate. The latter two enabled the creation of very original dishes. Methylcellulose behaves oppositely to gelatine: at temperatures above 55 °C it forms a firm gel that melts as it cools. It is used to prepare so-called ‘hot ice cream’. Sodium alginate polymerizes into a gel in aqueous solutions that contain calcium ions: one calcium ion replaces two sodium ions and links two polymer chains together. Adrià uses it in a peculiar technique called spherification: sodium alginate is added to a liquid, which is dropped into an aqueous solution of calcium chloride. The alginate at the surface of the droplet becomes a gel and forms a thin film around the liquid inside.

The most widely used emulsifier is soy lecithin. It is useful not only for creating a variety of sauces based on fat-in-water emulsions, but also for producing extremely soft foams called ‘airs’. The latter contain a small amount of liquid with respect to their air content, have a pleasing appearance and are particularly suitable for diluting aromas and flavours to distribute them evenly in a dish.
However, soy lecithin is not suitable for water-in-fat emulsions and air-in-oil foams; for this kind of preparation mono- and diglycerides of fatty acids are commonly used.…science can help us to think of new ways to transform food, even in traditional contextsThickeners—substances that increase the viscosity of sauces and, more generally, of liquids—are already widely used in traditional cooking, most commonly flours and starches. However, large amounts of these traditional thickeners are usually required, which is a problem from the gastronomic point of view, because they dilute tastes and flavours. Cooks have therefore started to use xanthan gum, which produces a significant thickening effect, even in small amounts.All of these new ingredients are generally not too expensive and could become popular in household kitchens, despite attacks in the media against food additives. The biggest problem probably relates to methylcellulose, which is a synthetic compound and not a natural substance. At present, most of these additives can only be purchased at specialty retailers—with the exception of soy lecithin, which is sold in supermarkets in Italy—and this does not help their dissemination. In addition, they are not part of traditional food culture and people do not know how to use them. It is likely that their use will become more common when a sufficient number of recipes are published by trusted chefs, or a sufficient number of dishes that make use of them are prepared on television cooking programmes.Turning to new tools and devices, it is important to consider those that have wider applications. A good example of science applied to cooking is the microwave oven, which can now be found in nearly every kitchen. It also demonstrates the point that most people only invest in equipment that they will use regularly. If we limit ourselves to considering devices that might be used often, the most interesting new techniques are sous-vide cooking and ultra-rapid cooling in liquid nitrogen. The former was first used in France in 1974 by Georges Pralus, but only began to spread to restaurant kitchens in the 1990s. It involves cooking food—usually meat, poultry and fish—in vacuum-sealed plastic bags that are immersed in a water bath for long periods of time. The temperature is accurately maintained and is usually much lower than 100 °C—typical cooking temperatures for sous-vide range between 50 and 70 °C—and the cooking time can extend to three days. The vacuum-sealed bags are mainly used to prevent oxidation and exchange of matter between food and water, but the key point of this technique is temperature control, which makes it possible to produce a variety of textures and flavours.The main reason for the slow uptake of this technique apart from in restaurants has been cost: the price of the most popular digital thermostat with thermal immersion circulator exceeds €1,500. However, one year ago a water oven was launched that costs only €600, and Heston Blumenthal announced a sous-vide cooking device for €300. At this point, it is easy to imagine that sous-vide cooking will arrive in home kitchens in the coming years, as the microwave did years ago.Becoming more experienced, the interested cook can develop new custom dishes by applying the techniques that he or she has learned, or more general scientific principlesThe second technique uses liquid nitrogen to cool food at a speed that is impossible by any other method. 
It allows not only deep-freezing of food at home—even just-cooked food, preserving all its flavours—but also new textures and dishes to be produced. Ultra-rapid cooling of a liquid below its solidification temperature generally produces many small crystals rather than a few big crystals, but it can also give rise to glassy structures with peculiar mechanical and thermal properties. Without going into more detail, by using liquid nitrogen cooks can make a smooth ice cream from almost any liquid—fruit juice, wine or beer, a cup of coffee or soup—without the use of additives such as thickeners or emulsifiers.The main problems for the uptake of this technique are the availability and price of Dewar containers for storing liquid nitrogen—these usually cost a few hundred Euros. However, no special tools are required to use liquid nitrogen in a home kitchen, and these problems could be solved by selling it in small quantities, which can be stored for a day in a common metal thermos flask.Several other interesting devices have been introduced in top restaurants, such as vacuum-pressure cookers, rotary evaporators and lyophilizers. However, their prices make them unaffordable for most restaurants and even more so for home users. Nevertheless, some small manufacturers have begun to successfully market food that has been processed with these devices, thereby increasing the availability of new ingredients.In any case, it is not necessary to use new ingredients or new tools and devices to create new foods. We can invent new processing techniques for normal ingredients using normal tools and devices. This is one of the distinguishing features of the Italian approach to molecular cuisine. Indeed, science can help us to think of new ways to transform food, even in traditional contexts.In 2002, I was looking for ‘frying'' methods that do not use fats. I needed a liquid that could be heated to temperatures high enough to generate Maillard reactions without evaporating or burning. The solution was molten glucose: glucose powder that is molten in a pot on fire. It conducts heat and retains flavours better than oil, and the results were excellent, from a gastronomic point of view.Other good examples of new foods are the egg curd and marinated egg-yolk, which are created by using room-temperature techniques to denature and coagulate the egg proteins. For the former, we pour alcohol on the egg, stir and then wash the curd in cold water and wring it in cheesecloth. The second method, introduced by the Italian chef Carlo Cracco, denatures and coagulates the egg-yolk proteins in a mixture of salt, sugar and dry bean puree.Another product in line with the Italian tradition is the legume-flour pasta that I introduced in 2007 with the chef Fulvio Pierangelini. The gluten-free legume flour is cooked for several hours at 90 °C in a dry oven and, once cooled, it is mixed with water and kneaded. The heat denatures the legume proteins, thereby facilitating the formation of bonds between them in the presence of water during kneading. This gives rise to a network structure without gluten. Subsequent cooking in boiling water reinforces the network and produces a unique al dente texture.Moreover, it could encourage people to spend more time preparing and enjoying their food and, hopefully, adopt a healthier diet along the wayFor obvious reasons, this type of innovation is the easiest to disseminate and it can be done in the home kitchen with common ingredients and tools. 
Anyone who is intrigued by this novel dish might then ask about its basis and might be stimulated to learn more about the underlying science. Becoming more experienced, the interested cook can develop new custom dishes by applying the techniques that he or she has learned, or more general scientific principles. He or she can, for example, produce emulsified sauces without cholesterol, by using egg white or soy lecithin instead of egg yolk. He or she can also invent vegetarian versions of prawn crackers, by frying retrograded starch gels.

All of this is useful for both the popularization of science and the creation of new foods. It also enables the creation of a new cooking culture, in which the consumer is able to adapt cooking processes to his or her dietary needs and taste. Moreover, it could encourage people to spend more time preparing and enjoying their food and, hopefully, adopt a healthier diet along the way. The application of science to cooking has another dimension: as scientists increasingly analyse what we eat, why we prefer certain foods and what we should eat to be healthier, it is therefore logical that science should also investigate and help us to improve the ways in which we prepare our food—not just for the culinary pleasure of haute cuisine, but for everyone who enjoys cooking.

[Photos by Bob Noto, from the book Grandi chef di Spagna]

Davide Cassi

Science & Society Series on Food and Science

This article is part of the EMBO reports Science & Society series on ‘food and science'' to highlight the role of natural and social sciences in understanding our relationship with food. We hope that the series serves a delightful menu of interesting articles for our readers.  相似文献   

11.
12.
A P Hendry 《Heredity》2013,111(6):456-466
Increasing acceptance that evolution can be 'rapid' (or 'contemporary') has generated growing interest in the consequences for ecology. The genetics and genomics of these 'eco-evolutionary dynamics' will be—to a large extent—the genetics and genomics of organismal phenotypes. In the hope of stimulating research in this area, I review empirical data from natural populations and draw the following conclusions. (1) Considerable additive genetic variance is present for most traits in most populations. (2) Trait correlations do not consistently oppose selection. (3) Adaptive differences between populations often involve dominance and epistasis. (4) Most adaptation is the result of genes of small-to-modest effect, although (5) some genes certainly have larger effects than others. (6) Adaptation by independent lineages to similar environments is mostly driven by different alleles/genes. (7) Adaptation to new environments is mostly driven by standing genetic variation, although new mutations can be important in some instances. (8) Adaptation is driven by both structural and regulatory genetic variation, with recent studies emphasizing the latter. (9) The ecological effects of organisms, considered as extended phenotypes, are often heritable. Overall, the study of eco-evolutionary dynamics will benefit from perspectives and approaches that emphasize standing genetic variation in many genes of small-to-modest effect acting across multiple traits, and that analyse overall adaptation or 'fitness'. In addition, increasing attention should be paid to dominance, epistasis and regulatory variation.  相似文献
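To illustrate conclusions (4) and (7), that most adaptation involves many genes of small-to-modest effect drawn from standing variation, the following toy simulation (a sketch of my own, not taken from the review) applies truncation selection to a trait controlled by 100 loci of equal, small additive effect. The trait mean responds rapidly even though no single locus has a large effect and no new mutations arise.

import random

random.seed(1)
N_LOCI, POP_SIZE, GENERATIONS, TOP_FRACTION = 100, 500, 20, 0.2

def new_genotype():
    # Allele count (0, 1 or 2 copies of the '+' allele) at each locus, starting at p = 0.5.
    return [(random.random() < 0.5) + (random.random() < 0.5) for _ in range(N_LOCI)]

def transmit(count):
    # One gamete: the '+' allele is passed on with probability count / 2.
    return 1 if random.random() < count / 2 else 0

def phenotype(genotype):
    # Additive genetic value plus environmental noise (heritability of roughly 0.7 here).
    return sum(genotype) + random.gauss(0, 5)

population = [new_genotype() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS + 1):
    mean_value = sum(sum(g) for g in population) / POP_SIZE
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean genetic value = {mean_value:.1f}")
    selected = sorted(population, key=phenotype, reverse=True)[: int(POP_SIZE * TOP_FRACTION)]
    population = [[transmit(p1[i]) + transmit(p2[i]) for i in range(N_LOCI)]
                  for p1, p2 in (random.sample(selected, 2) for _ in range(POP_SIZE))]

With these assumed parameters the mean genetic value climbs steadily from its starting point of about 100 towards the maximum of 200, purely by reshuffling alleles already present in the founding population.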

13.
The differentiation of pluripotent stem cells into various progeny is perplexing. In vivo, nature imposes strict fate constraints. In vitro, PSCs differentiate into almost any phenotype. Might the concept of 'cellular promiscuity' explain these surprising behaviours?

John Gurdon's [1] and Shinya Yamanaka's [2] Nobel Prize recognizes discoveries that vex fundamental concepts about the stability of cellular identity [3,4], ageing as a rectified path and the differences between germ cells and somatic cells. The differentiation of pluripotent stem cells (PSCs) into progeny, including spermatids [5] and oocytes [6], is perplexing. In vivo, nature imposes strict fate constraints. Yet in vitro, reprogrammed PSCs, liberated from the body's governance, freely differentiate into any phenotype—except placenta—violating even the segregation of somatic cells from germ cells. Albeit anthropomorphic, might the concept of 'cellular promiscuity' explain these surprising behaviours?

Fidelity to one's differentiated state is nearly universal in vivo—even cancers retain some allegiance. Appreciating the mechanisms in vitro that liberate reprogrammed cells from the numerous constraints governing development in vivo might provide new insights. Like highway guiderails, a range of constraints precludes progeny cells within embryos and organisms from travelling too far from the trajectory set by their ancestors. Restrictions are imposed externally—basement membranes and intercellular adhesions; internally—chromatin, cytoskeleton, endomembranes and mitochondria; and temporally, by ageing.

'Cellular promiscuity' was glimpsed previously during cloning; it was seen when somatic cells successfully 'fertilized' enucleated oocytes in amphibians [1] and later with 'Dolly' [7]. Embryonic stem cells (ESCs) corroborate this. The inner cell mass of the blastocyst develops faithfully, but liberation from the trophectoderm generates pluripotent ESCs in vitro, which are freed from fate and polarity restrictions. These freedom-seeking ESCs still abide by three-dimensional rules, as they conform to chimaera body patterning when injected into blastocysts. Yet transplantation elsewhere results in chaotic teratomas or helter-skelter in vitro differentiation—that is, pluripotency.

August Weismann's germ plasm theory recognized, 130 years ago, that gametes produce somatic cells, never the reverse. Primordial germ cell migrations into fetal gonads, and parent-of-origin imprints, explain how germ cells are sequestered, retaining genomic and epigenomic purity. Left uncontaminated, these future gametes are held in pristine form to parent the next generation. However, the cracks separating germ and somatic lineages in vitro are widening [5,6]. Perhaps they are restrained within gonads not for their purity but to prevent wild, uncontrolled misbehaviours resulting in germ cell tumours.

The 'cellular promiscuity' concept might explain why, for PSCs in vitro, cells of nearly any desired lineage can be detected using monospecific markers. Are assays so sensitive that rare cells can be detected in heterogeneous cultures? Certainly, population heterogeneity is a consideration for transplantable cells—dopaminergic neurons and islet cells—compared with applications needing only a few cells—sperm and oocytes. This dilemma of maintaining cellular identity in vitro after reprogramming is significant. 
If not addressed, the value of unrestrained induced PSCs (iPSCs) as reliable models for 'diseases in a dish', let alone for subsequent therapeutic transplantations, might be diminished. X-chromosome re-inactivation variants in differentiating human PSCs, epigenetic imprint errors and copy number variations are all indicators of in vitro infidelity. PSCs, which are held to be undifferentiated cells, are in vitro artefacts after all, since in vivo they would simply proceed through their programmed development.

If correct, the hypothesis accounts for concerns raised about the inherent genomic and epigenomic unreliability of iPSCs; they are likely to be unfaithful to their in vivo differentiation trajectories owing both to their freedom from in vivo developmental programmes and to poorly characterized modifications in culture conditions. 'Memory' of the PSC's in vivo identity might need to be improved by using approaches that do not fully erase imprints. Regulatory authorities, including the Food & Drug Administration, require evidence that cultured PSCs do retain their original cellular identity. Notwithstanding fidelity lapses at the organismal level, the recognition that our cells have intrinsic freedom-loving tendencies in vitro might generate better approaches for releasing somatic cells only on probation, rather than granting them full emancipation.  相似文献

14.
Indirect genetic effects (IGEs) describe how an individual''s behaviour—which is influenced by his or her genotype—can affect the behaviours of interacting individuals. IGE research has focused on dyads. However, insights from social networks research, and other studies of group behaviour, suggest that dyadic interactions are affected by the behaviour of other individuals in the group. To extend IGE inferences to groups of three or more, IGEs must be considered from a group perspective. Here, I introduce the ‘focal interaction’ approach to study IGEs in groups. I illustrate the utility of this approach by studying aggression among natural genotypes of Drosophila melanogaster. I chose two natural genotypes as ‘focal interactants’: the behavioural interaction between them was the ‘focal interaction’. One male from each focal interactant genotype was present in every group, and I varied the genotype of the third male—the ‘treatment male’. Genetic variation in the treatment male''s aggressive behaviour influenced the focal interaction, demonstrating that IGEs in groups are not a straightforward extension of IGEs measured in dyads. Further, the focal interaction influenced male mating success, illustrating the role of IGEs in behavioural evolution. These results represent the first manipulative evidence for IGEs at the group level.  相似文献   
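One way to test the central claim, that genetic variation in the treatment male influences behaviour within the focal interaction, is a one-way analysis of variance across treatment genotypes. The sketch below is hypothetical: the column names, genotype labels, group sizes and effect sizes are invented for illustration and are not taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Simulated data: 30 replicate triads per treatment genotype; the response is an
# aggression score for the focal interaction. All names and values are invented.
assumed_effect = {"G1": 0.0, "G2": 1.5, "G3": -0.5, "G4": 2.0}
rows = [{"treatment_genotype": genotype,
         "focal_aggression": 10 + shift + rng.normal(0, 2)}
        for genotype, shift in assumed_effect.items() for _ in range(30)]
data = pd.DataFrame(rows)

# One-way ANOVA: does the treatment male's genotype explain variation in
# aggression within the focal interaction (an indirect genetic effect)?
model = smf.ols("focal_aggression ~ C(treatment_genotype)", data=data).fit()
print(anova_lm(model))

A fuller analysis might add blocking factors and also model mating success; the point here is only the shape of the test, with the treatment male's genotype as the predictor and the focal interaction providing the response.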

15.
The authors of “The anglerfish deception” respond to the criticism of their article.EMBO reports (2012) advanced online publication; doi: 10.1038/embor.2012.70EMBO reports (2012) 13 2, 100–105; doi: 10.1038/embor.2011.254Our respondents, eight current or former members of the EFSA GMO panel, focus on defending the EFSA''s environmental risk assessment (ERA) procedures. In our article for EMBO reports, we actually focused on the proposed EU GMO legislative reform, especially the European Commission (EC) proposal''s false political inflation of science, which denies the normative commitments inevitable in risk assessment (RA). Unfortunately the respondents do not address this problem. Indeed, by insisting that Member States enjoy freedom over risk management (RM) decisions despite the EFSA''s central control over RA, they entirely miss the relevant point. This is the unacknowledged policy—normative commitments being made before, and during, not only after, scientific ERA. They therefore only highlight, and extend, the problem we identified.The respondents complain that we misunderstood the distinction between RA and RM. We did not. We challenged it as misconceived and fundamentally misleading—as though only objective science defined RA, with normative choices cleanly confined to RM. Our point was that (i) the processes of scientific RA are inevitably shaped by normative commitments, which (ii) as a matter of institutional, policy and scientific integrity must be acknowledged and inclusively deliberated. They seem unaware that many authorities [1,2,3,4] have recognized such normative choices as prior matters, of RA policy, which should be established in a broadly deliberative manner “in advance of risk assessment to ensure that [RA] is systematic, complete, unbiased and transparent” [1]. This was neither recognized nor permitted in the proposed EC reform—a central point that our respondents fail to recognize.In dismissing our criticism that comparative safety assessment appears as a ‘first step'' in defining ERA, according to the new EFSA ERA guidelines, which we correctly referred to in our text but incorrectly referenced in the bibliography [5], our respondents again ignore this widely accepted ‘framing'' or ‘problem formulation'' point for science. The choice of comparator has normative implications as it immediately commits to a definition of what is normal and, implicitly, acceptable. Therefore the specific form and purpose of the comparison(s) is part of the validity question. Their claim that we are against comparison as a scientific step is incorrect—of course comparison is necessary. This simply acts as a shield behind which to avoid our and others'' [6] challenge to their self-appointed discretion to define—or worse, allow applicants to define—what counts in the comparative frame. Denying these realities and their difficult but inevitable implications, our respondents instead try to justify their own particular choices as ‘science''. First, they deny the first-step status of comparative safety assessment, despite its clear appearance in their own ERA Guidance Document [5]—in both the representational figure (p.11) and the text “the outcome of the comparative safety assessment allows the determination of those ‘identified'' characteristics that need to be assessed [...] and will further structure the ERA” (p.13). 
Second, despite their claims to the contrary, ‘comparative safety assessment'', effectively a resurrection of substantial equivalence, is a concept taken from consumer health RA, controversially applied to the more open-ended processes of ERA, and one that has in fact been long-discredited if used as a bottleneck or endpoint for rigorous RA processes [7,8,9,10]. The key point is that normative commitments are being embodied, yet not acknowledged, in RA science. This occurs through a range of similar unaccountable RA steps introduced into the ERA Guidance, such as judgement of ‘biological relevance'', ‘ecological relevance'', or ‘familiarity''. We cannot address these here, but our basic point is that such endless ‘methodological'' elaborations of the kind that our EFSA colleagues perform, only obscure the institutional changes needed to properly address the normative questions for policy-engaged science.Our respondents deny our claim concerning the singular form of science the EC is attempting to impose on GM policy and debate, by citing formal EFSA procedures for consultations with Member States and non-governmental organizations. However, they directly refute themselves by emphasizing that all Member State GM cultivation bans, permitted only on scientific grounds, have been deemed invalid by EFSA. They cannot have it both ways. We have addressed the importance of unacknowledged normativity in quality assessments of science for policy in Europe elsewhere [11]. However, it is the ‘one door, one key'' policy framework for science, deriving from the Single Market logic, which forces such singularity. While this might be legitimate policy, it is not scientific. It is political economy.Our respondents conclude by saying that the paramount concern of the EFSA GMO panel is the quality of its science. We share this concern. However, they avoid our main point that the EC-proposed legislative reform would only exacerbate their problem. Ignoring the normative dimensions of regulatory science and siphoning-off scientific debate and its normative issues to a select expert panel—which despite claiming independence faces an EU Ombudsman challenge [12] and European Parliament refusal to discharge their 2010 budget, because of continuing questions over conflicts of interests [13,14]—will not achieve quality science. What is required are effective institutional mechanisms and cultural norms that identify, and deliberatively address, otherwise unnoticed normative choices shaping risk science and its interpretive judgements. It is not the EFSA''s sole responsibility to achieve this, but it does need to recognize and press the point, against resistance, to develop better EU science and policy.  相似文献   

16.
17.
18.
Christian de Duve's decision to voluntarily pass away gives us pause to consider the value and meaning of death. Biologists have much to contribute to the discussion of dying with dignity.

Christian de Duve's voluntary passing away on 4 May 2013 could be seen as the momentous contribution of an eminent biologist and Nobel laureate to the discussion about 'last things'. In contrast to his fellow scientists Ludwig Boltzmann and Alan Turing, who had made a deliberate choice to end their lives in a state of depression and despair, de Duve "left with a smile and a good-bye", as his daughter told a newspaper.

What is the value and meaning of life? Is death inevitable? Should dying with dignity become an inalienable human right? Theologians, philosophers, doctors, politicians, sociologists and jurists have all offered their answers to these fundamental questions. The participation of biologists in the discussion is long overdue and should, in fact, dominate the discourse.

We can start from de Duve's premise—expressed in the subtitle of his book Vital Dust—that life is a cosmic imperative: a phenomenon that inevitably takes place anywhere in the universe where appropriate physicochemical conditions permit. Under such conditions, the second law of thermodynamics rules—prebiotic organic syntheses proceed, matter self-organizes into more complex structures and darwinian evolution begins, with its subsequent quasi-random walks towards increasing complexity. The actors of this cosmic drama are darwinian individuals—cells, bodies, groups and species—who strive to maintain their structural integrity and to survive as entities. By virtue of the same law, their components undergo successive losses of correlation, so that structures sustain irreparable damage and eventually break down. Because of this 'double edge' of the second law, life progresses in cycles of birth, maturation, ageing and rejuvenation.

Death is the inevitable link in this chain of events. 'The struggle for existence' is very much the struggle for individual survival, yet it is the number of offspring—the expression of darwinian fitness—that ultimately counts. Darwinian evolution is creative, but its master sculptor is death.

Humans are apparently the only species endowed with self-consciousness and thereby a strongly amplified urge to survive. However, self-consciousness has also made humans aware of the existence of death. The clash between the urge for survival and the awareness of death must have inevitably engendered religion, with its delusion of an existence after death, and it might have been one of the main causes of the emergence of culture. Culture divides human experience into two parts: the sacred and the profane. The sacred constitutes personal transcendence: the quest for meaning, the awe of mystery, creativity and aesthetic feelings, the capacity for boundless love and hate, the joy of playing, and peaks of ecstasy. The psychologist Jonathan Haidt observed in his book The Righteous Mind: Why Good People Are Divided by Politics and Religion that "The great trick that humans developed at some point in the last few hundred thousand years is the ability to circle around a tree, rock, ancestor, flag, book or god, and then treat that thing as sacred. People who worship the same idol can trust one another, work as a team and prevail over less cohesive groups." He considers sacredness crucial for understanding morality. At present, biology knows almost nothing about human transcendence. 
Our ignorance of the complexity of human life bestows on it both mystery and sacredness. The religious sources of Western culture, late Judaism and Christianity, adopted Plato's idea of the immortality of the human soul into their doctrines. The concept of immortality and eternity has continued to thrive in many secular versions and serves as a powerful force to motivate human creativity. Yet immortality is ruled out by thermodynamics, and the religious version of eternal life in continuous bliss constitutes a logical paradox—eternal pleasure would mean eternal recurrence of everything across infinite time, with no escape; Heaven turned Hell. It is not immortality but temporariness that gives human life its value and meaning.

There is no 'existence of death'. Dying exists, but death does not. Death equals nothingness—no object, no action, no thing. Death is out of reach of the human imagination; the intentionality of consciousness—its directedness towards objects—does not allow humans to grasp it. Death is no mystery, no issue at all—it does not concern us, as the philosopher Epicurus put it. The real human issue is dying and the terror of it. We might paraphrase Michel de Montaigne's claim that a mission of philosophy is to learn to die, and say that a mission of biology is to teach us to die. Biology might complement its research into apoptosis—programmed cell death—with efforts to discover or invent a 'mental apoptosis'. A hundred years ago, the microbiologist Ilya Mechnikov envisaged, in his book Essais Optimistes, that a long and gratifying personal life might eventually reach a natural state of satiation and evoke a specific instinct to withdraw, similar to the urge to sleep. Biochemistry could assist the process of dying by nullifying fear, pain and distress.

In these days of advanced healthcare and technologies that can artificially extend the human lifespan, dying with dignity should become the principal concern of all humanists, not only of scientists. It would therefore be commendable if Western culture could abandon the fallacy of immortality and eternity, and if Oriental and African cultures were welcomed into the discussion about the 'last things'. Dying with dignity will become the ultimate achievement of a dignified life.  相似文献

19.
Predicting food web structure in future climates is a pressing goal of ecology. These predictions may be impossible without a solid understanding of the factors that structure current food webs. The most fundamental aspect of food web structure—the relationship between the number of links and species—is still poorly understood. Some species interactions may be physically or physiologically ‘forbidden''—like consumption by non-consumer species—with possible consequences for food web structure. We show that accounting for these ‘forbidden interactions'' constrains the feasible link-species space, in tight agreement with empirical data. Rather than following one particular scaling relationship, food webs are distributed throughout this space according to shared biotic and abiotic features. Our study provides new insights into the long-standing question of which factors determine this fundamental aspect of food web structure.  相似文献   
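A minimal sketch of the counting argument (my own illustration under simple assumptions, not the paper's model): if only C of the S species in a web are consumers, every trophic link must originate from a consumer, so the feasible number of links is bounded by C rather than by S alone.

def link_bounds(n_species, n_consumers, allow_cannibalism=True):
    """Rough feasible range for the number of directed trophic links when only
    n_consumers of n_species can act as consumers (illustrative bounds only)."""
    max_links = n_consumers * (n_species if allow_cannibalism else n_species - 1)
    min_links = n_consumers  # assume every consumer eats at least one resource
    return min_links, max_links

for n_species, n_consumers in [(10, 6), (50, 35), (100, 80)]:
    low, high = link_bounds(n_species, n_consumers)
    print(f"S = {n_species:3d}, consumers = {n_consumers:3d}: "
          f"{low} to {high} feasible links (vs {n_species ** 2} with no constraint)")

Real webs add further forbidden interactions, such as size, habitat and phenology mismatches, which narrow the feasible region further; the sketch only shows why the naive S-squared ceiling is rarely the relevant upper bound.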

20.