Similar Articles
1.
Dr. Manners     
Good manners make a difference—in science and elsewhere. This includes our social media etiquette as researchers. Subject Categories: S&S: History & Philosophy of Science, Methods & Resources, S&S: Ethics

Elbows off the table, please. Don’t chew with your mouth open. Don’t blow your nose at the table. Don’t put your feet up on the chair or table. And please, do not yuck my yum. These are basic table manners that have come up at some of our lab meals, and I have often wondered if it was my job to teach my trainees social graces. A good fellow scientist and friend of mine once told me it was absolutely our place as mentors to teach our trainees not only how to do science well, but also how to be well‐mannered humans. While these Emily Post‐approved table manners might seem old‐fashioned (I’m guessing some readers will have to look up Emily Post), I strongly believe they still hold a place in modern society; being in good company never goes out of style.

Speaking of modern society: upon encouragement by several of my scientist friends, I joined Twitter in 2016. My motivation was mainly to hear about pre‐prints and publications, conference announcements and relevant news, science or otherwise. I also follow people who just make me laugh (I highly recommend @ConanOBrien or @dog_rates). I (re)tweet job openings, conference announcements, and interesting new data. Occasionally, I post photos from conferences, or random science‐related art. I also appreciate the sense of community that social media brings to the table. However, social media is a venue where I have also seen manners go to die. Rapidly.

It is really shocking to read what some people feel perfectly comfortable tweeting. While most of us can agree that foul language and highly offensive opinions are generally considered distasteful, there are other, subtler but nonetheless equally—if not more—cringe‐worthy offenses online, and I am fairly certain these people would never utter such words in real life. In the era of a pandemic, people tweeting about not being able to eat at their favorite restaurant or travel to some holiday destination because of lockdown show an egregious lack of self‐awareness. Sure it sucks to cancel a wedding due to COVID‐19, but do you need to moan to your followers—most of whom are likely total strangers—about it while other people have lost their jobs? If I had a nickel for every first‐world complaint I have seen on Twitter, I’d have retired a long time ago; although to be honest, I would do science for free. However, these examples pale in comparison with another type of tweeter: Reader, I submit to you, “the Humblebragger.”

From the MacMillan Buzzword dictionary (via Google): a humblebrag is “a statement in which you pretend to be modest but which you are really using as a way of telling people about your success or achievements.” I would further translate this definition to indicate that humblebraggers are starved for attention. After joining Twitter, I quickly found many people using social media to announce how “humble and honored” they are for receiving grant or prize X, Y, or Z. In general, these are junior faculty who have perhaps not acquired the self‐awareness more senior scientists have. Perhaps the most off‐putting posts I have seen are from people who post photos of their NIH application priority scores right after study section, or their Notices of Award (NOA). When did we ever, before social media, send little notes to each other—let alone to complete strangers—announcing our priority scores or NOAs? (Spoiler: NEVER)

Some of you reading this opinion piece might have humblebragged at one time or another, and might not understand why it is distasteful. Please let me explain.
For every person who gets a fundable score, there are dozens more people who do not, and they are sad (I speak from many years of experience). While said fundable‐score person might be someone we like—and I absolutely, positively wish them well—there are many more people who will feel lousy because they did not get funding from the same review round. When has anyone ever felt good about other people getting something that they, too, desire? I think as children, none of us liked the kid on the playground who ran around with the best new Toy of the Season. As adults, do we feel differently? Along these lines, I have never been a fan of “best poster/talk/abstract” prizes. Trainees should not be striving for these fleeting recognitions and should focus on doing the best science for Science’s sake; I really believe this competition process sets people up for life in a negative way—there, I’ve said it.

Can your friends and colleagues tweet about your honors? Sure, why not, and by all means please let your well‐wishers honor you, and do thank them and graciously congratulate your trainees or colleagues for helping you to get there. But to post things yourself? Please. Don’t be surprised if you have been muted by many of your followers.

It is notable that many of our most decorated scientists are not on Twitter, or at least never tweet about their accomplishments. I do not recall ever seeing a single Nobel laureate announce how humbled and honored they are about their prize. Of course, I might be wrong, but I am willing to bet the numbers are much lower than what I have observed for junior faculty. True humility will never be demonstrated by announcing your achievements to your social media followers, and I believe humblebragging reveals insecurity more than anything. I hope that many more of us can follow the lead of our top scientists in creativity, rigor, and social media politeness.

2.
Borrowed robes     
Should scientists indulge their fantasies by writing fiction? Subject Categories: Careers, Economics, Law & Politics, History & Philosophy of Science

Like a substantial fraction of the literate population, I have a collection of unpublished novels in the drawer. Six of them in fact. Some of them were composed in barely more than a week, and others I have been struggling to complete for over 10 years: so maybe it is more accurate to say five and a half. Anyhow, most of them are good to go, give or take a bit of editorial redlining. Or, as my helpful EMBO editor would say, the removal of thousands of unnecessary adverbs and dubiously positioned commas.

What do I write about and why? My style is not unique but rather particular. I write fiction in the style of non‐fiction. My subject matter is somewhere in the general realms of science fiction, alternate history and political drama. Putting these ingredients together, and taking account of my purported day job as a serious scientist, it is easy to see why my fictional work is potentially subversive—which is one reason why I have been rather reluctant thus far to let it out of the drawer. At the very least, I should take pains to conceal my identity, lest it corrupt perceptions of my scientific work. Even if I regularly tell my students not to believe everything they read, it would impose far too great a burden on them if they came to question my peer‐reviewed articles purely on the basis of untrue statements published in my name, spoken by jaded politicians, washed‐up academics or over‐credulous journalists. Even if they are imaginary. Real journalists are theoretically bound by strict rules of conduct. But imaginary ones can do whatever they like.

Today, I noticed a passage in one of these unpublished works that is clearly written in the style of a young William Shakespeare, dealing with a subject matter that fits neatly into one of his most famous plays. In fact, the illusion was such that I was sure I must have lifted the passage from the play in question and set about searching for the quote, which I then could and should cite. Yet, all Internet searches failed to find any match. The character in whose mouth I placed the words was depicted as being in a delirious state where the boundaries of fact and fiction in his life were already blurred; borrowed identities being one of the themes of the entire novel and arguably of my entire oeuvre. But am I guilty here of plagiarism or poetry, in adopting the borrowed identity of my national playwright?

In another work, I lay great emphasis on the damaging role of mitochondrial reactive oxygen species (ROS) as the cause of biological ageing. I have even grafted this explanation onto a thinly disguised version of one of my most valued colleagues. Although there is some support for such a hypothesis from real science, including some papers that I have myself co‐authored, it is also a dangerously broad generalization that leads easily into wrong turnings and misconstructions—let alone questionable policies and diet advice. But, by advancing this misleading and overly simplistic idea in print, have I potentially damaged not only my own reputation, but that of other scientists whom I respect? Even if the author’s identity remains hidden.

In one novel, I fantasize that nuclear weapons, whilst they do undoubtedly exist, have in fact been engineered by their inventors so as never actually to work, thus preventing their possible misuse by vainglorious or lunatic politicians unconcerned with the loss of millions of lives and planetary ruin. But if any insane national leader—of which there are unfortunately far too many—were actually to come to believe that my fiction in the style of non‐fiction were true, they might indeed risk the outbreak of nuclear war by starting a conventional one in order to secure their strategic goals.

Elsewhere, I vindicate one author of published claims that were manifestly based on falsified data, asserting him to have instead been the victim of a conspiracy launched to protect the family of an otherwise much respected American President. None of which is remotely true. Or at least there is no actual evidence supporting my ridiculous account.

I have great fun writing fiction of this kind. It is both liberating and relaxing to be able to ignore facts and the results of real experiments and just invent or distort them to suit an imaginary scenario. In an age when the media and real politicians have no qualms about propagating equally outrageous “alternative facts”, I can at least plead innocent by pointing out that my lies are deliberate and labelled as such, even if people might choose to believe them.

In a further twist, the blurb I have written to describe my latest work characterizes it as the “semi‐fictionalized” biography of a real person, who was, in fact, a distant relative of mine. But if it is semi‐fictionalized, which bits are true and which are made up? Maybe almost the whole thing is invented? Or maybe 99% of it is based on demonstrable facts? Maybe the subject himself concocted his own life story and somehow planted it in falsified documents and newspaper articles to give it an air of truth. Or maybe the assertion that the story is semi‐fictionalized is itself a fictional device, that is, a lie. Perhaps the central character never existed at all.

It is true (sic) that the most powerful fiction is grounded in fact—if something is plausible, it is all the more demanding of our attention. And it can point the way to truths that are not revealed by a simple catalogue of factual information, such as in a scientific report.

But I have already said too much: if any of my novels ever do find their way into print, and should you chance to read them, I will be instantly unmasked. So maybe I’ll have to slot in something else in place of my pseudo‐Shakespearean verse, mitochondrial ROS hypothesis, defunct weapons of mass destruction and manipulated data manipulation.

3.
4.
Public funding for basic research rests on a delicate balance between scientists, governments and the public. COVID could further shift this equilibrium towards translation and application.

Keeping a research laboratory well‐funded to pay for salaries, reagents, infrastructure, travel, and publications is surely a challenging task that can consume most of a PI’s time. The risk is that if the funding decreases, the laboratory will have to shrink, which means fewer publications and a decreased probability of getting new grants. This downward spiral is hard to reverse and can end up with token research activity and increased teaching instead. Some would see this as an unwelcome career change. Apart from the personal challenge for PIs to keep the income flowing, there is no guarantee that the overall funding wallet for research will continue to grow and no certainty that the covenant between the funder and the funded will remain unchanged. The COVID‐19 pandemic could in fact accelerate ongoing changes in the way public funding for research is justified and distributed.

There are three legs that support the delicate stool of competitive funding. The first is the scientists or, more precisely, the principal investigators. To get to that position, they had to be high achievers as they moved from primary degree to PhD to post doc to the Valhalla of their own laboratory. Along the way they showed themselves to be hard‐working, intelligent, competitive, innovative, lucky, and something between a team player and a team leader. The judgment to grant independence is largely based on publications—and given their track record of great papers to get there, most young PIs assume they will continue to get funding. This is not a narcissistic sense of entitlement; it is a logical conclusion of their career progression.

They will get started by recruiting a few PhD or higher degree students. Although this is about educating students, a PI of course hopes that their students generate the results needed for the next grant application. The minimum time for a PhD is about three years, which explains why many grants are structured around a 3‐ to 5‐year research project. The endpoints are rarely the finishing line for a group’s overall research program: hence the comments in reviews along the lines that “this paper raises more questions than it answers and more work will be required…” Work is carried on with a relay of grants edging asymptotically toward answering a question raised decades before. I recall a lecturer from my PhD days who said that he would not do an obvious experiment that would prove or disprove his hypothesis, because “If I did that experiment, it would be the end of my career”. Others are less brazen but still continue to search for the mirage of truth when they know deep in their hearts that they are in a barren desert.

The funding from competitive grants is rarely enough to feed the ever‐growing demand for more people and resources and to make provisions for a hiatus in grant income. Eventually, an additional income stream comes from industry attracted by the knowledge and skills in the laboratory. The PI signs a contract for a one‐year period and allocates some resources to deliver the answers required when due. Similarly, some other resources are shepherded to fulfill the demands of a private donor who wants rapid progress on a disease that afflicts a loved one. The research group is thus running a marathon on its core challenges, with occasional sprints to generate deliverables and satisfy funders who require rapid success—a juggling act that demands much intellectual flexibility.

State funding is the second leg, and governments have multiple reasons for supporting academic research, even if these are not always presented clearly. Idealistically, the public supports research to add to the pool of knowledge and to understand the world in which we live, but this is not how public funding started. The first universities began as theological seminaries that expanded to include the arts, law, philosophy and medicine. Naturalists and natural scientists found them a serene and welcoming place to ponder important questions, conduct experiments, and discuss with their colleagues. The influence of Wilhelm von Humboldt, who championed the concept of combining teaching and research at universities, was immense: both became more professional with codified ways of generating and sharing knowledge. Government funding was an inevitable consequence: initially to pay for the education of students, it expanded to provide for research activities.

While that rationale for supporting teaching and research remains, additional reasons for funding research emerged, mostly in the wake of World War II: the military, national economies, and medicine required new products and services enabled by knowledge. It also required new structures and bodies to control and distribute public resources: hence the establishment of research funding agencies to decide which projects deserve public money. The US National Science Foundation was founded in 1950 following the analysis of Vannevar Bush that the country’s economic well‐being and security relied on research. The NIH extramural program started tentatively in the late 1930s. The Deutsche Forschungsgemeinschaft (DFG) was established in 1951. The EU Framework Programmes started in 1984 with the explicit goal of strengthening the economy of the community. It was only in 2007 that the European Research Council (ERC) was established to support excellence in research rather than to look at practical benefits.

But the tide is inevitably moving toward linking state research funding with return on investment measured in jobs, economic growth, or improved health. Increasingly, the rationale for government investment is not just generation of knowledge or publications, but more products and services. As science is seen as the driver of innovation and future economic growth, the goal has been to invest 3% or more of a country’s GDP into research—although almost two‐thirds of this money comes from industry in advanced economies. Even nations without a strong industrial base strive to strengthen their economies by investing in brains. This message about government’s economic expectations is not lost on the funding agencies and softly trickles down to selection committees, analysts, and program officers. The idealistic image of the independent scientist pursuing knowledge for knowledge’s sake no longer fits into this bigger picture. Scientists are now cajoled into research collaborations and partnerships and, hooked on their laboratories’ funding habit, willingly promise that the outcome of the work will somehow have practical applications: “This work will help efforts to cure cancer”.

The third leg that influences research directions is the public, who pay for research through their taxes. Mostly, they do not get overly excited or concerned about those few percent of the national budget that go to laboratories. However, the COVID‐19 crisis could change that: now, the people in the white coats are expected to provide rapid solutions to pressing and complex problems. The scientists have so far performed extremely well: understanding SARS‐CoV‐2 pathology, genetics, and impact on the immune system along with diagnostic tests and vaccine candidates came in record time. Vaccine development moved with lightning speed from discovery of the crucial receptor proteins to mass‐producing jabs, employing many new technologies for the first time. 2020 has been a breath‐taking and successful year for scientists who delivered a great return on investment.

The public have also seen what a galvanized and cooperative scientific community across disciplines can achieve. “Aha,” they may say, “why don’t you now move on to tackle triple‐negative breast cancer, Alzheimer’s or Parkinson’s?” This is a fair and challenging question. And the increasing involvement of consumers and patients in research, at the behest of funding agencies, will further this expectation until the researchers respond. And respond they will, as they have always done to every hint of what might be needed to obtain funding.

The old order is changing. The days of independent academics getting funding for life to do what they like in the manner they choose will not survive the pressures from government to show a return on investment and from society to provide solutions to their problems. The danger is that early‐stage research—I did not say “basic” as it has joined “academic” as a pejorative term—will be suffocated. Governments appoint the heads of funding agencies, and it is not surprising if the appointees share the dominant philosophy of their employer. Peer‐review committees are being discouraged, subtly, from supporting early‐stage research. Elsewhere, the guidelines for decisions on grant applications give an increasing score for implementation, translation, IP generation, and so on. Those on the panels get the message and bring it back to their institutions, which slowly move away from working to understand what we are ignorant about to using our (partial) understanding to develop cures and drugs.

As in all areas, balance is needed. Those at the forefront of translating knowledge into outcomes for society have to remind the public as well as the government that the practical today is only possible because of the “impractical” research of yesterday. Industry is well aware of this and has become a strong champion for excellent early‐stage research to lay the groundwork for solving the next set of hard problems in the future. The ERC and its national counterparts have a special role to play in highlighting the benefits of supporting research with excellence as the sole criterion. In the meantime, scientists have to embrace the new task of developing solutions to societal problems without abandoning the hard slog of innovative research that opens up new understanding from which flows translation into practical applications.

5.
The response by the author. Subject Categories: S&S: Economics & Business, S&S: Ethics

I thank Michael Bronstein and Sophia Vinogradov for their interest and comments. I would like to respond to a few of their points.

First, I agree with the authors that empirical studies should be conducted to validate any approaches to prevent the spread of misinformation before their implementation. Nonetheless, I think that the ideas I have proposed may be worth further discussion and may inspire empirical studies to test their effectiveness.

Second, the authors warn that informing about the imperfections of scientific research may undermine trust in science and scientists, which could result in higher vulnerability to online health misinformation (Roozenbeek et al, 2020; Bronstein & Vinogradov, 2021). I believe that transparency about limitations and problems in research does not necessarily have to diminish trust in science and scientists. On the contrary, as Veit et al put it, “such honesty… is a prerequisite for maintaining a trusting relationship between medical institutions (and practitioners) and the public” (Veit et al, 2021). Importantly, to give an honest picture of scientific research, information about its limitations should be put in adequate context. In particular, the public should also be aware that “good science” is being done by many researchers; we do have solid evidence of the effectiveness of many medical interventions; and efforts are being made to address the problems related to the quality of research.

Third, Bronstein and Vinogradov suggest that false and dangerous information should be censored. I agree with the authors that “[c]ensorship can prevent individuals from being exposed to false and potentially dangerous ideas” (Bronstein & Vinogradov, 2021). I also recognize that some information is false beyond any doubt and its spread may be harmful. What I am concerned about are, among others, the challenges related to defining what is dangerous and false information and limiting censorship only to this kind of information. For example, on what sources should decisions to censor be based, and who should make such decisions? Anyone, whether an individual or an organization, with a responsibility to censor information will likely not only be prone to mistakes, but also to abuses of power to foster their interests. Do the benefits we want to achieve by censorship outweigh the potential risks?

Fourth, we need rigorous empirical studies examining the actual impact of medical misinformation. What exactly are the harms we try to protect against, and what is their scale? This information is necessary to choose proportionate and effective measures to reduce the harms. Bronstein and Vinogradov give an example of a harm which may be caused by misinformation—an increase in methanol poisoning in Iran. Yet, as noted by the authors, misinformation is not the sole factor in this case; there are also cultural and other contexts (Arasteh et al, 2020; Bronstein & Vinogradov, 2021). Importantly, the methods of studies exploring the effects of misinformation should be carefully elaborated, especially when study participants are asked to self‐report. A recent study suggests that some claims about the prevalence of dangerous behaviors, such as drinking bleach, which may have been caused by misinformation, are largely exaggerated due to the presence of problematic respondents in surveys (preprint: Litman et al, 2021).

Last but not least, I would like to call attention to the importance of how the veracity of information is determined in empirical studies on misinformation. For example, in a study by Roozenbeek et al, cited by Bronstein and Vinogradov, the World Health Organization (WHO) was used as a reliable source of information, which raises questions. For instance, Roozenbeek et al (2020) used the statement “the coronavirus was bioengineered in a military lab in Wuhan” as an example of false information, relying on the judgment of the WHO found on its “mythbusters” website (Roozenbeek et al, 2020). Yet, is there solid evidence to claim that this statement is false? At present, at least some scientists declare that we cannot rule out that the virus was genetically manipulated in a laboratory (Relman, 2020; Segreto & Deigin, 2020). Interestingly, the WHO also no longer excludes such a possibility and has launched an investigation into this issue (https://www.who.int/health‐topics/coronavirus/origins‐of‐the‐virus, https://www.who.int/emergencies/diseases/novel‐coronavirus‐2019/media‐resources/science‐in‐5/episode‐21‐‐‐covid‐19‐‐‐origins‐of‐the‐sars‐cov‐2‐virus); the information about the laboratory origin of the virus being false is no longer present on the WHO “mythbusters” website (https://www.who.int/emergencies/diseases/novel‐coronavirus‐2019/advice‐for‐public/myth‐busters). Against this backdrop, some results of the study by Roozenbeek et al (2020) seem misleading. In particular, the perception of the reliability of the statement about the bioengineered virus by study participants in Roozenbeek et al (2020) does not reflect susceptibility to misinformation, as intended by the researchers, but rather how the respondents perceive the reliability of uncertain information.

I hope that discussion and research on these and related issues will continue.

6.
When will COVID‐19 ever end? Various countries employ different strategies to address this; time will tell what the best response was. Subject Categories: S&S: Economics & Business, Microbiology, Virology & Host Pathogen Interaction, S&S: Ethics

Pete Seeger’s anti‐war song with its poignant refrain, stretching out the second “ever” to convey hopeless fatigue with the continuing loss of life, applies to the pandemic too. “Where have all the old folks gone?” may replace the loss of young men in Seeger’s song. But they keep going, and it is not happening on distant continents; it is happening to them, distanced, in places they called home. At the time of writing in early March, there are a few answers to Seeger’s question from around the world. There are the isolationists who say that maintaining a tight cordon around a COVID‐free zone is the way to get out of the pandemic. There are the optimists with undiluted faith in the vaccines who say it will be all over when everyone gets a jab. And there are the fatalists who say it will eventually end when herd immunity stops the pandemic after many people have died or fallen ill.

Living in Australia, where there are only sporadic cases of COVID, it is tempting to see the merits of the isolationist strategy. Only a small number of international travelers can enter the continent every week. Coming back from Europe in November, arrival at Brisbane airport was followed by police‐cordoned transfer to a pre‐allocated hotel—no choice, no balcony, no open windows—where we stayed (and paid) for a 14‐day confinement. On release, it was strange to find that life was close to normal: no masks and nearly no restrictions for public and private meetings. Sporting events and concerts do not have attendance restrictions. All that was different were easy‐to‐follow rules about social distancing in shops or on the streets, limited numbers of people in lifts, and a requirement to register when going to a restaurant or bar.

Since I settled back to COVID‐free life in Australia, the last incident in Queensland occurred a month ago when a cleaner at a quarantined hotel got infected. It was “treated” with an instant 3‐day “circuit‐breaker” lockdown for the whole community. Forensic contact tracing was easy, and large numbers of people lined up for testing. Seven days later, the outbreak was declared over. A police inquiry examined the case to see whether regulations needed to be changed. The same rapid and uncompromising lockdown protocols have been employed in Melbourne, Perth, and New Zealand whenever somebody in the community tested positive. There is also continuous monitoring of public wastewater for viral RNA to quickly identify any new outbreak. Small numbers of positive cases are treated with maximum restrictions until life can return to “normal”. The plan is to expand these state policies to achieve a COVID‐free Australia along with New Zealand and eventually the Pacific Islands.

The strict isolationist policy has its downsides. Only Australian citizens or permanent residents are allowed to enter the country. Families have been separated for months. Sudden closing of borders makes the country play a game of musical chairs: when the whistle is blown, you stay where you are. Freedoms that have been considered human rights have been side‐stepped. Government control is overt. Nonetheless, the dominant mood is that the good of the community trumps individual rights, which may come as a surprise in a liberal democratic society. People benefit from the quality of (local) life, and while there is an economic hiatus for tourism and the international student business, the overall economy will come out without too much damage. Interestingly, the most draconian State leaders get the highest ratings in the polls and elections. Clear, unwavering leadership is appreciated.

Given their geographical situation, Australia, New Zealand, and other islands have clear advantages in pursuing their successful isolationist policies. For most of the rest of the world though, the answer to “when will it ever end” points resolutely and confidently to vaccines. With amazing speed and fantastic efforts, scientists in university and industry laboratories all over the world developed these silver bullets, the kryptonite that will put the virus in its place. Most countries have now placed all their chips on the vaccine square of the roulette table.

However, there are some aspects to consider before COVID raises the white flag. It will take months to achieve herd immunity; a long time during which deaths, illness, and restrictions will continue. With different vaccines in production and use, it is likely that some will protect better against the virus than others. The duration of their protection is still unclear, and hence, the vaccine roll‐out could be interminable. More SARS‐CoV‐2 variants are on the rise, challenging the long‐term efficacy of the vaccine(s). The logistics and production demands are significant and will become even more acute as the vaccines go to developing countries. Anti‐vaxxers already see this as an opportunity to spread their mixture of lies, exaggerations, and selective information, which may make it more difficult to inoculate sufficient numbers in some communities. And yet, for most countries, there is no real alternative to breaking the vicious cycle of persistent local infections that are slowed by restrictions only to explode again when Christmas or business or the public mood demands a break. The optimists are realists in this scenario.

The third cohort are the fatalists. The Spanish flu ended after two years and 50 million deaths, and COVID will also run out of susceptible targets in due course. But herd immunity is a crude concept when the herd is people: our families, friends, and neighbors. Fatalism could translate into doing nothing and letting people die, which is not a great policy when facing disaster.

The alternative to doing nothing is to combine various strategies as Israel and the UK are doing: to adopt some of the isolationist approaches while vaccinating as many people as quickly as possible. The epidemiological data indeed show that restrictions on interactions do reduce the number of cases. Some countries, Ireland for example, have seen ten‐fold reductions in daily cases after tightening social restrictions, even before the first needle hit an arm. This shows that the real impact of vaccination will only be known when a sufficient percentage of the population has been immunized and the social restrictions are lifted. Australia, with its significant travel restrictions, is another successful example. In addition, contact tracing and testing are very helpful to contain outbreaks and create corona‐free zones that can be expanded in a controlled manner. Of course, there are local, political, and economic factors at play, but these should not block attempts to lower infection rates until sufficient numbers of vaccine doses become available.

So, the answer to the question “when will it ever end?” will require a combination of the isolationists and the optimists such that the fatalist solution does not prevail. It will be interesting to revisit this question in two years’ time to see what the correct answer turns out to be.

7.
8.
Policymakers should treat DIY‐biology laboratories as legitimate parts of the scientific enterprise and pay attention to the role of community norms. Subject Categories: Synthetic Biology & Biotechnology, S&S: Economics & Business, S&S: Ethics

DIY biology – very broadly construed as the practice of biological experiments outside of traditional research environments such as universities, research institutes or companies – has gained much prominence during the past decade. This increased attention has raised a number of questions about biosafety and biosecurity, both in the media and among policy makers who are concerned about safety and security lapses in “garage biology”. There are a number of challenges, though, when it comes to policies to regulate DIY biology. For a start, the term itself escapes easy definition: synonyms or related terms abound, including garage biotechnology, bio‐hacking, self‐modification/grinding, citizen science, bio‐tinkering, bio‐punk, even transhumanism. Some accounts even use ‘DIY‐bio’ interchangeably with synthetic biology, even though these terms refer to different emerging trends in biology. Some of these terms are more charged than others, but each carries its own connotations with regard to practice, norms and legality. As such, conversations about the risk, safety and regulation of DIY‐bio can be fraught.
Given the increasing policy discussions about DIY‐bio, it is crucial to consider prevailing practice thoughtfully and accurately. Key questions that researchers, policy makers and the public need to contemplate include the following: “How do different DIY‐bio spaces exist within regulatory frameworks, and enact cultures of (bio)safety?”, “How are these influenced by norms and governance structures?”, “If something is unregulated, must it follow that it is unsafe?” and “What about the reverse: does regulatory oversight necessarily lead to safer practice?”.

The DIY‐bio movement emerged from the convergence of two trends in science and technology. The first one is synthetic biology, which can broadly be defined as a conception of genetic engineering as systematic, modular and programmable. While engineering living organisms is obviously a complex endeavour, synthetic biology has sought to re‐frame it by treating genetic components as inherently modular pieces to be assembled, through rational design processes, into complex but predictable systems. This has prompted many “LEGO” metaphors and a widespread sense of democratisation, making genetic engineering accessible not only to trained geneticists, but also to anyone with an “engineering mindset”.

The second, much older, trend stems from hacker‐ and makerspaces, which are – usually not‐for‐profit – community organisations that enable groups of enthusiasts to share expensive or technically complex infrastructure, such as 3D printers or woodworking tools, for their projects. These provide a model of community‐led initiatives based on the sharing of infrastructure, equipment and knowledge. Underpinning these two trends is an economic aspect. Many of the tools of synthetic biology – notably DNA sequencing and synthesis – have seen a dramatic drop in cost, and much of the necessary physical apparatus is available for purchase, often second‐hand, through auction sites.

DIY‐bio labs are often set up under widely varying management schemes. While some present themselves as community outreach labs focusing on amateur users, others cater specifically to semi‐professional or professional members with advanced degrees in the biosciences. Other such spaces act as incubators for biotech startups with an explicitly entrepreneurial culture. Membership agreements, IP arrangements, fees, access and the types of project that are encouraged in each of these spaces can have a profound effect on the science being done.

9.
The Global Health 2035 report notes that the “grand convergence”—closure of the infectious, maternal, and child mortality gap between rich and poor countries—is dependent on research and development (R&D) of new drugs, vaccines, diagnostics, and other health tools. New tools alone are estimated to deliver a 2% decline each year in the under-5 mortality rate, maternal mortality ratio, and deaths from HIV/AIDS and tuberculosis (TB) [1].

However, this convergence (and the R&D underpinning it) is unlikely unless we first have an even more fundamental convergence of the parallel worlds of public health and innovation. At the moment, these worlds are often disconnected, with a largely historical gap between global health experts and innovation experts and major gaps to be bridged at both the intellectual and practical levels before we can truly reach a grand convergence in health.

10.
11.
12.
There is no perfect recipe to balance work and life in academic research. Everyone has to find their own optimal balance to derive fulfilment from life and work. Subject Categories: S&S: Careers & Training

A few years ago, a colleague came into my office, looking a little irate, and said, “I just interviewed a prospective student, and the first question was, ‘how is work‐life balance here?’”. Said colleague then explained how this question was one of his triggers. Actually, this sentiment isn't unusual among many PIs. And, yet, asking about one's expected workload is a fair question. While some applicants are actually coached to ask it at interviews, I think that many younger scientists have genuine concerns about whether or not they will have enough time away from the bench in order to have a life outside of work.

In a nutshell, I believe there is no one‐size‐fits‐all definition of work–life balance (WLB). I also think WLB takes different forms depending on one's career stage. As a new graduate student, I didn't exactly burn the midnight oil; it took me a couple of years to get my bench groove on, but once I did, I worked a lot and hard. I also worked on weekends and holidays, because I wanted answers to the questions I had, whether it was the outcome of a bacterial transformation or the result from a big animal experiment. As a post‐doc, I worked similarly hard, although I may have actually spent fewer hours at the bench because I just got more efficient and because I read a lot at home and on the 6 train. But I also knew that I had to do as much as I could to get a job in NYC where my husband was already a faculty member. The pressure was high, and the stress was intense. If you ask people who knew me at the time, they can confirm I was also about 30 pounds lighter than I am now (for what it's worth, I was far from emaciated!).

As an assistant professor, I still worked a lot at the bench in addition to training students and writing grant applications (it took me three‐plus years and many tears to get my first grant). As science started to progress, work got even busier, but in a good way. By no means did I necessarily work harder than those around me—in fact, I know I could have worked even more. And I’m not going to lie, there can be a lot of guilt associated with not working as much as your neighbor.

My example is only one of millions, and there is no general manual on how to handle WLB. Everyone has their own optimal balance they have to figure out. People with children or other dependents are particularly challenged; as someone without kids, I cannot even fathom how tough it must be. Even with some institutions providing child care, or for those lucky enough to have family take care of children, juggling home life with “lab life” can create exceptional levels of stress. What I have observed over the years is that trainees and colleagues with children become ridiculously efficient; they are truly remarkable. One of my most accomplished trainees had two children while she was a post‐doc, and she is a force to be reckoned with—although no longer in my laboratory, she still is a tour de force at work, no less with child number three just delivered! I think recruiters should view candidates with families as well—if not better—equipped to multi‐task and get the job done.

There are so many paths one can take in life, and there is no single, “correct” choice. If I had to define WLB, I would say it is whatever one needs to do in order to get the work done to one's satisfaction. For some people, putting in long days and nights might be what is needed. Does someone who puts in more hours necessarily do better than one who doesn't, or does a childless scientist produce more results than one with kids? Absolutely not. People also have different goals in life: some are literally “wedded” to their work, while others put much more emphasis on spending time with their families and seeing their children grow up. Importantly, these goals are not set in stone and can fluctuate throughout one's life. Someone recently said to me that there can be periods of intense vertical growth where “balance” is not called for, and other times in life where it is important and needed. I believe this sentiment eloquently sums up most of our lives.

Now that I'm a graying, privileged professor, I have started to prioritize other areas of life, in particular my health. I go running regularly (well, maybe jog very slowly), which takes a lot of time, but it is important for me to stay healthy. Pre‐pandemic, I made plans to visit more people in person, as life is too short not to see family and friends. In many ways, having acquired the skills to work more efficiently after many years in the laboratory and office, along with giving myself more time for my health, has freed up my mind to think of science differently, perhaps more creatively. It seems no matter how much I think I’m tipping the balance toward life, work still creeps in, and that’s perfectly OK. At the end of the day, my work is my life, gladly, so I no longer worry about how much I work, nor do I worry about how much time I spend away from it. If you, too, accomplish your goals and derive fulfillment from your work and your life, neither should you.

13.
2020 has been one of the craziest and strangest years we have lived through. Now that it’s over, it’s an opportunity to show gratitude for all the good things. Subject Categories: S&S: History & Philosophy of Science

I moved to New York City the year of the attacks on September 11, 2001, one of the bleakest moments in the history of the United States. I was also in New York City when Superstorm Sandy hit in 2012. Luckily, far fewer people died due to the storm, but it was incredibly disruptive to many scientists in the affected area—my laboratory had to move four times over a period of 6 years in the storm’s aftermath. These were awful, tragic events, but 2020 may go down in the record books as one of the most stressful and crazy years in modern times. Not to be outdone, 2021 has started terribly as well, with COVID‐19 still ravaging the world and an attack on the US Capitol, something I thought I’d never see in my lifetime. The unnecessary deaths and the damage to America’s “House of the People” were heartbreaking.

While these events were surely awful, nothing will be as crushing as the deaths of family members, close friends, and the children of friends; perhaps it is these experiences—and the death of a beloved dog—that prepared me for this year and made me grateful, maybe even more than usual, for what I have. But in the age of a pandemic, what am I particularly grateful for?

I'm ridiculously grateful to have a job, a roof over my head, and food security. The older I get and the more I see illness and injury affect my colleagues, family, and friends, the more I appreciate my good health. I am grateful for Zoom (no, I have no investment in Zoom)—not for the innumerable seminars or meetings I have attended, but for the happy hours that helped to keep me sane during the lockdown. Some of these were with my laboratory, others with friends or colleagues, sometimes spread over nine time zones. Speaking of which, I’m also grateful for getting a more powerful router for the home office.

I'm immeasurably grateful to be a scientist, as it allows me to satisfy my curiosity. While not a year‐round activity, it is immensely gratifying to be able to go to my laboratory, set up experiments, and watch the results coming in. Teaching and learning from students is an incredible privilege, and educating the next generation of scientists how to set up a PCR or run a protein gel can, as a well‐known lifestyle guru might say, spark serious joy. For this reason, I’m eternally grateful to my trainees; their endless curiosity, persistence, and energy make showing up to the laboratory a pleasure. My dear friend Randy Hampton recently told me he received a student evaluation thanking him for telling his virtually taught class that the opportunity to educate and to be educated is something worth being grateful for, a sentiment I passed on to a group of students I taught this past fall. I believe they, too, were grateful.

While all of the above things focus on my own life, there are much broader things. For one, I am so grateful to all of those around the globe who wear masks and keep their distance and thereby keep themselves and others safe. I am grateful for the election of an American president who proudly wears a mask—often quite stylishly with his trademark Ray‐Ban Aviators—and has made fighting the COVID‐19 pandemic his top priority. President Biden's decision to ramp up vaccine production and distribution, along with his federal mask mandate, will save lives, hopefully not just in the United States but worldwide.

This Gen‐X‐er is also especially grateful to the citizens of Generations Y and Z around the world for fighting for social justice; I am hopeful that the Black Lives Matter movement has gained traction and that we may finally see real change in how communities of color are treated. I have been heartened to see that in my adopted home state of New York, our local politicians ensure that communities that have been historically underserved are prioritized for COVID‐19 testing and vaccinations. Along these lines, I am also incredibly encouraged by the election of the first woman, who also happens to be of African and Asian heritage, to the office of vice president. Times are a‐changin'...

While it is difficult to choose one top thing to be grateful for, I would personally go for science. I am stoked that, faced with a global crisis, science came to the rescue, as it often has in the past. If I had to find a silver lining in COVID‐19—albeit for the darkest of clouds—I am grateful for all of our colleagues who, despite their usual arguing, quickly and effectively developed tests, provided advice, epidemiological data and a better understanding of the virus and its mode of infection, and ultimately developed therapies and vaccines to save lives. The same can be said for the biotech and pharmaceutical industry that, notwithstanding its often‐noted faults, has been instrumental in developing, testing and mass‐producing effective and safe vaccines in blistering, record time. Needless to say, I also have much gratitude for all of the scientists and regulators at the FDA and elsewhere who work hard to make life as we once knew it come back to us, hopefully in the near future.

Once again, thank you for everything, Science.

14.
We need more openness about age‐related infertility as it is a particular risk for many female scientists in academia who feel that they have to delay having children. Subject Categories: S&S: Careers & Training, Genetics, Gene Therapy & Genetic Disease

Balancing motherhood and a career in academic research is a formidable challenge, and there is substantial literature available on the many difficulties that scientists and mothers face (Kamerlin, 2016). Unsurprisingly, these challenges are very off‐putting for many female scientists, causing us to keep delaying motherhood while pursuing our hypercompetitive academic careers with arguments such as “I’ll wait until I have a faculty position”, “I’ll wait until I have tenure”, and “I’ll wait until I’m a full professor”. The problem is that we frequently end up postponing having children based on this logic until the choice is no longer ours: fertility unfortunately does decline rapidly over the age of 35, notwithstanding other potential causes of infertility.

This column is therefore not about the challenges of motherhood itself, but rather another situation frequently faced by women in academia, and one that is still not discussed openly: What if you want to have children and cannot, either because biology is not on your side, or because you waited too long, or both? My inspiration for writing this article is a combination of my own experiences battling infertility on my path to motherhood, and an excellent piece by Dr. Arghavan Salles for Time Magazine, outlining the difficulties she faced having spent her most fertile years training to be a surgeon, just to find out that it might be too late for motherhood when she came out the other side of her training (Salles, 2019). Unfortunately, as academic work models remain unsupportive of parenthood, despite significant improvements, this is not a problem faced only by physicians, but also one faced by both myself and many other women I have spoken to.

I want to start by sharing my own story, because it is a bit more unusual. I have a very rare (~ 1 in 125,000 women (Laitinen et al, 2011)) congenital endocrine disorder, Kallmann syndrome (KS) (Boehm et al, 2015); as a result, my body is unable to produce its own sex hormones and I don’t have a natural cycle. It doesn’t take much background in science to realize that this has a major negative impact on my fertility—individuals with KS can typically only conceive with the help of fertility treatment. It took me a long time to get a correct diagnosis, but even before that, in my twenties, I was told that it was extremely unlikely I would ever have biological children. I didn’t realize back then that KS in women is a very treatable form of infertility, and that fertility treatments are progressing forward in leaps and bounds. As I was also adamant that I didn’t even want to be a mother but would rather focus on my career, this was not something that caused me too much consternation at the time.

In parallel, like Dr. Salles, I spent my most fertile years chasing the academic career path and kept finding—in my mind—good reasons to postpone even trying for a child. There is really never a good time to have a baby in academia (I tell any of my junior colleagues who ask to not plan their families around “if only X…” because there will always be a new X). Like many, I naïvely believed that in vitro fertilization (IVF) would be the magic bullet that could solve all my fertility problems. I accordingly thought it safe to pursue first a faculty position, then tenure, then a full professorship, as I would have to have fertility treatment anyhow. In my late twenties, my doctors suggested that I consider fertility preservation, for example, through egg freezing. At the time, however, the technology was both extravagantly expensive and unreliable, and I brushed it off as unnecessary: when the time came, I would just do IVF. In reality, the IVF success rates for women in their mid‐to‐late 30s are typically only ~ 40% per egg retrieval, and this only gets worse with age, something many women are not aware of when planning parenthood and careers. It is also an extremely strenuous process both physically and emotionally, as one is exposed to massive doses of hormones, multiple daily injections, tremendous financial cost, and general worries about whether it will work or not.

Then reality hit. What I believed would be an easy journey turned out to be extremely challenging, and took almost three years, seven rounds of treatment, and two late pregnancy losses. While the driving factor for my infertility remained my endocrine disorder, my age played an increasing role in my problems responding to treatment, and it was very nearly too late for me, despite my being younger than 40. Despite these challenges, we are among the lucky ones; there are many others who are not.

I am generally a very open person, and as I started the IVF process, I talked freely about it with female colleagues. Because I was open about my own predicament, colleagues from across the world, who had never mentioned it to me before, opened up and told me their own children were conceived through IVF. However, many colleagues also shared stories of trying, and how they are for various—not infrequently age‐related—reasons unable to have children, even after fertility treatment. These experiences are far more common in academia than you could ever imagine, but because of the societal taboos that still surround infertility and pregnancy and infant loss, they are not discussed openly. This means that many academic women are unprepared for the challenges surrounding infertility, particularly with advanced age. In addition, the silence surrounding this issue means that women lose out on what would otherwise have been a natural support network when facing a challenging situation, which can make you feel tremendously alone.

There is no right or wrong in family planning decisions, and having children young, delaying having children or deciding not to have children at all are all equally valid choices. However, we do need more openness about the challenges of infertility, and we need to bring this discussion out of the shadows. My goal with this essay is to contribute to breaking the silence, so that academics of both genders can make informed choices, whether about the timing of when to build a family or about exploring fertility preservation—which in itself is not a guaranteed insurance policy—as relevant to their personal choices. Ultimately, we need an academic system that is supportive of all forms of family choices, and one that creates an environment compatible with parenthood, so that academics do not feel pressured to delay parenthood until it might be too late.

15.
Lazy hazy days     
Scientists have warned about the looming climate crisis for decades, but the world has been slow to act. Are we in danger of making a similar mistake by neglecting the dangers of other climatic catastrophes? Subject Categories: Biotechnology & Synthetic Biology, Economics, Law & Politics, Evolution & Ecology

On one of my trips to Antarctica, I was enjoined to refer not to “global warming” or even to “climate change.” The former implies a uniform and rather benign process, while the latter suggests just a transition from one state to another and seems to minimize all the attendant risks to survival. Neither of these terms adequately or accurately describes what is happening to our planet’s climate system as a result of greenhouse gas emissions, not to mention the effects of urbanization, intensive agriculture, deforestation, and other consequences of human population growth. Instead, I was encouraged to use the term “climate disruption,” which embraces the multiplicity of events taking place, some of them still hard to model, that are altering the planetary ecosystem in dramatic ways.

With climate disruption now an urgent and undeniable reality, policymakers are finally waking up to the threats that scientists have been warning about for decades. They have accepted the need for action (UNFCCC Conference of the Parties, 2021), even if the commitment remains patchy or lukewarm. But implementing all the necessary changes is a massive undertaking, and it is debatable whether we have enough time left. The fault lies mostly with those who resisted change for so long, hoping the problem would just go away, or denying that it was happening at all. We face a crisis today because the changes needed simply cannot be executed overnight. It will take time to put the infrastructure in place, whether for renewable electricity, for the switch to carbon‐neutral fuels, for sustainable agriculture and construction, or for net carbon capture. If the problems worsen, requiring even more drastic action, at least we have a direction of travel, though we would be starting off from an even more precarious situation.

However, given the time that it has taken—and will still take—to turn around the juggernaut of our industrial society, are we in danger of making the same mistakes all over again by ignoring the risks of the very opposite process happening in our lifetime? The causes of historic climate cooling are still debated, and though we have fairly convincing evidence regarding specific, sudden events, there is no firm consensus on what lies behind longer‐term and possibly cyclical changes in the climate.

The two best‐documented examples are the catastrophe of 536–540 AD and the effects of the Laki Haze of 1783–1784. The cause of the 536–540 event is still debated, but it is widely believed to have been one or more massive volcanic eruptions that created a global atmospheric dust‐cloud, resulting in a temperature drop of up to 2°C with concomitant famines and societal crises (Toohey et al, 2016; Helama et al, 2018). The Laki Haze was caused by the massive outpouring of sulfurous fumes from the Laki eruption in Iceland. Its effects on the climate, though just as immediate, were less straightforward. The emissions, combined with other meteorological anomalies, disrupted the jet stream and produced other localized effects. In northwest Europe, the first half of the summer of 1783 was exceptionally hot, but the following winters were dramatically cold, and the mean temperature across much of the northern hemisphere is estimated to have dropped by around 1.3°C for 2–3 years (Thordarson & Self, 2003).
In Iceland itself, as well as in much of western and northern Europe, the effects were even more devastating, with widespread crop failures and deaths of both livestock and humans, exacerbated by the toxicity of the volcanic gases (Schmidt et al, 2011).

Other volcanic events in recorded history have produced major climatic disturbances, such as the 1815 Tambora eruption in Indonesia, which made 1816 “the year without a summer,” marked by temperature anomalies of up to 4°C (Fasullo et al, 2017) and again precipitating worldwide famine. The 1883 Krakatoa eruption produced similar disruption, albeit of a lesser magnitude, though its effects are proposed to have been much longer lasting (Gleckler et al, 2006).

Much more scientifically challenging is the so‐called Little Ice Age, lasting approximately from 1250 to 1700 AD, when global temperatures were significantly lower than in the preceding and following centuries. It was marked by particularly frigid and prolonged winters in the northern hemisphere. There is no strong consensus as to its cause(s) or even its exact dates, nor even that it can be considered a global‐scale event rather than a summation of several localized phenomena. A volcanic eruption in 1257, with effects similar to those of Tambora, has been suggested as an initiating event. Disruption of the oceanic circulation system resulting from prolonged anomalies in solar activity is another possible explanation (Lapointe & Bradley, 2021). Nevertheless, and despite an average global cooling of < 1°C, its effects were profound: on global agriculture, settlement, migration and trade, on pandemics such as the Black Death, and perhaps even on wars and revolutions.

Once or twice in the past century, we have faced devastating wars, tsunamis and pandemics that seemed to come out of the blue and exacted massive tolls on humanity. From the most recent of each of these, there is a growing realization that, although such events are rare and poorly predictable, we can greatly limit the damage if we prepare properly. By devoting a small proportion of our resources over time, we can build the infrastructure and the mechanisms to cope when these disasters do eventually strike.

Without abandoning any of the emergency measures to combat anthropogenic warming, I believe that the risk of climate cooling needs to be addressed in the same way. The infrastructure for burning fossil fuels needs to be mothballed, not destroyed. Carbon capture needs to be implemented in a way that is rapidly reversible, should that ever be needed. Alternative transportation routes need to be planned and built in case existing ones become impassable due to ice or flooding. Properly insulated buildings are not just a way of saving energy: they are essential for survival in extreme cold, as those of us who live in the Arctic countries are well aware—but many other regions also experience severe winters, for which we should all prepare.

Biotechnology needs to be set to work to devise ways of mitigating the effects of sudden climatic events such as the Laki Haze or the Tambora and Krakatoa eruptions, as well as longer‐term phenomena like the Little Ice Age. Could bacteria be used, for example, to detoxify and dissipate a sulfuric aerosol such as the one generated by the Laki eruption? Methane is generally regarded as a major contributor to the greenhouse effect, but it is short‐lived in the atmosphere.
So, could methanogens somehow be harnessed to bring about a temporary rise in global temperatures to offset the short‐term cooling effects of a volcanic dust‐cloud?

We already have a global seed bank in Svalbard (Asdal & Guarino, 2018): it might easily be expanded to include a greater representation of cold‐resistant varieties of the world’s crop plants that might one day be vital to human survival. The experience of the Laki Haze also indicates a need for varieties capable of withstanding acid rain and other volcanic pollutants, as well as drought and water saturation. An equivalent (embryo) bank for strains of agriculturally important animals potentially threatened by abrupt cooling of the climate or catastrophic toxification of the atmosphere is also worth considering.

It has generally been thought impractical and pointless to prepare for even rarer events, such as cometary impacts, but events that have occurred repeatedly in recorded history and over an even longer time scale (Helama et al, 2021) are likely to happen again. We should and can be better prepared. This is not to say that we should pay attention to every conspiracy theorist or crank, or to paid advocates for energy corporations that seek short‐term profits at the expense of long‐term survival, but the dangers of climate disruption of all kinds are too great to ignore. Instead of our current, rather one‐dimensional thinking, we need an “all‐risks” approach to the subject: learning from the past and the present to prepare for the future.

16.
Academia has fostered an unhealthy relationship with alcohol that has an undeniable impact on the health and behaviour of students and staff. Subject Categories: S&S: History & Philosophy of Science, Chemical Biology, S&S: Ethics

University life has a lot to offer. And, for better or worse, much of it goes hand in hand with a bottle. Believe it or not, I was a bit of a teetotaler in my undergraduate days but quickly made up for it in graduate school, where each celebration included inebriation. Indeed, my initial tour of the laboratory I eventually worked in included a refreshing visit to the grad club. Orientation week ended with a marathon beer blitz at a nightclub. The semester’s first invited seminar speaker was welcomed with the sounds of loose change, ice buckets and the clickity‐clack of organic microbrews being opened. Our inaugural genome evolution journal club was such a success that we vowed to spill even more red wine onto our notebooks the following week. In hindsight, I should have realized at this early stage in my studies that I was fostering an unhealthy and unsustainable relationship between biology and booze. Unfortunately, my post‐graduate education in alcohol didn’t stop there.

Like many keen students, I arrived at my first scientific conference with a belly full of nerves and a fistful of drink tickets, which I quickly put to good use at the poster session. The successful completion of my PhD proposal assessment was met with pats on the back as I was swiftly marched off to a local pub with no chance of escape. My first peer‐reviewed paper literally arrived with a pop, as Champagne was generously poured into plastic cups for the entire laboratory group. My failures, too, were greeted with a liberal dose of ethanol. “Sorry you came up short on that scholarship application, Smitty. It’s nothing a little weapons‐grade Chianti won’t cure.” “That experiment failed again! Come on, let me buy you a lunchtime martini to make up for it.” Soon I learnt that every academic event, achievement or ailment, no matter how big or small, could be appropriately paired with beer, wine or spirits. Missing from the menu were two crucial ingredients for any burgeoning researcher: moderation and mindfulness.

But it was the older vintages that really inspired me – the legendary drinking escapades of my scientific mentors, advisors and idols. The tale of professor so‐and‐so who, at that epic meeting in 1993, polished off an entire magnum of rosé at dinner and then went on to deliver one of the greatest keynote lectures on record at 9 am the following morning. The celebrated chaired researcher who kept the single malt next to the pipette tips for quick and easy access. The grizzled evolutionary ecologist who never went into the field without half a dozen cans of high‐end smoked oysters and two hip flasks, which didn’t contain water. And so, when someone in the know told me how the most famous geneticist on campus wrote that monumental Nature paper (the one I’d read ten times!) while locked in his office for twelve hours with a six‐pack, I bought into the romance hook, line and sinker. The result: I’ve been nursing a recurring headache for nearly two decades, and I’m still waiting on that Nature paper. Most importantly, I now realize the dangers of romanticizing the bottle, especially for individuals in mentorship positions.

Like my idols before me, I’ve accrued a cask full of well‐oaked academic drinking stories, except that they haven’t aged well. There is that heroic evening of intense scotch‐fueled scientific discussion, which led to my forfeiting two front teeth to the concrete sidewalk (my mother still thinks it was a squash accident).
Or that time I commemorated the end of a great conference in Barcelona by throwing up on the front window of a café while the most prominent minds in my field sipped aperitifs inside (thank god this was before Twitter). Even more romantic: me buying a bottle of Côtes de Nuits Burgundy at Calgary airport en route to a job interview, discreetly opening the bottle in‐flight because economy‐class wine sucks, and then being met by airport security upon landing. Let’s just say I didn’t get the job. To some, these anecdotes might seem light‐hearted or silly, but they are actually rather sad, and they underscore the seriousness of substance abuse. Many readers will have their own complicated experiences with alcohol in academia and, I believe, will agree that it is high time we asked ourselves: are we training our graduate students to be great thinkers or great drinkers? And this question does not even touch on the equally, if not more, serious issue of excessive drinking among undergraduate students.

As I sit at my desk writing this, I think to myself: is it normal that within a two‐minute walk of my university office there are three different places on campus where I can have a beer before lunch, not including the minifridge behind my desk? Is it normal that in my department the first thing we do after a student defends their thesis is go to the grad club, where they can have any alcoholic drink of their choosing, for free, from the goblet of knowledge, which is kept on a pedestal behind the bar? Is it normal that, before the COVID pandemic, when I was visiting a prominent university for an invited talk, one of the professors I met with offered me a glass of expensive Japanese gin at 11 in the morning? (And, yes, I accepted the drink.)

Of course, if you don’t want to drink you can just say no. But we are learning more and more how institutional cultures – “the deeply embedded patterns of organisational behaviour and the shared values, assumptions, beliefs or ideologies that members have about their organisation or its work” (Peterson & Spencer, 1991) – can have powerful effects on behaviour. Excessive alcohol consumption is undeniably an aspect of collegial culture, one that is having a major impact on the health and behaviour of students and staff, and one that I have been an active participant in for far too long. I’ll be turning forty in a few months, and I have to face the fact that I’ve already drunk enough alcohol for two lifetimes, and not one drop of it has made me a better scientist, teacher or mentor. The question remains: how much more juice can I squeeze into this forty‐year‐old pickled lemon? Well, cheers to that.

17.
Giving undergraduate students the opportunity to take part in a research project pays off for both the students and the lab. Subject Categories: S&S: Careers & Training

Participating hands‐on in an academic research project can be a fascinating and valuable educational experience for undergraduate students. It not only teaches them additional, transferable skills—such as written and oral communication, critical thinking, or information literacy—but can also be an important factor in deciding on an academic research career. Even though the level of involvement in research projects varies between labs and institutions, students still gain valuable experience, much more than they gain from standard laboratory courses, which usually involve only pre‐tested experiments with expected outcomes. In turn, the research labs that host undergraduate students also benefit, through overall research progress and the experience of mentoring.

18.
The COVID‐19 pandemic has rekindled debates about gain‐of‐function experiments. This is an opportunity to clearly define safety risks and appropriate countermeasures. Subject Categories: Economics, Law & Politics, Microbiology, Virology & Host Pathogen Interaction, Science Policy & Publishing

So‐called “gain of function” (GOF) research has recently been debated in the context of viral research on coronaviruses and whether such experiments are too risky to undertake. However, the meaning of “gain of function” in a science policy context has changed over time. The term was originally coined to describe two controversial research projects on H5N1 avian influenza virus and was later applied to specific experiments on coronaviruses. Subsequent policies and discussions have attempted to define GOF in different ways, but no single definition has been widely accepted by the community. The fuzzy and imprecise nature of the term has led to misunderstandings and has hampered discussions on how to properly assess the benefits of such experiments and the appropriate biosafety measures.

20.
Increasing diversity in academia is not just a matter of fairness but also improves science. It is up to individual scientists and research organisations to support underrepresented minorities. Subject Categories: S&S: Economics & Business, S&S: Ethics

There is a large body of research on diversity in the workplace—in both academic and non‐academic settings—that highlights the benefits of an inclusive workplace. This is perhaps most clearly visible in industry, where the rewards are immediate: a study by McKinsey showed that companies with a more diverse workforce perform better financially, by substantial margins, compared with their respective national industry medians (https://www.mckinsey.com/business-functions/organization/our-insights/why-diversity-matters#).

It is easy to measure success in financial terms, but since there is no comparably simple metric for research performance (https://sfdora.org), it is harder to quantify the rewards of workplace diversity in academic research. However, research shows that diversity does provide research groups with a competitive edge in other quantifiable terms, such as citation counts (Powell, 2018), and the scientific process obviously benefits from diversity in perspectives. Bringing together individuals with different ways of thinking allows us to solve more challenging scientific problems and leads to better decision‐making and leadership. Conversely, there is a direct cost to bias in recruitment, tenure, and promotion processes. When such processes are affected by bias—whether explicit or implicit—the whole organization loses by not tapping into the wider range of skills and assets that could otherwise have been brought to the workplace. Promoting diversity in academia is therefore not simply an issue of equity, which in itself is a sufficient reason, but also a very practical question: how do we create a better work environment for our organization, both in terms of collegiality and in terms of performance?

Notwithstanding the fact that there is now substantial awareness of the importance of diversity, and that significant work is being invested in addressing the issue, the statistics do not look good. Despite substantial improvement at the undergraduate and graduate student levels in the EU, women remain significantly underrepresented in research at the more senior levels (Directorate‐General for Research and Innovation European Commission, 2019). In addition, the lion’s share of diversity efforts, at least in Sweden where I work, is frequently focused on gender. Gender is clearly important, but other diversity axes with problematic biases deserve the same attention. As one example, while statistics on ethnic diversity are readily available for US universities (Davis & Fry, 2019), this information is much harder to find in Europe. And while there is increased awareness of diversity at the student level, this does not necessarily translate into initiatives to support faculty diversity (Aragon & Hoskins, 2017). There are examples of progress and concrete actions on these fronts, including the Athena Swan Charter (https://www.ecu.ac.uk/equality-charters/athena-swan/), the more recent Race Equality Charter (https://www.advance-he.ac.uk/charters/race-equality-charter), and the EMBO journals, which regularly analyze their decisions for gender bias. However, progress remains frustratingly slow. The World Economic Forum’s 2018 Global Gender Gap Report suggested that, at the current rate of progress, the global gender gap will take 108 years to close (https://www.weforum.org/reports/the-global-gender-gap-report-2018). I worry that it may take even longer for other diversity axes, since these receive far less attention.

It is clear that there is a problem, but what can we do to address it?
Perhaps one of the most important contributions we can make as faculty is to address the implicit (subconscious) biases we all carry. Implicit bias manifests itself in many ways: gender, ethnicity, socioeconomic status, or disability, to mention just a few. These are the easily identifiable ones, but implicit bias also extends to, for example, professional titles (seniority level), institutional affiliation and even nationality. These partialities affect our decision‐making—for example, in recruitment, tenure, promotion, and evaluation committees—and how we interact with each other.

The “Matilda effect” (Rossiter, 1993), which refers to the diminishment of the value of contributions made by female researchers, is now well recognized, and it is not unique to gender (Ross, 2014). When we diminish the contributions of our colleagues, it affects how we evaluate them in competitive scenarios, and whether we put them forward for grants, prizes, recruitment, tenure, and so on. In the hypercompetitive environment that is academia today, even small and subtle slights can have a tremendously amplified negative impact on success, given a reward system that appears to favor “fighters” over “collaborators”. Consciously working to correct for this, stepping back to rethink our first assessment, is imperative.

Women and other minorities also frequently suffer from imposter syndrome, which can undermine self‐confidence and make members of these groups less likely to self‐promote in the pursuit of prestigious funding, awards, and competitive career opportunities. This effect is further amplified in a globally mobile academic workforce: researchers who move to new cultural contexts (whether locally or internationally) can be unaware of the unwritten rules that guide a department’s work environment and decision‐making processes. Here, mentoring can play a tremendous role in reducing barriers to success; however, for such mentorship to be productive, mentors need to be aware of the specific challenges faced by minorities in academia, as well as of their own implicit biases (Hinton et al, 2020).

Other areas where we, as individual academics, can contribute to a more diverse work environment include meeting culture and decision‐making. Making sure that decision‐making bodies have a diverse composition, so that a variety of views is represented, is an important first step. One complication to bear in mind, though, is that implicit biases are not limited to individuals outside the group: a new UN report shows that almost 90% of people—both men and women—carry biases against women, which in turn contributes to the glass‐ceiling effect (United Nations Development Program, 2020). However, equally important is inclusiveness in the meeting culture itself. Studies from the business world show that even high‐powered women often struggle to speak up and be heard at meetings, and the onus for solving this is often passed back onto the women themselves. The same holds true for other minority groups, and in an academic setting it extends to seminars and conferences. The next time you plan a meeting, think about the setting and layout. Who gets to talk? Why? Is the distribution of time given to participants representative of the composition of the meeting? If not, why not?

As a final example of personal action we can take: language matters (Ås, 1978). Even without malicious intent, there can be a big gap between what we say and mean, and how it comes across to the recipient.
Some examples of this are given by Harrison and Tanner (2018), who discuss microaggressions in an academic setting and the underlying messages one might unintentionally be sending. Microaggressions, built up over a long period of time and coming from different people, can significantly impact someone’s confidence and sense of self‐worth. Taking a step back and thinking about why we choose the language we do is a vital part of creating an inclusive work environment.

Addressing diversity challenges in academia is a highly complex, multi‐faceted topic to which it is impossible to do justice in a short opinion piece. This is, therefore, just a small set of examples: by paying attention to our own biases and thinking carefully about how we interact with those around us, both in the language we use and in the working environments we create, we can personally contribute to improving both the recruitment and the retention of a diverse academic workforce. In addition, it is crucial to break the culture of silence and to speak up when we see others committing micro‐ (or not so micro) aggressions or otherwise contributing to a hostile environment. There is a substantial amount of work to be done, at both the individual and organizational levels, before we have a truly inclusive academic environment. However, this is not a reason not to start, and if each of us contributes, we can accelerate the change toward a better and more equitable future, while all of us benefit from the advantages of diversity.

