Similar Articles
1.
Abstract

Local anesthetics are used clinically for peripheral nerve blocks, epidural anesthesia, spinal anesthesia and pain management; high concentrations, continuous application and long exposure times can cause neurotoxicity. The mechanism of neurotoxicity caused by local anesthetics is unclear. Neurite outgrowth and apoptosis can be used to evaluate neurotoxic effects. Mouse neuroblastoma cells were induced to differentiate and generate neurites in the presence of local anesthetics. The culture medium was removed and replaced with serum-free medium plus 20 μl of combined epidermal growth factor and fibroblast growth factor containing tetracaine, prilocaine, lidocaine or procaine at concentrations of 1, 10, 25, or 100 μM prior to neurite measurement. Cell viability, iNOS, eNOS and apoptosis were evaluated. Local anesthetics produced toxic effects through neurite inhibition at low concentrations and through apoptosis at high concentrations. There was an inverse relation between local anesthetic concentration and cell viability. Comparison of the different local anesthetics showed toxicity, as assessed by cell viability and apoptotic potency, in the following order: tetracaine > prilocaine > lidocaine > procaine. Procaine was the least neurotoxic local anesthetic and, because it is short-acting, may be preferred for pain prevention during short procedures.

2.
Anesthetics impact the resolution of inflammation

Background

Local and volatile anesthetics are widely used for surgery. It is not known whether anesthetics impinge on the orchestrated events in spontaneous resolution of acute inflammation. Here we investigated whether a commonly used local anesthetic (lidocaine) and a widely used inhaled anesthetic (isoflurane) impact the active process of resolution of inflammation.

Methods and Findings

Using murine peritonitis induced by zymosan and a systems approach, we report that lidocaine delayed and blocked key events in resolution of inflammation. Lidocaine inhibited both PMN apoptosis and macrophage uptake of apoptotic PMN, events that contributed to impaired PMN removal from exudates and thereby delayed the onset of resolution of acute inflammation and return to homeostasis. Lidocaine did not alter the levels of specific lipid mediators, including pro-inflammatory leukotriene B4, prostaglandin E2 and anti-inflammatory lipoxin A4, in the cell-free peritoneal lavages. Addition of a lipoxin A4 stable analog partially rescued lidocaine-delayed resolution of inflammation. To identify protein components underlying lidocaine's actions in resolution, systematic proteomics was carried out using nanospray liquid chromatography-tandem mass spectrometry. Lidocaine selectively up-regulated pro-inflammatory proteins including S100A8/9 and CRAMP/LL-37, and down-regulated anti-inflammatory and some pro-resolution peptides and proteins including IL-4, IL-13, TGF-β and Galectin-1. In contrast, the volatile anesthetic isoflurane promoted resolution in this system, diminishing the amplitude of PMN infiltration and shortening the resolution interval (Ri) by ∼50%. In addition, isoflurane down-regulated a panel of pro-inflammatory chemokines and cytokines, as well as proteins known to be active in cell migration and chemotaxis (e.g., CRAMP and cofilin-1). The distinct impact of lidocaine and isoflurane on selective molecules may underlie their opposite actions in resolution of inflammation, namely that lidocaine delayed the onset of resolution (Tmax), while isoflurane shortened the resolution interval (Ri).
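For readers unfamiliar with these resolution indices, the following is a minimal sketch (not taken from the paper; the time points and counts are invented) of how Tmax, T50, and the resolution interval Ri are conventionally computed from an exudate PMN time course, with Ri taken as the time from maximal PMN numbers (Tmax) to the point at which counts fall to half of that maximum (T50).

```python
import numpy as np

def resolution_indices(t_hours, pmn_counts):
    """Compute resolution indices from a PMN exudate time course.

    Psi_max: maximal PMN count; Tmax: time of Psi_max;
    T50: first time after Tmax when counts drop to <= Psi_max / 2
    (linearly interpolated); Ri = T50 - Tmax.
    """
    t = np.asarray(t_hours, dtype=float)
    y = np.asarray(pmn_counts, dtype=float)
    i_max = int(np.argmax(y))
    psi_max, t_max = y[i_max], t[i_max]
    half = psi_max / 2.0
    for i in range(i_max + 1, len(y)):
        if y[i] <= half:
            # linear interpolation between the two bracketing time points
            t50 = t[i - 1] + (half - y[i - 1]) * (t[i] - t[i - 1]) / (y[i] - y[i - 1])
            return {"Psi_max": psi_max, "Tmax": t_max, "T50": t50, "Ri": t50 - t_max}
    return {"Psi_max": psi_max, "Tmax": t_max, "T50": None, "Ri": None}

# Hypothetical zymosan peritonitis time course (hours vs. PMN x 10^6)
print(resolution_indices([0, 4, 12, 24, 48], [0.1, 8.0, 10.0, 6.0, 3.0]))
```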

Conclusions

Taken together, both local and volatile anesthetics impact endogenous resolution program(s), altering specific resolution indices and selective cellular/molecular components in inflammation-resolution. Isoflurane enhances, whereas lidocaine impairs, timely resolution of acute inflammation.

3.
This Formal Comment provides clarifications on the authors’ recent estimates of global bacterial diversity and the current status of the field, and responds to a Formal Comment from John Wiens regarding their prior work.

We welcome Wiens' efforts to estimate global animal-associated bacterial richness and thank him for highlighting points of confusion and potential caveats in our previous work on the topic [1]. We find Wiens' ideas worthy of consideration, as most of them represent a step in the right direction, and we encourage lively scientific discourse for the advancement of knowledge. Time will ultimately reveal which estimates, and underlying assumptions, came closest to the true bacterial richness; we are excited and confident that this will happen in the near future thanks to rapidly increasing sequencing capabilities. Here, we provide some clarifications on our work, its relation to Wiens' estimates, and the current status of the field.

First, Wiens states that we excluded animal-associated bacterial species in our global estimates. However, thousands of animal-associated samples were included in our analysis, and this was clearly stated in our main text (second paragraph on page 3).

Second, Wiens' commentary focuses on "S1 Text" of our paper [1], which was rather peripheral, and, hence, in the Supporting information. S1 Text [1] critically evaluated the rationale underlying previous estimates of global bacterial operational taxonomic unit (OTU) richness by Larsen and colleagues [2], but the results of S1 Text [1] did not in any way flow into the analyses presented in our main article. Indeed, our estimates of global bacterial (and archaeal) richness, discussed in our main article, are based on 7 alternative well-established estimation methods founded on concrete statistical models, each developed specifically for richness estimates from multiple survey data. We applied these methods to >34,000 samples from >490 studies including, but not restricted to, animal microbiomes, to arrive at our global estimates, independently of the discussion in S1 Text [1].

Third, Wiens' commentary can yield the impression that we proposed that there are only 40,100 animal-associated bacterial OTUs and that Cephalotes in particular only have 40 associated bacterial OTUs. However, these numbers, mentioned in our S1 Text [1], were not meant to be taken as proposed point estimates for animal-associated OTU richness, and we believe that this was clear from our text. Instead, these numbers were meant as examples to demonstrate how strongly the estimates of animal-associated bacterial richness by Larsen and colleagues [2] would decrease simply by (a) using better justified mathematical formulas, i.e., with the same input data as used by Larsen and colleagues [2] but founded on an actual statistical model; (b) accounting for even minor overlaps in the OTUs associated with different animal genera; and/or (c) using alternative animal diversity estimates published by others [3], rather than those proposed by Larsen and colleagues [2]. Specifically, regarding (b), Larsen and colleagues [2] (pages 233 and 259) performed pairwise host species comparisons within various insect genera (for example, within the Cephalotes) to estimate on average how many bacterial OTUs were unique to each host species, then multiplied that estimate with their estimated number of animal species to determine the global animal-associated bacterial richness. However, since their pairwise host species comparisons were restricted to congeneric species, their estimated number of unique OTUs per host species does not account for potential overlaps between different host genera.
Indeed, even if an OTU is only found "in one" Cephalotes species, it might not be truly unique to that host species if it is also present in members of other host genera. To clarify, we did not claim that all animal genera can share bacterial OTUs, but instead considered the implications of some average microbiome overlap (some animal genera might share no bacteria, and other genera might share a lot). The average microbiome overlap of 0.1% (when clustering bacterial 16S sequences into OTUs at 97% similarity) between animal genera used in our illustrative example in S1 Text [1] is of course speculative, but it is not unreasonable (see our next point). A zero overlap (implicitly assumed by Larsen and colleagues [2]) is almost certainly wrong. One goal of our S1 Text [1] was to point out the dramatic effects of such overlaps on animal-associated bacterial richness estimates using "basic" mathematical arguments.

Fourth, Wiens' commentary could yield the impression that existing data are able to tell us with sufficient certainty when a bacterial OTU is "unique" to a specific animal taxon. However, so far, the microbiomes of only a minuscule fraction of animal species have been surveyed. One can thus certainly not exclude the possibility that many bacterial OTUs currently thought to be "unique" to a certain animal taxon are eventually also found in other (potentially distantly related) animal taxa, for example, due to similar host diets and/or environmental conditions [4–7]. As a case in point, many bacteria in herbivorous fish guts were found to be closely related to bacteria in mammals [8], and Song and colleagues [6] report that bat microbiomes closely resemble those of birds. The gut microbiome of caterpillars consists mostly of dietary and environmental bacteria and is not species specific [4]. Even in animal taxa with characteristic microbiota, there is a documented overlap across host species and genera. For example, there are a small number of bacteria consistently and specifically associated with bees, but these are found across bee genera at the level of 99.5%-similar 16S rRNA OTUs [5]. To further illustrate that an average microbiome overlap between animal taxa at least as large as the one considered in our S1 Text (0.1%) [1] is not unreasonable, we analyzed 16S rRNA sequences from the Earth Microbiome Project [6,9] and measured the overlap of microbiota originating from individuals of different animal taxa. We found that, on average, 2 individuals from different host classes (e.g., 1 mammalian and 1 avian sample) share 1.26% of their OTUs (16S clustered at 100% similarity), and 2 individuals from different host genera belonging to the same class (e.g., 2 mammalian samples) share 2.84% of their OTUs (methods in S1 Text of this response). A coarser OTU threshold (e.g., 97% similarity, considered in our original paper [1]) would further increase these average overlaps. While less is known about insect microbiomes, there is currently little reason to expect a drastically different picture there, and, as explained in our S1 Text [1], even a small average microbiome overlap of 0.1% between host genera would strongly limit total bacterial richness estimates.
The fact that the accumulation curve of detected bacterial OTUs over sampled insect species does not yet strongly level off says little about where the accumulation curve would asymptotically converge; rigorous statistical methods, such as the ones used for our global estimates [1], would be needed to estimate this asymptote.

Lastly, we stress that while the present conversation (including previous estimates by Louca and colleagues [1], Larsen and colleagues [2], Locey and colleagues [10], Wiens' commentary, and this response) focuses on 16S rRNA OTUs, it may well be that at finer phylogenetic resolutions, e.g., at bacterial strain level, host specificity and bacterial richness are substantially higher. In particular, future whole-genome sequencing surveys may well reveal the existence of far more genomic clusters and ecotypes than 16S-based OTUs.
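As an illustrative aside on the overlap argument above (this is a toy model with invented numbers, not the estimators used in either paper): if every host genus carried U OTUs drawn from a shared pool, an average pairwise overlap fraction f would imply a pool of roughly U / f OTUs, so the union across hosts saturates near U / f no matter how many host genera are sampled. A short sketch of that arithmetic:

```python
# Toy model (illustrative only): expected total richness when each of G host
# genera carries U OTUs drawn at random from a shared pool of size P = U / f,
# where f is the average pairwise overlap fraction between two genera.
def expected_union(G, U, f):
    P = U / f                                 # pool size implied by the overlap fraction
    return P * (1.0 - (1.0 - U / P) ** G)     # expected number of distinct OTUs overall

U = 1000                                      # hypothetical OTUs per host genus
naive = lambda G: G * U                       # "zero overlap" extrapolation
for f in (0.001, 0.01):                       # 0.1% and 1% average overlap
    for G in (10_000, 1_000_000):
        print(f"f={f:.3f}, G={G:>9,}: union ~ {expected_union(G, U, f):,.0f} "
              f"(naive {naive(G):,})")
```

Even at 0.1% overlap, the expected union levels off near one million OTUs in this toy setting, while the zero-overlap extrapolation keeps growing linearly with the number of host genera.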

4.
Beryne Odeny discusses strategies to improve equity in health care and health research.

WHO defines health equity as “the absence of unfair and avoidable or remediable differences in health among population groups defined socially, economically, demographically, or geographically or by other means of stratification” [1]. Yet, contrary to this fundamental aspiration and the international mandate on universal health coverage (UHC), almost 50% of the world’s population does not receive needed health services, and progress toward health equity remains elusive [2].

5.
Engaging, hands-on design experiences are key for formal and informal Science, Technology, Engineering, and Mathematics (STEM) education. Robotic and video game design challenges have been particularly effective in stimulating student interest, but equivalent experiences for the life sciences are not as developed. Here we present the concept of a "biotic game design project" to motivate student learning at the interface of life sciences and device engineering (as part of a cornerstone bioengineering devices course). We provide all course material and also present efforts in adapting the project's complexity to serve other time frames, age groups, learning focuses, and budgets. Students self-reported that they found the biotic game project fun and motivating, resulting in increased effort. Hence this type of design project could generate excitement and educational impact similar to robotics and video games.
This Education article is part of the Education Series.
Hands-on robotic and video game design projects and competitions are widespread and have proven particularly effective at sparking interest and teaching K–12 and college students in mechatronics, computer science, and Science, Technology, Engineering, and Mathematics (STEM). Furthermore, these projects foster teamwork, self-learning, design, and presentation skills [1,2]. Such playful and interactive media that provide fun, creative, open-ended learning experiences for all ages are arguably underdeveloped in the life sciences. Most hands-on education occurs in traditionally structured laboratory courses with a few exceptions like the International Genetically Engineered Machine (iGEM) competition [3]. Furthermore, there is an increasing need to bring the traditional engineering and life science disciplines together. In order to fill these gaps, we present the concept of a biotic game design project to foster student development in a broad set of engineering and life science skills in an integrated manner (Fig 1). Though we primarily discuss our specific implementation as a cornerstone project-based class [4], alternative implementations are possible to motivate a variety of learning goals under various constraints such as student age and cost (see supplements for all course material).

Fig 1. We developed a bioengineering devices course that employed biotic game design as a motivating project scheme. A: Biotic games enable human players to interact with cells. B: Conceptual overview of a biotic game setup. C: Students built and played biotic games. Image credits: A, C64 joystick by Speed-link, 1984 (http://commons.wikimedia.org/wiki/File:Joystick_black_red_petri_01.svg); Euglena viridis by C. G. Ehrenberg, 1838; C, photo by N. J. C.

Biotic games are games that operate on biological processes (Fig 1) [5]. The biotic games we present here involve the single-celled phototactic eukaryote Euglena gracilis. These microscopic organisms are housed in a microfluidic chip and are displayed in a magnified image on a video screen. Players interact with these cells by modulating the intensity and direction of light perpendicular to the microfluidic chip via a joystick, thereby influencing the cells' phototactic motion. Software tracks the position of individual euglena with respect to virtual objects overlaid on the screen, creating myriad opportunities for creative game design and play. For example, in a simple game, points might be scored when a cell hits a virtual box (see S1 Video).

The biotic game design project we developed was intended to motivate all the broad categories of theoretical and hands-on skills for creating any integrated instrument intended to house and to interface with biological materials, i.e., optics, electronics, sensing, actuation, microfluidics, fabrication, image processing, programming, and creative design. We termed the synthesis of these skills "biotics" in analogy to mechatronics. Our intended audience for this course was bioengineering undergraduate students at Stanford University who already had some programming experience but little to no experience in device design, fabrication, and integration. We also incorporated bioethics into the curriculum to emphasize the social responsibility of every engineer and demonstrate the potential for the biotic game project to motivate multiple fields.
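To make the game mechanics concrete, here is a minimal sketch of the basic loop such a game relies on: read the joystick, set the steering LEDs, track euglena positions in the camera image, and score a point when a tracked cell overlaps a virtual target. It is written in Python purely for readability (the course itself used MATLAB), and every function name, signal range, and threshold below is invented for illustration rather than taken from the course software.

```python
import random  # stand-in for camera, tracker, and hardware I/O in this sketch

def read_joystick():            # hypothetical: direction in {-1, 0, +1} per axis
    return random.choice([-1, 0, 1]), random.choice([-1, 0, 1])

def set_leds(dx, dy):           # hypothetical: drive the steering LEDs via a microcontroller
    pass

def track_cells(frame=None):    # hypothetical: cell centroids from a thresholded camera frame
    return [(random.uniform(0, 640), random.uniform(0, 480)) for _ in range(20)]

def game_round(n_frames=600, target=(320, 240, 40)):
    """One short round: steer cells with light, score hits on a virtual box."""
    tx, ty, r = target
    score = 0
    for _ in range(n_frames):
        dx, dy = read_joystick()
        set_leds(dx, dy)                      # light from one side biases phototaxis
        for (x, y) in track_cells():
            if abs(x - tx) < r and abs(y - ty) < r:
                score += 1                    # cell is inside the virtual target box
    return score

print("score:", game_round())
```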
The course we taught spanned ten weeks, divided roughly equally into a set of technical units and the biotic game project, with two 4-hour lab sections and a single 1.5-hour lecture each week. For details and all course documents, please refer to the supplemental material.

The technical section of the course focused on developing hands-on skills and theoretical understanding related to devices in a conventionally structured laboratory setting. We introduced students to fundamental electronics concepts and components such as voltage, current, resistors, capacitors, LEDs, filters, operational amplifiers, motors, microcontrollers (Arduino Uno), and breadboards. We followed a similar traditional approach in introducing optics, presenting the thin lens equation, ray tracing, conjugate planes, basic optical system design, and Köhler illumination. We covered additional topics in less detail: MATLAB programming, particle tracking, computer-aided design (CAD), fabrication, and microfluidics (learning objectives are provided at the beginning of each unit in the supplemental material).

During the project-based section, students built their own biotic games. We left specific choices of implementation, architecture, and design to the students to encourage creativity and exploration but required students to revisit the technical skills they learned in the first section by integrating some specific requirements into their games (Fig 2). Students built a bright-field microscope with Köhler illumination and projected their images onto a webcam (optics). Glass and polydimethylsiloxane (PDMS) components comprised the microfluidic chip (microfluidics) and housed the euglena (microbiology). The holder for the chip and euglena-steering LEDs was designed in Solidworks (CAD) and 3-D printed (fabrication). The students constructed a polycarbonate housing for the game controller using a band saw and drill press (fabrication). The students revisited electronic breadboarding and soldering when creating the electronic circuits to communicate between the LEDs, joystick, microcontroller, and computer. Finally, they used MATLAB to program the microcontroller, implement real-time image recognition, and provide the user interface for the game experience (image processing and programming).

Fig 2. Biotic game-based courses encourage students to integrate a versatile set of relevant STEM topics. Image credit: photo by N. J. C. (credit for the work and artifacts to the students who took the course).

We challenged students to consider the ethical implications [6] of manipulating life in a game context before building their projects. Although phototaxis experiments with euglena are commonplace in education, and have hitherto raised no ethical concerns, the equivalent manipulation in the form of a game warrants its own ethical analysis, as provided by Harvey et al. [7]. The students read and discussed this paper, then wrote a 200-word essay on whether they found it permissible or not to make and play biotic games. Students had the choice to switch to a nongame project of equivalent complexity.
All students found euglena-based games permissible, pointing out that "they are nonsentient and cannot feel pain," followed by a diverse range of considerations such as "the euglena are still free to act as they please," "there needs to be an educational intention," or "a pet…provides a way…to work on responsibility and caring." Based on further student-initiated discussions that spontaneously emerged throughout the course, we believe that biotic games are effective in providing a stimulating, student-relevant, in-class context for bioethics.

We motivated the game design project to the students as having educational potential at two levels, i.e., learning by building and learning by playing; we lectured them about the needs and opportunities for new approaches to K–12 STEM education [8,9]. The students were then asked to consider building a game that had educational value for the player. Educational value has many aspects, which was reflected in students' statements regarding their intended educational outcomes for their games on their course project websites. These ranged from more factual learning objectives ("learn about…" "…inner working," "…structural detail," "…light responses," "…euglena behavior") to objectives affecting attitude ("spark interest," "generate fascination," "encourage to explore," "respect for life"). We also had a game designer give a guest lecture to the students. For pragmatic reasons, we requested the students keep games very simple (ideally having just a single in-game objective) and cap game duration at one minute. Before, during, and after their projects, students received feedback from instructors as well as from their peers on their games from technical and user perspectives.

The games that the students ultimately produced were diverse and creative (Fig 2 and S1 Video), including single and multiplayer scenarios, games where euglena hit virtual targets, and games where euglena pushed virtual objects. Games that involved pushing objects across the screen (relying on collective motion of many organisms) were generally more consistent at correlating player strategy to scored points than those that involved hitting target objects. The quality and robustness of these integrated projects naturally varied, and individual groups placed more or less emphasis on different aspects based on personal preferences and learning goals (for example, fabricating a more elaborate housing for the game controller versus programming more complex game mechanics). A key point was that the students did not rely on prepared materials or platforms to develop their games but rather had to design, build, and test their game setups from scratch, thereby revisiting and deepening the primary learning goals of the course with some freedom to follow their own learning aspirations (Fig 2). The final project deliverables were a two-minute project demonstration video, a website describing the elements of the project, and a game that all instructors and students played on the final day (Fig 1B), which led to lots of laughter as well as in-depth discussions on technical details.

Many students self-reported that they enjoyed the project and that it led to increased motivation and effort during the course. In response to the question "Do you think you were motivated to try harder or had more fun (and thereby learned more) during your final project because you were making a game (rather than just building a technical instrument, for example)?
If so—please give some examples:" 15 out of 17 students responded "Very/definitely" on a five-point scale. As examples, students listed: "wanted to make the best game," "want to make it clever and cool in the eyes of classmates who are play testing," "motivated during final push," "willing to put in more time," "was fun"/"made it fun," "create a game that actually works," "reinforced what was learned before," and "provided room for creativity." These comments reflect the overall excitement we saw for the biotic game project. While these responses do not constitute rigorous proof regarding course effectiveness (which will require more detailed and controlled assessments in the future), we consider this course a success based on our teaching experiences.

In total, 45 students have now taken this class over the past three years, with 18 students in our most recent offering. We used each year to iterate and improve our implementation. For example, we changed the organism and stimulus from Paramecia galvanotaxis [5] to Euglena phototaxis, which gave more reliable long-term responses. We also added a simple microfluidics unit enabling students to build more robust organism housing chambers. We changed the microscope structure from LEGO to Thorlabs parts (essentially trading the emphasis on 3-D structural design, flexibility, and cost for a more in-depth focus on high-end optics and their alignment). Finally, we explicitly asked the students to design and fabricate a housing for the game controller to better incorporate fabrication skills like using a band saw and tapping screw threads. So far, we have primarily used MATLAB as the programming component given its widespread use in education and research and the available Arduino interface. However, MATLAB is not particularly well suited to support game design and is also not free, making translation into lower-resource settings challenging. For the future, we are considering moving to smartphone-based control (such as Android), given that these mobile environments are very flexible, increasingly used for control of scientific and consumer instruments, and becoming more widespread in education. We also see the opportunity to better emphasize and teach the approach of iterative design, for example, by letting students prototype and test their game ideas on paper [10] and in simple programming environments like Scratch [11] first, before attempting the full implementation. It would likely also be very rewarding for the students to be able to take their project home at the end of the course. In summary, many different course design decisions can be made based on specific intended educational outcomes. Not all of these can be fit into one course at the same time, and clear decisions should be made on how to balance covering a breadth of topics with depth on a selected few.

As a preliminary test of another age range, time frame, and budget, we taught a greatly simplified 3-hour workshop where high school and middle school students assembled a low-cost microscope and microfluidics chamber, attached it to a smartphone, and stimulated euglena using a preprogrammed Arduino-based controller (see supplements). We had no game interface implemented yet on the phone, but the students could observe the euglena responses to the light stimuli. All students were able to complete the project and take their microscopes home.
Over half of our undergraduate student teams also volunteered to present their game projects for this outreach event, which took place multiple weeks after their class had ended. This separate experience suggests that the biotic game concept holds promise for reaching a wider age range in a shortened timespan and at a greatly reduced budget, and that completed games can be used in outreach activities. We are currently developing a kit modeled after this unit.

In conclusion, we consider biotic games promising in motivating integrated, hands-on learning at the interface of life science and engineering. Our efforts so far indicate that this concept could be adapted to various age groups and learning goals with the potential for wider future impacts on education. We draw upon the analogy to robotics, where microcontrollers went from being initially unfathomable as an educational tool to the vision of Papert and collaborators and their use of programmable robotics with children [12], eventually leading to multiple commercial realizations (LEGO Mindstorms, Arduino, etc.), a large public following, and a major role in education both in the classroom and through competitions such as FIRST Robotics [1]. We also see additional potential for integrating more creative and artistic aspects into STEM, i.e., leading to generalized Science, Technology, Engineering, Arts, and Mathematics (STEAM) disciplines [13]. We invite others to join us in these endeavors—all instructional materials are available in the appendix for further adaptations and educational use.

6.
7.
With the increasing appreciation for the crucial roles that microbial symbionts play in the development and fitness of plant and animal hosts, there has been a recent push to interpret evolution through the lens of the "hologenome"—the collective genomic content of a host and its microbiome. But how symbionts evolve and, particularly, whether they undergo natural selection to benefit hosts are complex issues that are associated with several misconceptions about evolutionary processes in host-associated microbial communities. Microorganisms can have intimate, ancient, and/or mutualistic associations with hosts without having undergone natural selection to benefit hosts. Likewise, observing host-specific microbial community composition or greater community similarity among more closely related hosts does not imply that symbionts have coevolved with hosts, let alone that they have evolved for the benefit of the host. Although selection at the level of the symbiotic community, or hologenome, occurs in some cases, it should not be accepted as the null hypothesis for explaining features of host–symbiont associations.

The ubiquity and importance of microorganisms in the lives of plants and animals are ever more apparent, and increasingly investigated by biologists. Suddenly, we have the aspiration and tools to open up a new, complicated world, and we must confront the realization that almost everything about larger organisms has been shaped by their history of evolving from, then with, microorganisms [1]. This development represents a dramatic shift in perspective—arguably a revolution—in modern biology.

Do we need to revamp basic tenets of evolutionary theory to understand how hosts evolve with associated microorganisms? Some scientists have suggested that we do [2], and the recently introduced terms "holobiont" and "hologenome" encapsulate what has been described as an "emerging postmodern synthesis" [3]. Holobiont was initially used to refer to a host and a single inherited symbiont [4] but was later extended to a host and its community of associated microorganisms, specifically for the case of corals [5]. The idea of the holobiont is that a host and its associated microorganisms must be considered as an integrated unit in order to understand many biological and ecological features.

The later introduction of the term hologenome [2,6,7] sought to describe a holobiont by its genetic composition. The term has been used in different ways by different authors, but in most contexts a hologenome is considered a genetic unit that represents the combined genomes of a host and its associated microorganisms [8]. This non-controversial definition of hologenome is linked to the idea that this entity has a role in evolution. For example, Gordon et al. [1,9] state, "The genome of a holobiont, termed the hologenome, is the sum of the genomes of all constituents, all of which can evolve within that context." That last phrase is sufficiently general that it can be interpreted in any number of ways. Like physical conditions, associated organisms can be considered as part of the environment and thus can be sources of natural selection, affecting evolution in each lineage.

But a more sweeping and problematic proposal is given by originators of the term, which is that "the holobiont with its hologenome should be considered as the unit of natural selection in evolution" [2,7] or by others, that "an organism's genetics and fitness are inclusive of its microbiome" [3,4].
The implication is that differential success of holobionts influences evolution of participating organisms, such that their observed features cannot be fully understood without considering selection at the holobiont level. Another formulation of this concept is the proposal that the evolution of host–microbe systems is "most easily understood by equating a gene in the nuclear genome to a microbe in the microbiome" [8]. Under this view, interactions between host and microbial genotypes should be considered as genetic epistasis (interactions among alleles at different loci in a genome) rather than as interactions between the host's genotype and its environment.

While biologists would agree that microorganisms have important roles in host evolution, this statement is a far cry from the claim that they are fused with hosts to form the primary units of selection, or that hosts and microorganisms provide different portions of a unified genome. Broadly, the hologenome concept contends, first, that participating lineages within a holobiont affect each other's evolution, and, second, that the holobiont is a primary unit of selection. Our aim in this essay is to clarify what kinds of evidence are needed for each of these claims and to argue that neither should be assumed without evidence. We point out that some observations that superficially appear to support the concept of the hologenome have spawned confusion about real biological issues (Box 1).

Box 1. Misconceptions Related to the Hologenome Concept

Misconception #1: Similarities in microbiomes between related host species result from codiversification. Reality: Related species tend to be similar in most traits. Because microbiome composition is a trait that involves living organisms, it is tempting to assume that these similarities reflect a shared evolutionary history of host and symbionts. This has been shown to be the case for some symbioses (e.g., ancient maternally inherited endosymbionts in insects). But for many interactions (e.g., gut microbiota), related hosts may have similar effects on community assembly without any history of codiversification between the host and individual microbial species (Fig 1B).

Fig 1. Alternative evolutionary processes can result in related host species harboring similar symbiont communities. Left panel: Individual symbiont lineages retain fidelity to evolving host lineages, through co-inheritance or other mechanisms, with some gain and loss of symbiont lineages over evolutionary time. Right panel: As host lineages evolve, they shift their selectivity of environmental microbes, which are not evolving in response and which may not even have been present during host diversification. In both cases, measures of community divergence will likely be smaller for more closely related hosts, but they reflect processes with very different implications for hologenome evolution. Image credit: Nancy Moran and Kim Hammond, University of Texas at Austin.

Misconception #2: Parallel phylogenies of host and symbiont, or intimacy of host and symbiont associations, reflect coevolution. Reality: Coevolution is defined by a history of reciprocal selection between parties. While coevolution can generate parallel phylogenies or intimate associations, these can also result from many other mechanisms.

Misconception #3: Highly intimate associations of host and symbionts, involving exchange of cellular metabolites and specific patterns of colonization, result from a history of selection favoring mutualistic traits. Reality: The adaptive basis of a specific trait is difficult to infer even when the trait involves a single lineage, and it is even more daunting when multiple lineages contribute. But complexity or intimacy of an interaction does not always imply a long history of coevolution, nor does it imply that the nature of the interaction involves mutual benefit.

Misconception #4: The essential roles that microbial species/communities play in host development are adaptations resulting from selection on the symbionts to contribute to holobiont function. Reality: Hosts may adapt to the reliable presence of symbionts in the same way that they adapt to abiotic components of the environment, and little or no selection on symbiont populations need be involved.

Misconception #5: Because of the extreme importance of symbionts in essential functions of their hosts, the integrated holobiont represents the primary unit of selection. Reality: The strength of natural selection at different levels of biological organization is a central issue in evolutionary biology and the focus of much empirical and theoretical research. But insofar as there is a primary unit of selection common to diverse biological systems, it is unlikely to be at the level of the holobiont. In particular cases, evolutionary interests of host and symbionts can be sufficiently aligned such that the predominant effect of natural selection on genetic variation in each party is to increase the reproductive success of the holobiont.
But in most host–symbiont relationships, contrasting modes of genetic transmission will decouple selection pressures.

8.
Pentameric ligand-gated ion channels are targets of general anesthetics. Although the search for discrete anesthetic binding sites has achieved some degree of success, little is known regarding how anesthetics work after the events of binding. Using the crystal structures of the bacterial Gloeobacter violaceus pentameric ligand-gated ion channel (GLIC), which is sensitive to a variety of general anesthetics, we performed multiple molecular dynamics simulations in the presence and absence of the general anesthetic isoflurane. Isoflurane bound to several locations within GLIC, including the transmembrane pocket identified crystallographically, the extracellular (EC) domain, and the interface of the EC and transmembrane domains. Isoflurane also entered the channel after the pore was dehydrated in one of the simulations. Isoflurane disrupted the quaternary structure of GLIC, as evidenced in a striking association between the binding and breakage of intersubunit salt bridges in the EC domain. The pore-lining helix experienced lateral and inward radial tilting motion that contributed to the channel closure. Isoflurane binding introduced strong anticorrelated motions between different subunits of GLIC. The demonstrated structural and dynamical modulations by isoflurane aid in the understanding of the underlying mechanism of anesthetic inhibition of GLIC and possibly other homologous pentameric ligand-gated ion channels.
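As a rough illustration of the kind of trajectory analysis described above (this is not the authors' actual workflow; the file names, residue selections, and 4 Å criterion are invented for the example), intersubunit salt-bridge integrity over an MD trajectory could be monitored with a short script such as the following, sketched here with the MDAnalysis library.

```python
import numpy as np
import MDAnalysis as mda

# Hypothetical input files and residue choices; adjust to the actual system.
u = mda.Universe("glic_isoflurane.psf", "glic_isoflurane.dcd")

# Example salt bridge between the EC domains of two adjacent subunits A and B:
# acidic carboxylate oxygens on chain A, basic amine nitrogens on chain B.
acid = u.select_atoms("segid A and resname ASP GLU and name OD1 OD2 OE1 OE2")
base = u.select_atoms("segid B and resname LYS ARG and name NZ NH1 NH2")

CUTOFF = 4.0  # Angstrom; an assumed, commonly used salt-bridge distance criterion

formed = []
for ts in u.trajectory:
    # minimum donor-acceptor distance in this frame
    d = np.linalg.norm(acid.positions[:, None, :] - base.positions[None, :, :], axis=-1)
    formed.append(d.min() < CUTOFF)

print(f"salt bridge intact in {100 * np.mean(formed):.1f}% of frames")
```

Running the same analysis on the isoflurane-bound and isoflurane-free trajectories would expose the kind of binding-associated salt-bridge breakage the study reports.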

9.
Despite the clinical ubiquity of anesthesia, the molecular basis of anesthetic action is poorly understood. Amongst the many molecular targets proposed to contribute to anesthetic effects, voltage-gated sodium channels (VGSCs) should also be considered relevant, as they have been shown to be sensitive to all general anesthetics tested thus far. However, anesthetic binding sites on VGSCs have not been identified. Moreover, the mechanism of inhibition is still largely unknown. The recently reported atomic structures of several members of the bacterial VGSC family offer the opportunity to shed light on the mechanism of action of anesthetics on these important ion channels. To this end, we have performed a molecular dynamics "flooding" simulation on a membrane-bound structural model of the archetypal bacterial VGSC, NaChBac, in a closed pore conformation. This computation allowed us to identify binding sites and access pathways for the commonly used volatile general anesthetic, isoflurane. Three sites have been characterized, with binding affinities in a physiologically relevant range. Interestingly, one of the most favorable sites is in the pore of the channel, suggesting that the binding sites of local and general anesthetics may overlap. Surprisingly, even though the activation gate of the channel is closed, and therefore the pore and the aqueous compartment at the intracellular side are disconnected, we observe binding of isoflurane in the central cavity. Several sampled association and dissociation events in the central cavity provide consistent support to the hypothesis that the "fenestrations" present in the membrane-embedded region of the channel act as the long-hypothesized hydrophobic drug access pathway.
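A flooding simulation surrounds the channel with many copies of the anesthetic and lets them find binding sites on their own; one common way to turn a site's occupancy into a rough affinity is via the ratio of bound to unbound probability at a known free-ligand concentration. The sketch below is illustrative only (the numbers are invented and this is not necessarily the paper's exact estimator); it assumes a simple two-state site in equilibrium with free ligand.

```python
import math

R = 1.987e-3   # gas constant, kcal / (mol K)
T = 310.0      # temperature, K

def site_affinity(p_bound, ligand_conc_molar):
    """Estimate Kd and binding free energy for one site from its occupancy.

    p_bound: fraction of simulation frames in which the site is occupied.
    Assumes a two-state site in equilibrium with free ligand: p/(1-p) = [L]/Kd.
    """
    kd = ligand_conc_molar * (1.0 - p_bound) / p_bound
    dg = R * T * math.log(kd)      # standard free energy relative to 1 M reference
    return kd, dg

# Hypothetical example: 80% occupancy at 10 mM free isoflurane
kd, dg = site_affinity(0.80, 0.010)
print(f"Kd ~ {kd * 1e3:.1f} mM, dG ~ {dg:.1f} kcal/mol")
```

With these invented inputs the site comes out in the low-millimolar range, i.e., the kind of "physiologically relevant" affinity the abstract refers to for volatile anesthetics.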

10.
Lymph nodes are meeting points for circulating immune cells. A network of reticular cells that ensheathe a mesh of collagen fibers crisscrosses the tissue in each lymph node. This reticular cell network distributes key molecules and provides a structure for immune cells to move around on. During infections, the network can suffer damage. A new study has now investigated the network's structure in detail, using methods from graph theory. The study showed that the network is remarkably robust to damage: it can still support immune responses even when half of the reticular cells are destroyed. This is a further important example of how network connectivity achieves tolerance to failure, a property shared with other important biological and nonbiological networks.

Lymph nodes are critical sites for immune cells to connect, exchange information, and initiate responses to foreign invaders. More than 90% of the cells in each lymph node—the T and B lymphocytes of the adaptive immune system—only reside there temporarily and are constantly moving around as they search for foreign substances (antigen). When there is no infection, T and B cells migrate within distinct regions. But lymph node architecture changes dramatically when antigen is found, and an immune response is mounted. New blood vessels grow and recruit vast numbers of lymphocytes from the blood circulation. Antigen-specific cells divide and mature into "effector" immune cells. The combination of these two processes—increased influx of cells from outside and proliferation within—can make a lymph node grow 10-fold within only a few days [1]. Accordingly, the structural backbone supporting lymph node function cannot be too rigid; otherwise, it would impede this rapid organ expansion. This structural backbone is provided by a network of fibroblastic reticular cells (FRCs) [2], which secrete a form of collagen (type III alpha 1) that produces reticular fibers—thin, threadlike structures with a diameter of less than 1 μm. Reticular fibers cross-link and form a spider web–like structure. The FRCs surrounding this structure form the reticular cell network (Fig 1), which was first observed in the 1930s [3]. Interestingly, experiments in which the FRCs were destroyed showed that the collagen fiber network remained intact [4].

Fig 1. Structure of the reticular cell network. The reticular cell network is formed by fibroblastic reticular cells (FRCs) whose cell membranes ensheathe a core of collagen fibers that acts as a conduit system for the distribution of small molecules [5]. In most other tissues, collagen fibers instead reside outside cell membranes, where they form the extracellular matrix. Inset: graph structure representing the FRCs in the depicted network as "nodes" (circles) and the direct connections between them as "edges" (lines). Shape and length of the fibers are not represented in the graph.

Reticular cell networks do not only support lymph node structure; they are also important players in the immune response. Small molecules from the tissue environment or from pathogens, such as viral protein fragments, can be distributed within the lymph node through the conduit system formed by the reticular fibers [5]. Some cytokines and chemokines that are vital for effective T cell migration—and the nitric oxide that inhibits T cell proliferation [6]—are even produced by the FRCs themselves.
Moreover, the network is thought of as a "road system" for lymphocyte migration [7]: in 2006, a seminal study found that lymphocytes roaming through lymph nodes were in contact with network fibers most of the time [8]. A few years before, it had become possible to observe lymphocyte migration in vivo by means of two-photon microscopy [9]. Movies from these experiments strikingly demonstrated that individual cells were taking very different paths, engaging in what appeared to be a "random walk." But these movies did not show the structures surrounding the migrating cells, which created an impression of motion in empty space. Appreciating the role of the reticular cell network in this pattern of motion [8] suggested that the complex cell trajectories reflect the architecture of the network along which the cells walk.

Given its important functions, it is surprising how little we know about the structure of the reticular cell network—compared to, for instance, our wealth of knowledge on neuron connectivity in the brain. In part this is because the reticular cells are hard to visualize. In vivo techniques like two-photon imaging do not provide sufficient resolution to reliably capture the fine-threaded mesh. Instead, thin tissue sections are stained with fluorescent antibodies that bind to the reticular fibers and are imaged with high-resolution confocal microscopy to reveal the network structure. One study [10] applied this method to determine basic parameters such as branch length and the size of gaps between fibers. Here, we discuss a recent study by Novkovic et al. [11] that took a different approach to investigating properties of the reticular cell network structure: they applied methods from graph theory.

Graph theory is a classic subject in mathematics that is often traced back to Leonhard Euler's stroll through 18th-century Königsberg, Prussia. Euler could not find a circular route that crossed each of the city's seven bridges exactly once, and wondered how he could prove that such a route does not exist. He realized that this problem could be phrased in terms of a simple diagram containing points (parts of the city) and lines between them (bridges). Further detail, such as the layout of the city's streets, was irrelevant. This was the birth of graph theory—the study of objects consisting of points (nodes) connected by lines (edges). Graph theory has diverse applications ranging from logistics to molecular biology. Since the beginning of this century, there has been a strong interest in applying graph theory to understand the structure of networks that occur in nature—including biological networks, such as neurons in the brain, and more recently, social networks like friendships on Facebook. Various mathematical models of network structures have been developed in an attempt to understand network properties that are relevant in different contexts, such as the speed at which information spreads or the amount of damage that a network can tolerate before breaking into disconnected parts. Three well-known network topologies are random, small-world, and scale-free networks (Box 1). Novkovic et al. modeled reticular cell networks as graphs by considering each FRC to be a node and the fiber connections between FRCs to be edges (Fig 1).
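To illustrate the kind of graph representation and analysis discussed here and in Box 1 below (a generic sketch, not Novkovic et al.'s reconstructed networks or code; the spatial random graph merely stands in for an FRC network), one can compute clustering and average path length and then watch how the largest connected component shrinks as randomly chosen nodes are removed, for example with the networkx library.

```python
import random
import networkx as nx

# Stand-in for a reconstructed FRC network: a spatial graph in which nearby
# "cells" are connected (real networks would be built from microscopy data).
G = nx.random_geometric_graph(500, radius=0.08, seed=1)

print("clustering coefficient:", round(nx.average_clustering(G), 3))
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("avg shortest path (largest component):",
      round(nx.average_shortest_path_length(giant), 2))

def largest_component_fraction(graph, fraction_removed, seed=0):
    """Randomly delete a fraction of nodes and report the size of the largest
    remaining connected component relative to the original node count."""
    rng = random.Random(seed)
    damaged = graph.copy()
    n_remove = int(fraction_removed * graph.number_of_nodes())
    damaged.remove_nodes_from(rng.sample(list(graph.nodes), n_remove))
    if damaged.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(damaged)) / graph.number_of_nodes()

for f in (0.0, 0.25, 0.5, 0.75):
    print(f"remove {int(f * 100):>2}% of nodes -> largest component "
          f"{largest_component_fraction(G, f):.2f} of original")
```

In a highly clustered spatial graph like this, the largest component typically degrades gradually rather than collapsing after the first removals, which is the qualitative behavior reported for the FRC network.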

Box 1. Graph Theory and the Robustness of Real Networks

After the publication of several landmark papers on network topology at the end of the previous century, the science of complex networks has grown explosively. One of these papers described "small-world" networks [16] and demonstrated that several natural networks have the amazing property that the average length of shortest paths between arbitrary nodes is unexpectedly small (making it a "small world"), even if most of the network nodes are clustered (that is, when neighbors of neighbors tend to be neighbors). The Barabási group published a series of papers describing "scale-free" networks [17,18] and demonstrated that scale-free networks are extremely robust to random deletions of nodes—the vast majority of the nodes can be deleted before the network falls apart [15]. In scale-free networks, the number of edges per node is distributed according to a power law, implying that most nodes have very few connections, and a few nodes are hubs having very many connections. Thus, the topology of complex networks can be scale-free, small-world, or neither, such as with random networks [19]. Novkovic et al. [11] describe the clustering of the edges of neighbors and the average shortest path length between arbitrary nodes, finding that reticular cell networks have small-world properties. Whether or not these networks have scale-free properties is not explicitly examined in the paper, but given that they are embedded in a three-dimensional space, that they "already" lose functionality when about 50% of the FRCs are ablated, and that the number of connected protrusions per FRC is not distributed according to a power law (see the data underlying their Figure 2g), reticular cell networks are not likely to be scale-free. Thus, the enhanced robustness of reticular cell networks is most likely due to their high local connectivity: networks lose functionality when they fall apart in disconnected components, and high clustering means that the graph is unlikely to split apart when a single node is removed, because the neighbors of that node tend to stay connected [14]. Additionally, since the reticular cell network has a spatial structure (unlike the internet or the Facebook social network), its high degree of clustering is probably due to the preferential attachment to nearby FRCs when the network develops, which agrees well with Novkovic et al.'s recent classification as a small-world network with lattice-like properties [11].

Some virus infections are known to damage reticular cell networks [12], either through infection of the FRCs or as a bystander effect of inflammation. It is therefore important to understand to what extent the network structure is able to survive partial destruction. Novkovic et al. first approached this question by performing computer simulations, in which they randomly removed FRC nodes from the networks they had reconstructed from microscopy images. They found that they had to remove at least half of the nodes to break the network apart into disconnected parts. To study the effect of damage on the reticular cell network in vivo rather than in silico, Novkovic et al. used an experimental technique called conditional cell ablation. In this technique, a gene encoding the diphtheria toxin receptor (DTR) is inserted after a specific promoter that leads it to be expressed in a particular cell type of interest. Administration of diphtheria toxin kills DTR-expressing cells, leaving other cells unaffected.
By expressing DTR under the control of the FRC-specific Ccl19 promoter, Novkovic et al. were able to selectively destroy the reticular cell network and then watch it grow back over time. Regrowth took about four weeks, and the resulting network properties were no different from a network formed naturally during development. Thus, it seems that the reticular cell network structure is imprinted and reemerges even after severe damage. This finding ties in nicely with previous data from the same group [13], showing that reticular cell networks form even in the absence of lymphotoxin-beta receptor, an otherwise key player in many aspects of lymphoid tissue development. Together, these data make a compelling case that network formation is a robust fundamental trait of FRCs.

Next, Novkovic et al. varied the dose of diphtheria toxin such that only a fraction of FRCs were destroyed, effectively removing a random subset of the network nodes. They measured in two ways how FRC loss affects the immune system: they tracked T cell migration using two-photon microscopy, and they determined the number of antiviral T cells produced by the mice after an infection. Remarkably, as predicted by their computer simulations, lymph nodes appeared capable of tolerating the loss of up to half of FRCs with little effect on either T cell migration or the numbers of activated antiviral T cells. Only when more than half of the FRCs were destroyed did T cell motion slow down significantly and the mice were no longer able to mount effective antiviral immune responses. Such a tolerance of damage is impressive—for comparison, consider what would happen if one were to close half of London's subway stations!

Robustness to damage is of interest for many different networks, from power grids to the internet [14]. In particular, the "scale-free" architecture that features rare, strongly connected "hub" nodes is highly robust to random damage [15]. Novkovic et al. did not address whether the reticular cell network is scale-free, but it is likely that it isn't (Box 1). Instead, the network's robustness probably arises from its high degree of clustering, which means that the neighbors of each node are likely to be themselves also neighbors. If a node is removed from a clustered network, then there is still likely a short detour available by going through two of the neighbors. Therefore, one would have to randomly remove a large fraction of the nodes before the network structure breaks down. High clustering in the network could be a consequence of the fact that multiple fibers extend from each FRC and establish connections to many FRCs in its vicinity. A question not yet addressed by Novkovic et al. is how robust reticular cell networks would be to nonrandom damage, such as a locally spreading viral infection. In fact, scale-free networks are drastically more vulnerable to targeted rather than random damage: the United States flight network can come to a grinding halt by closing a few hub airports [15]. Less is known about the robustness to nonrandom damage for other network architectures, and the findings by Novkovic et al. motivate future research in this direction.

Novkovic et al. did not yet explicitly identify all mechanisms that hamper T cell responses when more than half of the FRCs are depleted. But given the reticular cell network's many different functions, this could occur in several ways.
For instance, severe depletion might prevent the secretion of important molecules, halt the migration of T cells, prevent the anchoring of antigen-presenting dendritic cells (DCs) on the network, or cause structural disarray in the tissue. In addition to the effects on T cell migration, Novkovic et al. also showed that the number of DCs in fact decreased when FRCs were depleted, emphasizing that several mechanisms are likely at play. Disentangling these mechanisms will require substantial additional research efforts.

The current reticular cell network reconstruction by Novkovic et al. is based on thin tissue slices. It will be exciting to study the network architecture when it can be visualized in the whole organ. Some aspects of the network may then look different. For instance, those FRCs that are near the border of a slice will have their degree of connectivity underestimated, as not all of their neighbors in the network can be seen. Further refinements of the network analysis may also consider that reticular fibers are real physical objects situated in a three-dimensional space (unlike abstract connections such as friendships). Migrating T cells may travel quicker via a short, straight fiber than on a long, curved one, but the network graph does not make this distinction. More generally, it would be interesting to understand conceptually how reticular cell networks help foster immune cell migration. While at first it appears obvious that having a "road system" should make it easier for cells to roam lymph node tissue, three different theoretical studies have in fact all concluded that effective T cell migration should also be possible in the absence of a network [20–22]. A related question is whether T cells are constrained to move only on the network or are merely using it for loose guidance. For instance, could migrating T cells be in contact with two or more network fibers at once or with none at all? This would make the relationship between cell migration and network structure more complex than the graph structure alone suggests.

There is also some evidence that T cells can migrate according to what is called a Lévy walk [23]—a kind of random walk where frequent short steps are interspersed with a few very long steps, a search strategy that appears to occur frequently in nature (though this is debated [24]). While there is so far no evidence that T cells perform a Lévy walk when roaming the lymph node [25], this may be in part due to limitations of two-photon imaging, and one could speculate that reticular cell networks might in fact be constructed in a way that facilitates this or another efficient kind of "search strategy." Resolving this question will require substantial improvements in imaging technology, allowing individual T cells to be tracked across an entire lymph node.

No doubt further studies will address these and other questions, and provide further insights on how reticular cell networks benefit immune responses. Such advances may help us design better treatments against infections that damage the network. It may also help us understand how we can best administer vaccines or tumor immune therapy treatments in a way that ensures optimal delivery to immune cells in the lymph node. As is nicely illustrated by the study of Novkovic et al., mathematical methods may well play key roles in this quest.
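As a purely illustrative aside on the Lévy walk mentioned above (the exponent, minimum step length, and dimensionality below are arbitrary choices, not values fitted to T cell data), such a walk can be simulated by drawing step lengths from a power-law distribution and directions uniformly at random.

```python
import math
import random

def levy_walk(n_steps, mu=2.0, l_min=1.0, seed=0):
    """2-D Levy walk: step lengths follow P(l) ~ l**(-mu) for l >= l_min,
    directions are uniform. Illustrative only, not a model of T cell motion."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # inverse-transform sampling of a Pareto-distributed step length
        l = l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x, y = x + l * math.cos(theta), y + l * math.sin(theta)
        path.append((x, y))
    return path

path = levy_walk(1000)
print("net displacement after 1000 steps:", round(math.hypot(*path[-1]), 1))
```

The occasional very long steps are what distinguish such trajectories from ordinary diffusive random walks and make them efficient search strategies in some settings.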

11.
In the last 15 years, antiretroviral therapy (ART) has been the most globally impactful life-saving development of medical research. Antiretrovirals (ARVs) are used with great success for both the treatment and prevention of HIV infection. Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children. The widespread elimination of HIV will require the development of new, more potent prevention tools. Such efforts are imperative on a global scale. However, it must also be recognised that true containment of the epidemic requires the development and widespread implementation of a scientific advancement that has eluded us to date—a highly effective vaccine. Striving for such medical advances is what is required to achieve the end of AIDS.

In the United States, the widespread implementation of combination ARVs led to the virtual eradication of mother-to-child transmission of HIV from 1,650 cases in 1991 to 110 cases in 2011, and a turnaround in AIDS deaths from an almost 100% five-year mortality rate to a five-year survival rate of 91% in HIV-infected adults [1]. Currently, the estimated average lifespan of an HIV-infected adult in the developed world is well over 40 years post-diagnosis. Survival rates in the developing world, although lower, are improving: in sub-Saharan Africa, AIDS deaths fell by 39% between 2005 and 2013, and the biggest decline, 51%, was seen in South Africa [2].

Furthermore, the association between ART, viremia, and transmission has led to the concept of “test and treat,” with the hope of reducing community viral load by testing early and initiating treatment as soon as a diagnosis of HIV is made [3]. Indeed, selected regions of the world have begun to actualize the public health value of ARVs, from gains in life expectancy to impact on onward transmission, with a potential 1% decline in new infections for every 10% increase in treatment coverage [2]. In September 2015, WHO released new guidelines removing all limitations on eligibility for ART among people living with HIV and recommending pre-exposure prophylaxis (PrEP) to population groups at significant HIV risk, paving the way for a global onslaught on HIV [4].

Despite these remarkable advances, this epidemic grows relentlessly worldwide. Over 2.1 million new infections occur each year, two-thirds in women and 240,000 in children [2]. In heavily affected countries, HIV infection rates have at best stabilized: the annualized acquisition rates in persons in their first decade of sexual activity average 3%–5% yearly in southern Africa [5–7]. These figures are hardly compatible with the international health community’s stated goal of an “AIDS-free generation” [8,9]. In highly resourced settings, microepidemics of HIV still occur, particularly among gay, bisexual, and other men who have sex with men (MSM) [10]. HIV epidemics are expanding in two geographic regions in 2015—the Middle East/North Africa and Eastern Europe/Central Asia—largely due to challenges in implementing evidence-based HIV policies and programmes [2]. 
Even in the US, almost 50,000 new cases have been recorded annually over the past decade, two-thirds among MSM, a figure that has remained stable and shows no evidence of declining [1].

While treatment scale-up, medical male circumcision [11], and the implementation of strategies to prevent mother-to-child transmission [12] have received global traction, systemic or topical ARV-based biomedical advances to prevent sexual acquisition of HIV have, as yet, made limited impressions on a population basis, despite their reported efficacy. Factors such as their adherence requirements, cost, potential for drug resistance, and long-term feasibility have restricted the appetite for implementation, even though these approaches may reduce HIV incidence in select populations.

Already, several trials have shown that daily oral administration of the ARV tenofovir disoproxil fumarate (TDF), taken singly or in combination with emtricitabine, as PrEP by HIV-uninfected individuals, reduces HIV acquisition among serodiscordant couples (where one partner is HIV-positive and the other is HIV-negative) [13], MSM [14], at-risk men and women [15], and people who inject drugs [16,17] by between 44% and 75%. Long-acting injectable antiretroviral agents such as rilpivirine and cabotegravir, administered every two and three months, respectively, are also being developed for PrEP. All of these PrEP approaches depend on repeated HIV testing and adherence to drug regimens, which may challenge effectiveness in some populations and contexts.

The widespread elimination of HIV will require the development of new, more potent prevention tools. Because HIV acquisition occurs subclinically, the elimination of HIV on a population basis will require a highly effective vaccine. Alternatively, if vaccine development is delayed, supplementary strategies may include long-acting pre-exposure antiretroviral cocktails and/or the administration of neutralizing antibodies through long-lasting parenteral preparations or the development of a “genetic immunization” delivery system, as well as scaling up delivery of highly effective regimens to eliminate mother-to-child HIV transmission (Fig 1).

Fig 1. Medical interventions required to end the epidemic of HIV. Image credit: Glenda Gray.  相似文献

12.
13.
Céline Caillet and co-authors discuss a Collection on the use of portable devices for evaluating medicine quality and legitimacy.

Summary points
  • Portable devices able to detect substandard and falsified medicines are vital innovations for enhancing the inspection of medicines in pharmaceutical supply chains and for enabling timely action before such products reach patients. Such devices exist, but there has been little to no independent scientific evidence of their accuracy and cost-effectiveness to guide regulatory authorities in choosing appropriate devices for their settings.
  • We tested 12 portable devices in a laboratory setting, evaluating their diagnostic performance and the resources required to use each device.
  • We then assessed the utility and usability of the devices in medicine inspectors’ hands in a setting mimicking a real-life Lao pharmacy.
  • Finally, we assessed the health and economic benefits of using portable devices compared to not using them in a low- to middle-income setting.
  • Here, we discuss the conclusions and practical implications of the multiphase study presented in this Collection: we discuss the results, highlight the evidence gaps, and provide recommendations on the key aspects to consider in the implementation of portable devices, together with their main advantages and limitations.
Global concerns over the quality of medicines, especially in low- and middle-income countries (LMICs), have been exacerbated by the Coronavirus Disease 2019 (COVID-19) pandemic [1,2]. The World Health Organisation (WHO) has estimated that 10.5% of medicines in LMICs may be substandard or falsified (SF) [3]. “Prevention, detection, and response” to SF medical products are strategic priorities of WHO to contribute to effective and efficient regulatory systems [4]. Numerous portable medicine screening devices are available on the market; they hold great hope for detecting SF medicines in an efficient and timely manner and might therefore serve as key detection tools to inform prevention and response [5,6]. Screening devices have the potential to rapidly identify suspected SF medical products, allowing more objective selection of samples for reference assays and reducing the financial and technical burden. However, little is known regarding how well the existing devices fulfil their functions and how they could be deployed within risk-based postmarketing surveillance (rb-PMS) systems [5–7].

We conducted, during 2016 to 2018, a collaborative multiphase exploratory study comparing portable screening devices. This paper accompanies 4 papers in the PLOS Collection “A multiphase evaluation of portable screening devices to assess medicines quality for national Medicines Regulatory Authorities.” The first article introduced the multiphase study [8]. In brief, 12 devices (S1 Table) were first evaluated in a laboratory setting [9] to select the most field-suitable devices for further evaluation of their utility/usability by Lao medicines inspectors [10]. A cost-effectiveness analysis of their implementation for rb-PMS in Laos was also conducted [11]. The results of these 3 phases were discussed at a multistakeholder meeting in 2018 in Vientiane, Lao PDR (S1 Text). The advantages/disadvantages, cost-effectiveness, and optimal use of screening devices in medicine supply chains were discussed to develop policy recommendations for medicines regulatory authorities (MRAs) and other institutions that wish to implement screening technologies. A summary of the main results of the multiphase study is presented in S2 Table.

As far as we are aware, this is the first independent investigation comparing, from a public health perspective, the accuracy and practical use of a diverse set of portable medicine quality screening devices. The specific objective(s) for which the portable screening technologies are implemented, their advantages/limitations, costs and logistics, and the development of detailed standard operating procedures and training programmes are key points to be carefully addressed when considering the selection and deployment of screening technologies within specific rb-PMS systems (Fig 1).

Fig 1. Major proposed considerations for the selection and implementation of medicine quality screening devices. Each circle represents a key consideration when purchasing a screening device, grouped by themes (represented by heptagons). When the shapes overlap, the considerations are connected. For example, standard operating procedures are needed for the implementation of devices and should include measures for user safety. The circle diameters are illustrative.

Here, we utilise this research and related literature to discuss the evidence, gaps, and recommendations, complementary to those recently published by the US Pharmacopeial Convention [12]. 
These discussions can inform policy makers, non-governmental organisations, wholesalers/distributors, and hospital pharmacies considering the implementation of such screening devices. We discuss unanswered research questions that require attention to ensure that the promise these devices hold is realised.  相似文献   
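For readers unfamiliar with the laboratory-phase metrics, the snippet below shows the kind of diagnostic-performance calculation involved, using hypothetical counts (not data from the Collection): the sensitivity and specificity of one device against a reference assay, with Wilson 95% confidence intervals.

```python
# Hypothetical illustration (not data from the Collection): sensitivity and specificity
# of a screening device against a reference assay, with Wilson 95% confidence intervals.
import math

def wilson_ci(successes, total, z=1.96):
    # Wilson score interval for a binomial proportion.
    if total == 0:
        return (0.0, 0.0)
    p = successes / total
    denom = 1.0 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (centre - half, centre + half)

# Hypothetical 2x2 counts: device result vs. reference assay (SF = substandard/falsified).
true_pos, false_neg = 46, 4    # SF samples flagged / missed by the device
true_neg, false_pos = 90, 10   # genuine samples passed / wrongly flagged

sens = true_pos / (true_pos + false_neg)
spec = true_neg / (true_neg + false_pos)
sens_lo, sens_hi = wilson_ci(true_pos, true_pos + false_neg)
spec_lo, spec_hi = wilson_ci(true_neg, true_neg + false_pos)
print(f"sensitivity {sens:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")
```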

14.
Objective

Like other inhalational anesthetics, xenon seems to be associated with post-operative nausea and vomiting (PONV). We assessed the incidence of nausea following balanced xenon anesthesia compared to sevoflurane, and dexamethasone for its prophylaxis, in a randomized controlled trial with post-hoc explorative analysis.

Methods

A total of 220 subjects with elevated PONV risk (Apfel score ≥2) undergoing elective abdominal surgery were randomized to receive xenon or sevoflurane anesthesia and dexamethasone or placebo after written informed consent. 93 subjects in the xenon group and 94 subjects in the sevoflurane group completed the trial. General anesthesia was maintained with 60% xenon or 2.0% sevoflurane. Dexamethasone 4 mg or placebo was administered in the first hour. Subjects were analyzed for nausea and vomiting at predefined intervals during a 24-h post-anesthesia follow-up.

Results

Logistic regression, controlled for dexamethasone and the anesthesia/dexamethasone interaction, showed a significantly increased risk of developing nausea following xenon anesthesia (OR 2.30, 95% CI 1.02–5.19, p = 0.044). Early-onset nausea incidence was 46% after xenon and 35% after sevoflurane anesthesia (p = 0.138). After xenon, nausea occurred significantly earlier (p = 0.014), was more frequent, and was initially rated worse. Dexamethasone did not markedly reduce the occurrence of nausea in either group. Late-onset nausea showed no considerable difference between the groups. (An illustrative sketch of this type of interaction model appears after this abstract.)

Conclusion

In our study setting, xenon anesthesia was associated with an elevated risk of developing nausea in susceptible subjects. Dexamethasone 4 mg was not effective in preventing nausea in our study. The group size or dose might have been too small, and the change of statistical analysis parameters in the post-hoc evaluation might further limit our results. Further trials will be needed to address prophylaxis of xenon-induced nausea.

Trial Registration

EU Clinical Trials Register EudraCT 2008-004132-20; ClinicalTrials.gov NCT00793663  相似文献
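As a rough illustration of the primary analysis described above (a logistic regression of nausea on anesthetic group, dexamethasone, and their interaction), the sketch below fits such a model to simulated data; the variable names, effect sizes, and data are assumptions for demonstration only, not the trial dataset.

```python
# Illustrative sketch only: logistic regression of post-operative nausea on anesthetic
# group, dexamethasone, and their interaction. The data are simulated, NOT trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 187  # roughly the number of subjects who completed the trial (93 + 94)
df = pd.DataFrame({
    "xenon": rng.integers(0, 2, n),          # 1 = xenon, 0 = sevoflurane
    "dexamethasone": rng.integers(0, 2, n),  # 1 = dexamethasone 4 mg, 0 = placebo
})
# Simulate nausea with a higher baseline risk in the xenon arm (assumption for the demo).
logit = -0.6 + 0.8 * df["xenon"] - 0.1 * df["dexamethasone"] - 0.1 * df["xenon"] * df["dexamethasone"]
df["nausea"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("nausea ~ xenon * dexamethasone", data=df).fit(disp=False)
odds_ratios = np.exp(model.params).rename("OR")
conf_int = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, conf_int], axis=1))  # OR with 95% CI for each term
```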

15.
16.
17.
Various stakeholders in science have put research integrity high on their agenda. Among them, research funders are prominently placed to foster research integrity by requiring that the organizations and individual researchers they support make an explicit commitment to research integrity. Moreover, funders need to adopt appropriate research integrity practices themselves. To facilitate this, we recommend that funders develop and implement a Research Integrity Promotion Plan (RIPP). This Consensus View offers a range of examples of how funders are already promoting research integrity, distills 6 core topics that funders should cover in a RIPP, and provides guidelines on how to develop and implement a RIPP. We believe that the 6 core topics we put forward will guide funders towards strengthening research integrity policy in their organization and guide the researchers and research organizations they fund.

Research funders are prominently placed to foster research integrity by requiring that researchers make an explicit commitment to research integrity. This Consensus View suggests 6 core topics that funders should cover in a research integrity promotion plan and provides practical recommendations for how to implement one.

To improve research quality and validity, foster responsible research cultures, and maintain public trust in science, various stakeholders have put research integrity high on their agenda. Among them, research funders are increasingly acknowledging their pivotal role in contributing to a culture of research integrity. For example, the European Commission (EC) is mandating research organizations receiving funding from the €95 billion Horizon Europe program to have, at the institutional level, policies and processes in place for research integrity covering the promotion of good practice, prevention of misconduct and questionable practices, and procedures to deal with breaches of research integrity [1]. To meet these obligations, the EC requires beneficiaries to respect the principles of research integrity as set out in the European Code of Conduct for Research Integrity (ECoC) and suggests that research organizations develop and implement a Research Integrity Promotion Plan (RIPP) [2]. In this Consensus View, we have adopted the World Conference on Research Integrity’s approach to research integrity, by having “research integrity” refer to “the principles and standards that have the purpose to ensure validity and trustworthiness of research” [3]. More specifically, we mostly adhere to the principles outlined in the ECoC: reliability, honesty, respect, and accountability. While many definitions of research integrity exist [4,5], for example, those that distinguish between the integrity of a researcher, integrity of research, and integrity of the research record, the ECoC combines these approaches in a balanced way [1].

We believe that funders are prominently placed to foster a culture of research integrity by requiring that the organizations and individual researchers they support make an explicit commitment to research integrity. At the same time, funders need to adopt appropriate research integrity practices themselves. Of late, attention to research integrity among funders has gathered pace, as reflected in several initiatives around the globe that demonstrate how funders can support a culture of research integrity. For example, the US National Science Foundation (NSF) [6] requires applicants’ research organizations to provide training and oversight in the responsible conduct of research, designate individuals responsible for research integrity, and have an institutional certification to testify to its commitment. Also, in 2016, 3 Canadian federal funders joined forces to support research integrity in the Canadian Tri-Agency Framework: Responsible Conduct of Research–Harmony and Responsibility [7]. The framework was subsequently updated in 2021. This framework sets out responsible practices that research organizations and researchers should follow, including rigor, record keeping, accurate referencing, responsible authorship, and the management of conflicts of interest. It also acknowledges the responsibilities of the funders, including “helping to promote the responsible conduct of research and to assist individuals and institutions with the interpretation or implementation of this Responsible Conduct of Research (RCR) Framework”.

It is not only major funding organizations in highly developed research environments that are taking steps. Smaller funders are also acting to mandate compliance with research integrity standards. The constantly growing literature on the topic is another sign of development within this area [2,3]. 
In the USA, research integrity recently reached the political arena when, following a call from researchers [8], President Biden’s administration published a memorandum on restoring trust [9] that highlights the importance of integrity in research. The memorandum will be supported by the reintroduction of the Scientific Integrity Act. This act will prohibit research misconduct and the manipulation of research findings. It talks of a “culture of research integrity” and demands that funding agencies adopt and enforce research integrity policies, appoint a research integrity officer, and provide regular research integrity and ethics training. The US is not alone in its endeavors. Governments in other countries are equally gearing up to support the integrity and reproducibility of research [10]. However, so far, there is only limited evidence about the effectiveness of such initiatives, although it is generally accepted that they raise awareness among various stakeholders concerning research integrity challenges, strengthen the sense of responsibility of those stakeholders to address those challenges, and thereby ultimately contribute to fostering a culture of research integrity.

In a collective effort to foster research integrity, research organizations and funders have their own, complementary roles. The Standard Operating Procedures for Research Integrity (SOPs4RI) consortium has recommended that both research organizations and funders develop a RIPP. A RIPP outlines the key responsibilities of an organization concerning research integrity and details methods and procedures to foster it. For example, in the case of research organizations, a RIPP should facilitate and stimulate a healthy research environment, proper mentoring and supervision, research ethics structures, research integrity training, high-quality dissemination practices, research collaboration, effective data management, and open and fair procedures to deal with breaches of research integrity [11]. Funders have a different role. They can support, safeguard, and incentivise, or even mandate, responsible research practices from research organizations and researchers. Equally important, funders should make sure that their internal processes live up to the highest standards of research integrity. We recognize that funders are many and varied in their scale, portfolio, disciplinary focus, and the extent to which they have procedures and governance arrangements to support research integrity. For all funders, adopting a RIPP will structure and coordinate research integrity practices, giving clarity and transparency to applicant institutions and researchers.

In this Consensus View, we highlight examples of best practice from funders worldwide to foster a research integrity culture. With these examples in mind, we suggest guidelines to support funders in taking a leading role in fostering research integrity. In so doing, we acknowledge the local contexts in which funders operate, but we believe that all funders, large and small, in all parts of the world, can and should contribute to improving research validity and building and maintaining trust in science through incentivising and mandating a culture of research integrity. Our core argument is that developing a tailored RIPP will contribute to building an institutional culture of research integrity, both within funding organizations and among the research organizations and individual researchers they fund. 
Based on empirical work from the SOPs4RI project, we have identified 6 key research integrity topics: researchers’ compliance with research integrity standards; expectations for research organizations; selection of grant applications; declaration of interests; monitoring funded research; and dealing with internal integrity breaches (Fig 1). We recommend that these topics be included in a RIPP and provide guidelines on developing and implementing one.

Fig 1. Topics to be covered in a RIPP for funders. An overview of the 6 most important topics identified by SOPs4RI to be included in a research funding organization’s RIPP. RIPP, Research Integrity Promotion Plan; SOPs4RI, Standard Operating Procedures for Research Integrity.  相似文献

18.
General anesthetic photolabels have been instrumental in discovering and confirming protein binding partners and binding sites of these promiscuous ligands. We report the in vivo photoactivation of meta-azipropofol, a potent analog of propofol, in Xenopus laevis tadpoles. Covalent adduction of meta-azipropofol in vivo prolongs the primary pharmacologic effect of general anesthetics in a behavioral phenotype we termed “optoanesthesia.” Coupling this behavior with a tritiated probe, we performed unbiased, time-resolved gel proteomics to identify neuronal targets of meta-azipropofol in vivo. We have identified synaptic binding partners, such as synaptosomal-associated protein 25, as well as voltage-dependent anion channels as potential facilitators of the general anesthetic state. Pairing behavioral phenotypes elicited by the activation of efficacious photolabels in vivo with time-resolved proteomics provides a novel approach to investigate molecular mechanisms of general anesthetics.  相似文献   

19.
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included “Data Science;” “Standards and Interoperability;” “Open Science and Reproducibility;” “Translational Bioinformatics;” “Visualization;” and “Bioinformatics Open Source Project Updates”. In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled “Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community,” that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.  相似文献

20.

Objectives

The contribution of ultrasound-assisted thoracic paravertebral block to postoperative analgesia remains unclear. We compared the effect of a combination of ultrasound-assisted thoracic paravertebral block and propofol general anesthesia with opioid and sevoflurane general anesthesia on volatile anesthetic, propofol, and opioid consumption, and on postoperative pain, in patients having breast cancer surgery.

Methods

Patients undergoing breast cancer surgery were randomly assigned to ultrasound-assisted paravertebral block with propofol general anesthesia (PPA group, n = 121) or fentanyl with sevoflurane general anesthesia (GA group, n = 126). Volatile anesthetic, propofol and opioid consumption, and postoperative pain intensity were compared between the groups using noninferiority and superiority tests.

Results

Patients in the PPA group required less sevoflurane than those in the GA group (median [interquartile range] of 0 [0, 0] vs. 0.4 [0.3, 0.6] minimum alveolar concentration [MAC]-hours), had lower intraoperative fentanyl requirements (100 [50, 100] vs. 250 [200, 300] μg), reported less intense postoperative pain (median visual analog scale score 2 [1, 3.5] vs. 3 [2, 4.5]), but received more propofol (median 529 [424, 672] vs. 100 [100, 130] mg). Noninferiority was detected for all four outcomes; one-tailed superiority tests for each outcome were highly significant at P<0.001 in the expected directions (see the illustrative sketch after this abstract).

Conclusions

The combination of propofol anesthesia with ultrasound-assisted paravertebral block reduces intraoperative volatile anesthetic and opioid requirements, and results in less postoperative pain in patients undergoing breast cancer surgery.

Trial Registration

ClinicalTrials.gov NCT00418457  相似文献
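The analysis strategy reported above (a noninferiority test followed by a one-tailed superiority test for each outcome) can be sketched as follows; the simulated values, the noninferiority margin, and the Welch t-test formulation are illustrative assumptions, not the trial’s actual analysis plan.

```python
# Illustrative sketch (simulated data, assumed margin; not the trial's analysis code):
# noninferiority followed by one-tailed superiority for a "lower is better" outcome,
# e.g. intraoperative fentanyl consumption, comparing the PPA and GA groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
ppa = rng.normal(loc=100, scale=40, size=121)  # simulated fentanyl use (ug), paravertebral + propofol group
ga = rng.normal(loc=250, scale=60, size=126)   # simulated fentanyl use (ug), opioid + sevoflurane group
margin = 25.0                                  # assumed noninferiority margin (ug); not the trial's value

# Noninferiority: reject H0 that the PPA mean exceeds the GA mean by at least the margin,
# i.e. test mean(ppa) - mean(ga) < +margin by shifting the PPA sample down by the margin.
_, p_noninferiority = stats.ttest_ind(ppa - margin, ga, equal_var=False, alternative="less")

# One-tailed superiority: test mean(ppa) < mean(ga) directly.
_, p_superiority = stats.ttest_ind(ppa, ga, equal_var=False, alternative="less")

print(f"noninferiority p = {p_noninferiority:.3g}, superiority p = {p_superiority:.3g}")
```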
