Similar Articles
20 similar articles found.
2.
People can perceive misfortunes as caused by previous bad deeds (immanent justice reasoning) or resulting in ultimate compensation (ultimate justice reasoning). Across two studies, we investigated the relation between these types of justice reasoning and identified the processes (perceptions of deservingness) that underlie them for both others (Study 1) and the self (Study 2). Study 1 demonstrated that observers engaged in more ultimate (vs. immanent) justice reasoning for a “good” victim and greater immanent (vs. ultimate) justice reasoning for a “bad” victim. In Study 2, participants' construals of their bad breaks varied as a function of their self-worth, with greater ultimate (immanent) justice reasoning for participants with higher (lower) self-esteem. Across both studies, perceived deservingness of bad breaks or perceived deservingness of ultimate compensation mediated immanent and ultimate justice reasoning respectively.

3.
The Weibullian-log logistic (WeLL) inactivation model was modified to account for heat adaptation by introducing a logistic adaptation factor, which rendered its “rate parameter” a function of both temperature and heating rate. The resulting model is consistent with the observation that adaptation is primarily noticeable in slow heat processes in which the cells are exposed to sublethal temperatures for a sufficiently long time. Dynamic survival patterns generated with the proposed model were in general agreement with those of Escherichia coli and Listeria monocytogenes as reported in the literature. Although the modified model's rate equation has a cumbersome appearance, especially for thermal processes having a variable heating rate, it can be solved numerically with commercial mathematical software. The dynamic model has five survival/adaptation parameters whose determination will require a large experimental database. However, with assumed or estimated parameter values, the model can simulate survival patterns of adapting pathogens in cooked foods that can be used in risk assessment and the establishment of safe preparation conditions.

Combined with heat transfer data or models, microbial survival kinetics, especially of bacteria or spores, is extensively used to determine the safety of existing or planned industrial heat preservation processes such as canning. The same is true for milder heat processes such as milk and fruit pasteurization. However, survival models are also a valuable tool to assess the safety of prepared foods, especially those made of raw meats, poultry, and eggs, where surviving pathogens can be a public health issue.

The heat resistance of a bacterium, or any other microorganism, is almost always determined from a set of its isothermal survival curves, recorded at several lethal temperatures. The kinetic models, which define the heat resistance parameters, may vary, but the calculation procedure itself is usually the same. First, the experimental isothermal survival data are fitted with what is known as the “primary model.” Once fitted, the temperature dependence of this primary model's coefficients is described by what is known as the “secondary model.” When combined with a temperature profile expression, T(t), and incorporated into the inactivation rate equation, the result is a “tertiary model,” which enables its user to predict the organism's survival curve under any static or dynamic (i.e., nonisothermal) conditions.

The traditional log-linear (“first-order kinetic”) model is the best-known primary survival model, and it is still widely used in sterility calculations in the food, pharmaceutical, and other industries. Traditionally, it has been assumed that the D value calculated with this model has a log-linear temperature dependence or, alternatively, that the temperature effect on the exponential rate constant, k, the D value's reciprocal, follows the Arrhenius equation. However, accumulating experimental evidence in recent years indicates that bacterial heat inactivation only rarely follows first-order kinetics and that there is no reason that it should (3, 18, 29). Nonlinear survival curves can be described by a variety of mathematical models (6).
Perhaps the most frequently used in recent years is the Weibullian model, of which the traditional log-linear model is a special case—see below.

Regardless of the log-linearity issue, none of the above-mentioned models accounts for adaptation, the ability of certain bacterial cells to adjust their metabolism in response to stress in order to increase their survivability (2, 10, 26, 27, 28). A notable example is Escherichia coli. Its cells can produce “heat shock proteins,” which help them to survive mild heat treatments (1, 11). Other organisms, Salmonella enterica and Bacillus cereus among them, can also develop defensive mechanisms that help them to survive in an acidic environment (8, 9, 13). Whether adaptation allows the cells to avoid injury or to repair damage once it has occurred, or both, should not concern us here. (Injury and recovery, although related, are a separate issue, one which is amply discussed in the literature. Their quantitative aspects and mathematical modeling are discussed elsewhere [5].)

The cells' ability to augment their resistance is not unlimited, and it takes time for the cells to activate the protective system and synthesize its chemical elements (10, 12). Consequently, the effect of heat adaptation on an organism's survival pattern becomes measurable only at or slightly above what is known as the “sublethal” temperature range. Under dynamic conditions, therefore, adaptation can be detected only when the heating rate is sufficiently low to allow the cells to respond metabolically to the heat stress prior to their destruction.

Several investigators have reported and discussed the quantitative aspects of adaptation (25, 27, 28). When it occurs, adaptation is noticed as a gap between survival curves determined at low heating rates and those predicted by kinetic models whose parameters had been determined at high lethal temperatures (7, 8, 9, 27, 28). The question is how to modify the inactivation kinetic model so that it can properly account for adaptation at low heating rates while maintaining its predictive ability at high rates and clearly lethal temperatures. Stasiewicz et al. (25) have recently given a partial answer to this question. They started with the Weibullian inactivation model (see below) and assumed that its rate parameter's temperature dependence follows a modified version of the Arrhenius equation. Using this model and experimental data for Salmonella bacteria, they showed that a “pathway-dependent model” is more reliable than a “state-dependent model.”

The objectives of our work were to develop a variant of the Weibullian-log logistic (WeLL) inactivation model to account for dynamic adaptation and to demonstrate its applicability with reported adaptive survival patterns exhibited by Escherichia coli and Listeria monocytogenes, two organisms of food safety concern.
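Since the abstract only sketches the model, a minimal numerical illustration may help. The sketch below assumes a Peleg-type dynamic rate equation for Weibullian survival, a log-logistic temperature dependence of the rate parameter, a linear come-up temperature profile, and a purely hypothetical logistic adaptation factor; all parameter values and functional details are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters, for illustration only (not the paper's fitted values).
n = 1.5     # Weibullian shape parameter
k = 0.3     # steepness of the log-logistic temperature dependence (1/deg C)
Tc = 60.0   # temperature marking the onset of rapid inactivation (deg C)

def b_well(T):
    """Log-logistic 'rate parameter' b(T) of the WeLL model."""
    return np.log(1.0 + np.exp(k * (T - Tc)))

def adaptation(rate, half_rate=2.0, steep=3.0, floor=0.6):
    """Hypothetical logistic adaptation factor: slow heating (deg C/min) lets the
    cells adapt and lowers the effective rate parameter toward `floor`; fast
    heating leaves it essentially unchanged (factor close to 1)."""
    return floor + (1.0 - floor) / (1.0 + np.exp(-steep * (rate - half_rate)))

def temperature(t, rate, T0=25.0):
    """Linear come-up profile T(t) starting at T0."""
    return T0 + rate * t

def dlog10S_dt(t, y, rate):
    """Peleg-type dynamic rate equation for the Weibullian survival ratio,
    with the adaptation factor folded into the rate parameter."""
    log10S = min(y[0], -1e-12)   # keep the power term well defined near zero
    b = b_well(temperature(t, rate)) * adaptation(rate)
    if b <= 0.0:
        return [0.0]
    return [-b * n * (-log10S / b) ** ((n - 1.0) / n)]

# Compare a slow and a fast heating rate, both ending at 80 deg C.
for rate in (0.5, 5.0):   # deg C/min
    t_end = (80.0 - 25.0) / rate
    sol = solve_ivp(dlog10S_dt, (0.0, t_end), [-1e-9], args=(rate,), max_step=0.5)
    print(f"{rate} C/min: log10(S) on reaching 80 C = {sol.y[0][-1]:.2f}")
```

With these assumed values the adaptation factor only deviates appreciably from 1 at the slow heating rate, which mirrors the qualitative observation quoted above that adaptation matters mainly in slow processes.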

6.

Background

This study designed and applied accessible yet systematic methods to generate baseline information about the patterns and structure of Canada's neglected tropical disease (NTD) research network, a network that, until recently, was formed and functioned on the periphery of strategic Canadian research funding.

Methodology

Multiple methods were used to conduct this study, including: (1) a systematic bibliometric procedure to capture archival NTD publications and co-authorship data; (2) a country-level “core-periphery” network analysis to measure and map the structure of Canada's NTD co-authorship network including its size, density, cliques, and centralization; and (3) a statistical analysis to test the correlation between the position of countries in Canada's NTD network (“k-core measure”) and the quantity and quality of research produced.
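As a rough illustration of steps (2) and (3), the sketch below builds a toy country-level co-authorship graph, computes each country's k-core number with networkx, and correlates it with a hypothetical publication count; the countries, edges, and counts are invented for illustration and are not the study's data.

```python
import networkx as nx
from scipy.stats import spearmanr

# Toy country-level co-authorship network: an edge means at least one
# co-authored NTD publication (hypothetical data, not the study's network).
edges = [("Canada", "USA"), ("Canada", "UK"), ("Canada", "Brazil"),
         ("USA", "UK"), ("USA", "Brazil"), ("Canada", "Uganda"),
         ("Uganda", "Kenya"), ("Canada", "Kenya"), ("Canada", "Iran")]
G = nx.Graph(edges)

# "k-core measure": the largest k such that a country belongs to the k-core,
# a simple proxy for how close it sits to the dense core of the network.
k_core = nx.core_number(G)

# Hypothetical publication counts per country.
output = {"Canada": 1079, "USA": 400, "UK": 310, "Brazil": 120,
          "Uganda": 35, "Kenya": 30, "Iran": 25}

countries = sorted(k_core)
rho, p = spearmanr([k_core[c] for c in countries],
                   [output[c] for c in countries])
print(k_core)
print(f"Spearman correlation between core position and output: rho={rho:.2f}, p={p:.3f}")
```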

Principal Findings

Over the past sixty years (1950–2010), Canadian researchers have contributed to 1,079 NTD publications, specializing in Leishmania, African sleeping sickness, and leprosy. Of this work, 70% of all first authors and co-authors (n = 4,145) have been Canadian. Since the 1990s, however, a network of international co-authorship activity has been emerging, with representation of researchers from 62 different countries, largely from OECD countries (e.g., the United States and the United Kingdom) and some non-OECD countries (e.g., Brazil and Iran). Canada has a core-periphery NTD international research structure, with a densely connected group of OECD countries and some African nations, such as Uganda and Kenya. Sitting predominantly on the periphery of this research network is a cluster of 16 non-OECD nations that fall within the lowest GDP percentile of the network.

Conclusion/Significance

The publication specialties, composition, and position of NTD researchers within Canada's NTD country network provide evidence that, while Canadian researchers currently remain the overall gatekeepers of the NTD research they generate, there is opportunity to leverage existing research collaborations and help advance regions and NTD areas that are currently under-developed.

8.
The translationally-controlled tumor protein (TCTP) is a highly conserved, ubiquitously expressed, abundant protein that is broadly distributed among eukaryotes. Its biological function spans numerous cellular processes ranging from regulation of the cell cycle and microtubule stabilization to cell growth, transformation, and death processes. In this work, we propose a new function for TCTP as a “buffer protein” controlling cellular homeostasis. We demonstrate that binding of hemin to TCTP is mediated by a conserved His-containing motif (His76His77) followed by dimerization, an event that involves ligand-mediated conformational changes and that is necessary to trigger TCTP's cytokine-like activity. Mutation of both His residues to Ala prevents hemin from binding and abrogates oligomerization, suggesting that the ligand site localizes at the interface of the oligomer. Unlike heme, binding of the Ca2+ ligand to TCTP does not alter its monomeric state, although Ca2+ is able to destabilize an existing TCTP dimer created by hemin addition. In agreement with TCTP's proposed buffer function, ligand binding occurs at high concentration, allowing the “buffer” condition to be dissociated from TCTP's role as a component of signal transduction mechanisms.

9.

Background

Species Distribution Models (SDMs) aim to characterize a species' ecological niche and project it into geographic space. The result is a map of the species' potential distribution, which is helpful, for instance, for predicting the invasive capability of alien species. With regard to alien invasive species, several authors have recently observed a mismatch between the potential distributions of native and invasive ranges derived from SDMs, and an ecological niche shift during biological invasion has been suggested as an explanation. We studied the physiologically well-known Slider turtle from North America, which today is widely distributed over the globe, and address the issue of ecological niche shift versus the choice of ecological predictors used for model building, i.e., by deriving SDMs using multiple sets of climatic predictors.

Principal Findings

In one SDM, predictors were chosen with the aim of mirroring the physiological limits of the Slider turtle. It was compared to numerous other models based on various sets of ecological predictors, or on predictor sets aiming at comprehensiveness. The SDM focusing on the study species' physiological limits depicts the target species' worldwide potential distribution better than any of the other approaches.
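A schematic of the comparison being described, using simulated presence/absence data and a logistic-regression SDM from scikit-learn, is sketched below; the predictors, the data-generating assumptions, and the two predictor subsets are placeholders, and the example only shows the mechanics of fitting and comparing predictor sets, not the study's transferability result.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated data: 8 hypothetical bioclimatic predictors; presence is assumed to
# be driven by the first two (stand-ins for physiologically meaningful limits).
rng = np.random.default_rng(1)
n = 2000
climate = rng.normal(size=(n, 8))
logit = 2.0 * climate[:, 0] + 1.5 * climate[:, 1] + rng.normal(0, 0.5, n)
presence = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(climate, presence, random_state=0)

predictor_sets = {
    "physiology-driven": [0, 1],        # small set mirroring physiological limits
    "comprehensive": list(range(8)),    # "throw everything in" set
}

for name, cols in predictor_sets.items():
    sdm = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, sdm.predict_proba(X_te[:, cols])[:, 1])
    print(f"{name:18s} predictors: test AUC = {auc:.3f}")
```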

Conclusion

These results suggest that a natural history-driven understanding is crucial in developing statistical models of ecological niches (such as SDMs), while “comprehensive” or “standard” sets of ecological predictors may be of limited use.

10.
The earliest concept of a balance of nature in Western thought saw it as being provided by gods but requiring human aid or encouragement for its maintenance. With the rise of Greek natural philosophy, emphasis shifted to traits gods endowed species with at the outset, rather than human actions, as key to maintaining the balance. The dominance of a constantly intervening God in the Middle Ages lessened interest in the inherent features of nature that would contribute to balance, but the Reformation led to renewed focus on such features, particularly traits of species that would maintain all of them but permit none to dominate nature. Darwin conceived of nature in balance, and his emphasis on competition and frequent tales of felicitous species interactions supported the idea of a balance of nature. But Darwin radically changed its underlying basis, from God to natural selection. Wallace was perhaps the first to challenge the very notion of a balance of nature as an undefined entity whose accuracy could not be tested. His skepticism was taken up again in the 20th century, culminating in a widespread rejection of the idea of a balance of nature by academic ecologists, who focus rather on a dynamic, often chaotic nature buffeted by constant disturbances. The balance-of-nature metaphor, however, lives on in large segments of the public, representing a fragile aspect of nature and biodiversity that it is our duty to protect.The notion of a “balance of nature” stretches back to early Greeks, who believed gods maintained it with the aid of human prayers, sacrifices, and rituals [1]. As Greek philosophers developed the idea of natural laws, human assistance in maintaining the balance did not disappear but was de-emphasized. Herodotus, for instance, the earliest known scholar to seek biological evidence for a balance of nature, asked how the different animal species each maintained their numbers, even though some species ate other species. Amassing facts and factoids, he saw divinely created predators'' reproductive rates lower than those of prey, buttressing the idea of a providentially determined balance with a tale of a mutualism between Nile crocodiles beset with leeches and a plover species that feeds on them [1]. Two myths in Plato''s Dialogues supported the idea of a balance of nature: the Timaeus myth, in which different elements of the universe, including living entities, are parts of a highly integrated “superorganism,” and the Protagoras myth, in which gods created each animal species with characteristics that would allow it to thrive and, having run out of biological traits, had to give man fire and superior intelligence [1]. Among Romans, Cicero followed Herodotus and Plato in advancing a balance of nature generated by different reproductive rates and traits among species, as well as interactions among species [1].The Middle Ages saw less interest in such pre-set devices as differential reproductive rates to keep nature in balance, perhaps because people believed in a God who would maintain the balance by frequent direct intervention [1]. The Reformation, however, fostered further development of the concept of a providential balance of nature set in motion at creation. 
Thomas Browne [2] added differential mortality rates to factors maintaining the balance, and Matthew Hale [3] proposed that lower rates of mortality for humans than for other animals maintain human dominance within a balanced nature and added vicissitudes of heat from the sun to the factors keeping any one species from getting out of hand.The discovery of fossils that could not be ascribed to known living species severely challenged the idea of a God-given balance of nature, as they contradicted the idea of species divinely created with the necessary features for survival [4]. John Ray [5] suggested that the living representatives of such fossils would be found in unexplored parts of the earth, a solution that was viable until the great scientific explorations of the late 18th and early 19th centuries [4]. Ray also argued that what would now be termed different Grinnellian ecological niches demonstrated God''s provision of each species with a space of its own in nature.According to Egerton [1], the earliest use of the term “balance” to refer specifically to ecology was probably by Ray''s disciple, William Derham [6], who asserted in 1714 that:
“The Balance of the Animal World is, throughout all Ages, kept even, and by a curious Harmony and just Proportion between the increase of all Animals, and the length of their Lives, the World is through all Ages well, but not over-stored.”
Derham recognized that human populations seemed to be endlessly increasing but saw this fact as a provision by God for future disasters. This explanation contrasts with that of Linnaeus [7], who saw human and other populations endlessly increasing but believed the size of the earth was also increasing to accommodate them. Derham grappled with the issue of theodicy but failed to reconcile plagues of noxious animals with the balance of nature, seeing them rather as “Rods and Scourges to chastise us, as means to excite our Wisdom, Care, and Industry” [1].Derham''s contemporary Richard Bradley [8],[9] focused more on biological facts and less on Providence in sketching a more comprehensive account of an ecological balance of nature, taking account of the rapidly expanding knowledge of biodiversity, noting that each plant had its phytophagous insects, each insect its parasitic wasps or flies and predatory birds, concluding that “all Bodies have some Dependence upon one another; and that every distinct Part of Nature''s Works is necessary for the Support of the rest; and that if any one was wanting, all the rest must consequently be out of Order.” Thus, he saw the balance as fragile rather than robust, in spite of a constantly intervening God. Linnaeus [10] similarly marshaled observations of species interactions to explain why no species increases to crowd out all others, adding competition to the predation, parasitism, and herbivory adduced by Bradley and also emphasizing the different roles (we might now say “niches”) of different species as allowing them all to coexist in a sort of superorganismic, balanced whole.Unlike Derham, Georges-Louis Leclerc, Comte de Buffon [11] managed to reconcile animal plagues with a balanced nature. He perceived the balance of nature as dynamic, with all species fluctuating between relative rarity and abundance, so that whenever a species became overabundant, weather, predation, and competition for food would bring it back into balance. Buffon''s successor as director of the Jardin des Plantes in Paris, Jacques-Henri Bernardin de Saint-Pierre [12], was probably the first to associate ecological damage caused by biological invasions with a disruption of the balance of nature. Observing damage to introduced trees from insects accidentally introduced with them, he argued that failure to introduce the birds that would eat the insects led to the damage. William Paley [13], perhaps the inspiration for today''s advocates of “intelligent design,” analogized nature to a watch. One would assume a smoothly running watch was designed with purpose, and so too nature was designed by God with balance and a purpose.In the 19th century, evolution burst on the scene, greatly influencing and ultimately modifying conceptions of a balance of nature. Fossils that seemed unrelated to any living species, as noted above, conflicted with the balance of nature, because they implied extinction, a manifestly unbalanced event that furthermore could be seen to imply that God had made a mistake. Whereas Ray had been able to argue that living exemplars of fossil species would be found in unexplored parts of the earth, by the 19th century, this explanation could be rejected. Jean-Baptiste Lamarck [14] resolved the conflict in a different way, arguing that species continually change, so the balance remains the same. The fossils thus represent ancestors of living species, not extinct lineages. 
Robert Chambers [15], another early evolutionist, similarly saw fossils not as a paradox in a balanced nature but as a consequence of the fact that, as the physical environment changed, species either evolved or went extinct.Alfred Russel Wallace was perhaps the first to question the very existence of a balance of nature, in a remarkable notebook entry, ca. 1855:
“Some species exclude all others in particular tracts. Where is the balance? When the locust devastates vast regions and causes the death of animals and man, what is the meaning of saying the balance is preserved… To human apprehension there is no balance but a struggle in which one often exterminates another” [16].
In modern parlance, Wallace appears almost to be asking how “balance” could be defined in such a way that a balance of nature could be a testable hypothesis.Darwin''s theory of evolution by natural selection certainly explained the existence of fossils, and his emphasis on inevitable competition both between and within species downplayed the role of niche specialization propounded by Plato, Cicero, Linnaeus, Derham, and others [1]. Darwin nevertheless saw the ecological roles of the diversity of species as parts of an almost superorganismic nature, and his main contribution to the idea of a balance of nature was his constant emphasis on competition and other mortality factors that kept all species'' populations in check [1]. His many metaphors and examples of the interactions among species, such as the tangled bank and the spinsters-cats-mice-bumblebees-clover stories in The Origin of Species [17], contributed to a sense of a highly balanced nature, but one driven by natural selection constantly changing species, rather than by God either intervening or creating species with traits that ensure their continued existence. Unlike Wallace, Darwin did not raise the issue of whether nature was actually balanced and how we would know if it was not.As ecology developed in the late 19th and early 20th centuries, it was inevitable that Wallace''s question—how to define “balance”—would be raised again and that increasingly wide and quantitative study, especially at the population level, would be brought to bear on the matter. The work of the early dominant plant ecologist Frederic Clements and his followers, with Clements'' notion of superorganismic communities [18], provided at least tacit support for the idea of a balance of nature, but his contemporary Charles Elton [19], a founder of the field of animal ecology and a leading student of animal population cycles, forcefully reprised Wallace''s concern:
“‘The balance of nature’ does not exist, and perhaps never has existed. The numbers of wild animals are constantly varying to a greater or lesser extent, and the variations are usually irregular in period and always irregular in amplitude. Each variation in the numbers of one species causes direct and indirect repercussions on the numbers of the others, and since many of the latter are themselves independently varying in numbers, the resultant confusion is remarkable.”
Despite Elton''s explicit skepticism, his depiction of energy flow through food chains and food webs was incorporated as a superorganismic analog to the physiology of individuals (e.g., [20]). Henry Gleason, another critic of the superorganism concept, who depicted populations distributed independently, rather than in highly organized communities, was ignored at this time [21].However, beginning with three papers in Ecological Monographs in 1947, the superorganism concept was increasingly questioned and, within 25 years, Gleason was vindicated and his views largely accepted by ecologists [22]. During this same period, extensive work by population biologists again took up Elton''s focus on population trajectories and contributed greatly to a growing recognition of the dynamism of nature and the fact that much of this dynamism did not seem regular or balanced [21]. The idea of a balanced nature did not immediately disappear among ecologists. For instance, a noteworthy book by C. B. Williams [23], Patterns in the Balance of Nature, described the distribution of abundances within communities or regions as evincing statistical regularity that might be construed as a type of “balance of nature,” at least if changes in individual populations do not change certain statistical features (a hypothesis that Williams considered untested at the time). But the predominant view by ecologists of the 1960s saw the whole notion of a balance as, at best, irrelevant and, at worst, a distraction. Ehrlich and Birch [24], for example, ridiculed the idea:
“The existence of supposed balance of nature is usually argued somewhat as follows. Species X has been in existence for thousands or perhaps millions of generations, and yet its numbers have never increased to infinity or decreased to zero. The same is true of the millions of other species still extant. During the next 100 years, the numbers of all these species will fluctuate; yet none will increase indefinitely, and only a few will become extinct… Such ‘observations’ are made the basis for the statement that population size is ‘controlled’ or ‘regulated,’ and that drastic changes in size are the results of upsetting the ‘balance of nature.’”
Another line of ecological research that became popular at the end of the 20th century was to equate “balance of nature” with some sort of equilibrium of numbers, usually of population sizes [25], but sometimes of species richness. The problem remained that, with numbers that vary for whatever reason, it is still arbitrary just how much temporal variation can be accommodated within a process or phenomenon for it still to be termed equilibrial [26]. Often the decision on whether to perceive an ecological process as equilibrial seems to be based on whether there is some sort of homeostatic regulation of the numbers, such as density-dependence, which A. J. Nicholson [27] suggested as an argument against Elton''s skepticism of the existence of a balance. The classic 1949 ecology text by Allee et al. [28] explicitly equated balance with equilibrium and cited various mechanisms, such as density-dependence, in support of its universality in nature [25]. Later similar sorts of mathematical arguments equated the mathematical stability of models representing nature with a balance of nature [29], although the increasing recognition of stochastic aspects and chaotic mathematics of population fluctuations made it more difficult to perceive a balanced nature in population trajectories [21].For academic ecologists, the notion of a balance of nature has become passé, and the term is widely recognized as a panchreston [30]—a term that means so many different things to different people that it is useless as a theoretical framework or explanatory device. Much recent research has been devoted to emphasizing the dynamic aspects of nature and prominence of natural or anthropogenic disturbances, particularly as evidenced by vicissitudes of population sizes, and advances the idea that there is no such thing as a long-term equilibrium (e.g., [31],[32]). Some authors explicitly relate this research to a rejection of the concept of a balance of nature (e.g., [33][35]), Pickett et al. [33] going so far as to say it must be replaced by a different metaphor, the “flux of nature.”The issue is confounded by the fact that the perception of balance can be sought at different levels (populations, communities, ecosystems) and spatial scales. Much of the earlier discussion of a balance was at the population and community levels—Browne, Hale, Bradley, Linnaeus, Buffon, Bernardin de Saint-Pierre, and Darwin saw balance in the limited fluctuations of populations and the interactions of populations as one force imposing the limits. The proponents of density-dependent population regulation fall in this category as well [36],[37]. As a balance is sought at the community and ecosystem levels, the sorts of evidence brought to bear on the matter become more complicated and abstract [37],[38]. It is increasingly difficult to imagine what sorts of empirical or observational data could test the notion of a balance. For instance, Williams''s balance of nature—evidenced by a particular statistical distribution of population sizes—would not be perceived as balanced by many observers in light of the fact that entire populations can crash, explode, or even go extinct within the constraint of a statistical distribution of a given shape. Early claims of a balance at the highest level, such as the various superorganisms (Plato''s Timaeus myth, Paley''s watch metaphor, Clements''s superorganismic plant community) can hardly be seen as anything other than metaphors rather than testable hypotheses and have fallen from favor. 
The most expansive conception of a balance of nature—the Gaia hypothesis [39]—has been almost universally rejected by scientists [40]. The advent and growing acceptance of the metapopulation concept of nature [41] also complicates the search for balance in bounded population fluctuations. Spatially limited individual populations can arise, fluctuate wildly, and even go extinct, while suitable dynamics maintain the widespread metapopulation as a whole.Yet, the idea of a balance of nature lives on in the popular imagination, especially among conservationists and environmentalists. However, the usual use of the metaphor in an environmental context suggests that the balance, whether given by God or produced by evolution, is a fragile balance, one that needs human actions for its maintenance. Through the 18th century, the balance of nature was probably primarily a comforting construct—it would protect us; it represented some sort of benign governance in the face of occasional awful events. When Darwin replaced God as the determinant of the balance with natural selection, the comfort of a balance of nature was not so overarching, if there was any comfort at all. Today, ecologists do not even recognize a balance, and those members of the public who do, see it as something we must protect if we are ever to reap benefits from it in the future (e.g., wetlands that might help ameliorate flooding from storms and sea-level rise). This shift is clear in the writings of Bill McKibben [42],[43], who talks frequently about balance, but about balance with nature, not balance of nature, and how humankind is headed towards a catastrophic future if it does not act promptly and radically to rebalance society with nature.  相似文献   

11.

Background

Electroencephalographic (EEG) microstate analysis is a method of identifying quasi-stable functional brain states (“microstates”) that are altered in a number of neuropsychiatric disorders, suggesting their potential use as biomarkers of neurophysiological health and disease. However, use of EEG microstates as neurophysiological biomarkers requires assessment of the test-retest reliability of microstate analysis.

Methods

We analyzed resting-state, eyes-closed, 30-channel EEG from 10 healthy subjects over 3 sessions spaced approximately 48 hours apart. We identified four microstate classes and calculated the average duration, frequency, and coverage fraction of these microstates. Using Cronbach's α and the standard error of measurement (SEM) as indicators of reliability, we examined: (1) the test-retest reliability of microstate features using a variety of different approaches; (2) the consistency between TAAHC and k-means clustering algorithms; and (3) whether microstate analysis can be reliably conducted with 19 and 8 electrodes.
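For concreteness, here is a minimal sketch of the two reliability indicators applied to a single microstate feature arranged as a subjects x sessions matrix; the data are simulated and the numbers are placeholders, not the study's recordings.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a subjects x sessions matrix of one microstate
    feature (e.g., mean duration of one microstate class)."""
    X = np.asarray(X, dtype=float)
    n_subjects, k = X.shape
    item_vars = X.var(axis=0, ddof=1)        # variance of each session
    total_var = X.sum(axis=1).var(ddof=1)    # variance of subject totals
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

def standard_error_of_measurement(X, alpha):
    """SEM: the feature's standard deviation scaled by sqrt(1 - reliability)."""
    return np.asarray(X).std(ddof=1) * np.sqrt(1.0 - alpha)

# Simulated feature: 10 subjects x 3 sessions (e.g., microstate duration in ms),
# with stable between-subject differences plus session-to-session noise.
rng = np.random.default_rng(0)
subject_means = rng.normal(85.0, 10.0, size=(10, 1))
X = subject_means + rng.normal(0.0, 3.0, size=(10, 3))

alpha = cronbach_alpha(X)
sem = standard_error_of_measurement(X, alpha)
print(f"Cronbach's alpha = {alpha:.2f}, SEM = {sem:.2f} ms")
```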

Results

The approach of identifying a single set of “global” microstate maps showed the highest reliability (mean Cronbach's α > 0.8, SEM ≈ 10% of mean values) compared to microstates derived separately for each session or each recording. There was notably low reliability in features calculated from maps extracted individually for each recording, suggesting that the analysis is most reliable when maps are held constant. Features were highly consistent across clustering methods (Cronbach's α > 0.9). All features had high test-retest reliability with 19 and 8 electrodes.

Conclusions

High test-retest reliability and cross-method consistency of microstate features suggest their potential as biomarkers for assessment of the brain's neurophysiological health.

12.

Correction to: EMBO Reports (2019) 20: e47074. DOI 10.15252/embr.201847074 | Published online 6 May 2019

The authors noticed that the control and disease labels had been inverted in their data analysis, resulting in publication of incorrect data in Figure 1C. The corrected figure is displayed in the article (both the corrected and original versions of Figure 1C are shown there). This change affects the conclusions as detailed below. The authors apologize for this error and any confusion it may have caused.

In the legend of Figure 1C, change from: “Differential gene expression analysis of pediatric ileal CD patient samples (n = 180) shows increased (> 4‐fold) IMP1 expression as compared to non‐inflammatory bowel disease (IBD) pediatric samples (n = 43)” to: “Differential gene expression analysis of pediatric ileal CD patient samples (n = 180) shows decreased (> 4‐fold) IMP1 expression as compared to non‐inflammatory bowel disease (IBD) pediatric samples (n = 43)”.

In the abstract, change from: “Here, we report increased IMP1 expression in patients with Crohn's disease and ulcerative colitis” to: “Here, we report increased IMP1 expression in adult patients with Crohn's disease and ulcerative colitis”.

In the results, change from: “Consistent with these findings, analysis of published the Pediatric RISK Stratification Study (RISK) cohort of RNA‐sequencing data 38 from pediatric patients with Crohn's disease (CD) patients revealed that IMP1 is upregulated significantly compared to control patients and that this effect is specific to IMP1 (i.e., other distinct isoforms, IMP2 and IMP3, are not changed; Fig 1C)” to: “Contrary to our findings in colon tissue from adults, analysis of published RNA‐sequencing data from the Pediatric RISK Stratification Study (RISK) cohort of ileal tissue from children with Crohn's disease (CD) 38 revealed that IMP1 is downregulated significantly compared to control patients in the RISK cohort and that this effect is specific to IMP1 (i.e., other distinct isoforms, IMP2 and IMP3, are not changed; Fig 1C)”.

In the discussion, change from: “Indeed, we report that IMP1 is upregulated in patients with Crohn's disease and ulcerative colitis and that mice with Imp1 loss exhibit enhanced repair following DSS‐mediated damage” to: “Indeed, we report that IMP1 is upregulated in adult patients with Crohn's disease and ulcerative colitis and that mice with Imp1 loss exhibit enhanced repair following DSS‐mediated damage”.

14.

Background

Access to “safe” water and “adequate” sanitation is emphasized as an important measure for schistosomiasis control. Indeed, the schistosomes' lifecycles suggest that their transmission may be reduced through safe water and adequate sanitation. However, the evidence has not previously been compiled in a systematic review.

Methodology

We carried out a systematic review and meta-analysis of studies reporting schistosome infection rates in people who do or do not have access to safe water and adequate sanitation. PubMed, Web of Science, Embase, and the Cochrane Library were searched from inception to 31 December 2013, without restrictions on year of publication or language. Studies' titles and abstracts were screened by two independent assessors. Papers deemed of interest were read in full and appropriate studies included in the meta-analysis. Publication bias was assessed through the visual inspection of funnel plots and through Egger's test. Heterogeneity of datasets within the meta-analysis was quantified using Higgins' I².
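A compact sketch of the pooling and bias diagnostics named here (inverse-variance random-effects pooling of odds ratios with a DerSimonian-Laird between-study variance, Higgins' I², and Egger's regression test) is shown below; the per-study odds ratios are invented for illustration, and the original analysis may have used different software and estimator settings.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study odds ratios with 95% CIs (OR, lower, upper); these are
# illustrative numbers only, not the studies pooled in the review.
or_ci = [(0.45, 0.30, 0.67), (0.62, 0.44, 0.87), (0.51, 0.35, 0.74),
         (0.70, 0.48, 1.02), (0.39, 0.22, 0.69)]

y = np.array([np.log(o) for o, lo, hi in or_ci])                 # log odds ratios
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for _, lo, hi in or_ci])
w = 1.0 / se**2                                                  # fixed-effect weights

# Heterogeneity: Cochran's Q, DerSimonian-Laird tau^2, and Higgins' I^2.
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0

# Random-effects pooled odds ratio.
w_re = 1.0 / (se**2 + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
lo, hi = np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)
print(f"Pooled OR = {np.exp(y_re):.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {I2:.0f}%")

# Egger's test: regress the standardized effect on precision; a non-zero
# intercept suggests funnel-plot asymmetry (small-study/publication bias).
egger = sm.OLS(y / se, sm.add_constant(1.0 / se)).fit()
print(f"Egger's intercept = {egger.params[0]:.2f}, p = {egger.pvalues[0]:.3f}")
```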

Principal Findings

Safe water supplies were associated with significantly lower odds of schistosomiasis (odds ratio (OR) = 0.53, 95% confidence interval (CI): 0.47–0.61). Adequate sanitation was associated with lower odds of Schistosoma mansoni (OR = 0.59, 95% CI: 0.47–0.73) and Schistosoma haematobium (OR = 0.69, 95% CI: 0.57–0.84). Included studies were mainly cross-sectional and their quality was largely poor.

Conclusions/Significance

Our systematic review and meta-analysis suggests that increasing access to safe water and adequate sanitation are important measures to reduce the odds of schistosome infection. However, most of the studies were observational and quality was poor. Hence, there is a pressing need for adequately powered cluster randomized trials comparing schistosome infection risk with access to safe water and adequate sanitation, more studies which rigorously define water and sanitation, and new research on the relationships between water, sanitation, hygiene, human behavior, and schistosome transmission.

15.
Nurses working in the hospital setting increasingly have become overburdened by managing alarms that, in many cases, provide low information value regarding patient health. The current trend, aided by disposable, wearable technologies, is to promote patient monitoring that does not require entering a patient''s room. The development of telemetry alarms and middleware escalation devices adds to the continued growth of auditory, visual, and haptic alarms to the hospital environment but can fail to provide a more complete understanding of patient health. As we begin to innovate to both address alarm overload and improve patient management, perhaps using fundamentally different integration architectures, lessons from the aviation flight deck are worth considering. Commercial jet transport systems and their alarms have evolved slowly over many decades and have developed integration methods that account for operational context, provide multiple response protocol levels, and present a more integrated view of the airplane system state. We articulate three alarm system objectives: (1) supporting hazard management, (2) establishing context, and (3) supporting alarm prioritization. More generally, we present the case that alarm design in aviation can spur directions for innovation for telemetry monitoring systems in hospitals.

Healthcare, and the hospital setting in particular, has experienced rapid growth of auditory, visual, and haptic alarms. These alarms can be notoriously unreliable or can focus on narrowly defined changes to the patient''s state.1 Further, this alarm proliferation has led nursing staff to become increasingly overburdened and distressed by managing alarms.2 Current alarm system architectures do not effectively integrate meaningful data that support increased patient status awareness and management.3 In contrast, commercial jet transports, over many decades, have developed integration methods that account for operational context, provide multiple response protocol levels, and present a more integrated view of airplane state to support operational decision making. Similar methods for advanced control rooms in nuclear power generation have been reviewed by Wu and Li.4In healthcare, The Joint Commission (TJC) and hospital quality departments have generated guidance that further elevates the need to address the industry''s “alarm problem.” In 2014, TJC issued an accreditation requirement (National Patient Safety Goal 06.01.01) titled, “Reduce patient harm associated with clinical alarm systems.”5 This requirement continues to be included in the 2020 requirements for accreditation.From the authors'' perspective, this requirement is leading to solutions that will not effectively support performance of essential tasks and is moving away from the types of innovations that are being sought in aviation and other settings. For example, healthcare administrators advocate categorizing alarms into high-priority (“run”), medium-priority (“walk”), and low-priority (“shuffle”) alarms independent of unit context, hospital context, situational context, and historical patient context.6 In addition, each alarm category is assigned a minimum response time. When nurses do not meet response time targets, administrators may add staff (“telemetry monitor watchers”), increase the volume of alarms, escalate alarms to other staff to respond, increase the “startling” nature of alarms to better direct attention, and benchmark average response times by individual nurse identifiers. Although well intentioned, these approaches can sometimes add to the alarm overload problem by creating more alarms and involving more people in alarm response.The authors, who have investigated human performance in several operational settings, believe that a need exists to reflect more broadly on the role of alarms in understanding and managing a system (be it an aircraft or a set of patients in a hospital department). Most alarms in hospitals signal when a variable is outside a prespecified range that is determined from the patient population (e.g., high heart rate), when a change in cardiac rhythm occurs (e.g., ventricular fibrillation [V-fib]), or when a problem occurs with the alarm system (e.g., change battery). These alarms support shifts in attention when the event being alarmed requires an action by a nurse and when the relative priority of the response is clear in relation to competing demands.Certain alarms are useful for other purposes, such as aiding situation awareness about planned, routine tasks (e.g., an expected event of high heart rate has occurred, which indicates that a staff member is helping a patient to the bathroom). 
Increasingly, secondary alarm notification systems (SANSs), otherwise known as middleware escalation systems, are incorporating communications through alarms, such as patient call systems, staff emergency broadcasts, and demands for “code blue” teams to immediately go to a patient''s bedside.Thus, alarms are used to attract attention (i.e., to orient staff to an important change). However, from a cognitive engineering perspective, we believe alarms can also be used to support awareness, prioritization, and decision making. That is, the current siloed approach to alarm presentation in healthcare, which is driven by technology, impedes the ability to properly understand and appreciate the implications of alarms. Understanding the meaning and implications of alarms can best be achieved when they are integrated via a system interface that places the alarm in the broader context of system state. We hope that sharing our insights can spur both design and alarm management innovations for bedside telemetry monitoring devices and related middleware escalation systems and dashboards.In this article, we provide insights from human factors research, and from the integrated glass cockpit in particular, to prompt innovation with clinical alarm systems. To draw lessons from aviation and other domains, we conducted a series of meetings among three human factors engineers with expertise in alarm design in healthcare, aviation, nuclear power generation, and military command and control domains. In the process, we identified differences in the design, use, and philosophies for managing alarms in different domains; defined alarm systems; clarified common elements in the “alarm problem” across these domains; articulated objectives for an alarm system that supports a human operator in controlling a complex process (i.e., supervisory control); and identified levels of alarm system maturity. Based on these activities, we assert that:
  1. Clinical alarm systems fail to reduce unnecessary complexity compared with the integrated glass cockpit.
  2. Aviation and clinical alarm systems share core objectives.
  3. The challenges with aviation and clinical alarm systems are similar, including where alarm systems fall short of their objectives.
  4. We can demarcate levels in the process of alarm system evolution, largely based on alarm reliability, system integration, and how system state is described. The higher levels point the way for innovation in clinical alarm systems.

16.
The excitation lifetimes of photosynthetic pigments and the times needed for energy transfer between pigments in various algae were determined in vitro and in vivo. For this purpose, the time curves of fluorescence rise and decay were measured by means of Brody's instrument (10), and compared with theoretical curves obtained by the method of “convolution of the first kind.”
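To make the comparison method concrete, here is a minimal numerical sketch of a "convolution of the first kind": an assumed excitation (instrument-response) pulse is convolved with an exponential decay of a trial lifetime to produce the theoretical fluorescence rise-and-decay curve that would be compared against the measured one. The pulse shape, time base, and lifetime are assumptions for illustration, not values from the paper.

```python
import numpy as np

dt = 0.1                         # time step in ns (assumed)
t = np.arange(0.0, 50.0, dt)
tau = 1.5                        # trial excitation lifetime in ns (assumed)

pulse = np.exp(-((t - 5.0) ** 2) / (2.0 * 1.0 ** 2))   # assumed Gaussian flash
decay = np.exp(-t / tau)                                # single-exponential decay

# Convolution of the first kind: F(t) = integral of pulse(s) * decay(t - s) ds
fluorescence = np.convolve(pulse, decay)[: len(t)] * dt
fluorescence /= fluorescence.max()                      # normalize for comparison

print(f"peak of the theoretical curve at t = {t[np.argmax(fluorescence)]:.1f} ns")
```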

17.
In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort in order to increase believability. We suggest that human-robot interaction should be considered as a specific form of inter-specific interaction and that human–animal interaction can provide a useful biological model for designing social robots. Dogs can provide a promising biological model, since during the domestication process dogs were able to adapt to the human environment and to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic emotions and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour (“happiness” and “fear”), and studied whether people attribute the appropriate emotion to the robot and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context by examining whether, relying on the robot's greeting behaviour, human participants can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.

18.

Background

It is usually possible to identify the sex of a pre-pubertal child from their voice, despite the absence of sex differences in fundamental frequency at these ages. While it has been suggested that the overall spacing between formants (formant frequency spacing, ΔF) is a key component of the expression and perception of sex in children's voices, the effect of its continuous variation on sex and gender attribution has not yet been investigated.

Methodology/Principal findings

In the present study we manipulated the voice ΔF of eight-year-olds (two boys and two girls) along continua covering the observed variation of this parameter in pre-pubertal voices, and assessed the effect of this variation on adult ratings of speakers' sex and gender in two separate experiments. In the first experiment (sex identification), adults were asked to categorise the voice as either male or female. The resulting identification function exhibited a gradual slope from the male to the female voice category. In the second experiment (gender rating), adults rated the voices on a continuum from “masculine boy” to “feminine girl”, gradually decreasing their masculinity ratings as ΔF increased.
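One way to quantify such an identification function is to fit a logistic psychometric curve to the proportion of "female" responses as a function of ΔF. The sketch below does this with scipy, using invented ΔF values and response proportions; the actual stimulus values and data in the study differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(delta_f, boundary, slope):
    """Logistic identification function: probability of a 'female' response."""
    return 1.0 / (1.0 + np.exp(-slope * (delta_f - boundary)))

# Hypothetical group data: ΔF steps (Hz) along the manipulation continuum and
# the proportion of adults categorizing each voice as female.
delta_f = np.array([1100, 1150, 1200, 1250, 1300, 1350, 1400], dtype=float)
p_female = np.array([0.08, 0.15, 0.30, 0.55, 0.72, 0.88, 0.95])

(boundary, slope), _ = curve_fit(psychometric, delta_f, p_female, p0=[1250.0, 0.02])
print(f"male/female category boundary at ΔF ≈ {boundary:.0f} Hz, slope = {slope:.3f} per Hz")
```

The fitted boundary gives the ΔF at which listeners are equally likely to respond "male" or "female", and the slope quantifies how gradual the identification function is.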

Conclusions/Significance

These results indicate that the role of ΔF in voice gender perception, which has been reported in adult voices, extends to pre-pubertal children's voices: variation in ΔF affects not only the perceived sex but also the perceived masculinity or femininity of the speaker. We discuss the implications of these observations for the expression and perception of gender in children's voices, given the absence of anatomical dimorphism in overall vocal tract length before puberty.

19.
Elixirs of death     
Substandard and fake drugs are increasingly threatening lives in both the developed and developing world, but governments and industry are struggling to improve the situation.

When people take medicine, they assume that it will make them better. However, many patients cannot trust their drugs to be effective or even safe. Fake or substandard medicine is a major public health problem and it seems to be growing. More than 200 heart patients died in Pakistan in 2012 after taking a contaminated drug against hypertension [1]. In 2006, cough syrup that contained diethylene glycol as a cheap substitute for pharmaceutical-grade glycerin was distributed in Panama, causing the death of at least 219 people [2,3]. However, the problem is not restricted to developing countries. In 2012, more than 500 patients came down with fungal meningitis and several dozen died after receiving contaminated steroid injections from a compounding pharmacy in Massachusetts [4]. The same year, a fake version of the anti-cancer drug Avastin, which contained no active ingredient, was sold in the USA. The drug seemed to have entered the country through Turkey, Switzerland, Denmark and the UK [5].

The extent of the problem is not really known, as companies and governments do not always report incidents [6]. However, the information that is available is alarming enough, especially in developing countries. One study found that 20% of antihypertensive drugs collected from pharmacies in Rwanda were substandard [7]. Similarly, in a survey of anti-malaria drugs in Southeast Asia and sub-Saharan Africa, 20–42% were found to be either of poor quality or outright fake [8], whilst 56% of amoxicillin capsules sampled in different Arab countries did not meet the US Pharmacopeia requirements [9].

Developing countries are particularly susceptible to substandard and fake medicine. Regulatory authorities do not have the means or human resources to oversee drug manufacturing and distribution. A country plagued by civil war or famine might have more pressing problems—including shortages of medicine in the first place. The drug supply chain is confusingly complex, with medicines passing through many different hands before they reach the patient, which creates many possible entry points for illegitimate products. Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insurance. Instead, they buy cheap medicine from street vendors at the market or on the bus (Fig 1; [2,10,11]). “People do not have the money to buy medicine at a reasonable price. But quality comes at a price. A reasonable margin is required to pay for a quality control system,” explained Hans Hogerzeil, Professor of Global Health at Groningen University in the Netherlands. In some countries, falsifying medicine has developed into a major business. The low risk of being detected combined with relatively low penalties has turned falsifying medicine into the “perfect crime” [2].

[Figure 1: Women sell smuggled, counterfeit medicine on the Adjame market in Abidjan, Ivory Coast, in 2007. Fraudulent street medicine sales rose by 15–25% in the past two years in Ivory Coast. Issouf Sanogo/AFP Photo/Getty Images.]

There are two main categories of illegitimate drugs. ‘Substandard’ medicines might result from poor-quality ingredients, production errors and incorrect storage. ‘Falsified’ medicine is made with clear criminal intent.
It might be manufactured outside the regulatory system, perhaps in an illegitimate production shack that blends chalk with other ingredients and presses it into pills [10]. Whilst falsified medicines do not typically contain any active ingredients, substandard medicine might contain subtherapeutic amounts. This is particularly problematic when it comes to anti-infectious drugs, as it facilitates the emergence and spread of drug resistance [12]. A sad example is the emergence of artemisinin-resistant Plasmodium strains at the Thai–Cambodia border [8] and the Thai–Myanmar border [13], and increasing multidrug-resistant tuberculosis might also be attributed to substandard medication [11].Many people in developing countries live in rural areas with no local pharmacy, and anyway have little money and no health insuranceEven if a country effectively prosecutes falsified and substandard medicine within its borders, it is still vulnerable to fakes and low-quality drugs produced elsewhere where regulations are more lax. To address this problem, international initiatives are urgently required [10,14,15], but there is no internationally binding law to combat counterfeit and substandard medicine. Although drug companies, governments and NGOs are interested in good-quality medicines, the different parties seem to have difficulties coming to terms with how to proceed. What has held up progress is a conflation of health issues and economic interests: innovator companies and high-income countries have been accused of pushing for the enforcement of intellectual property regulations under the guise of protecting quality of medicine [14,16].The concern that intellectual property (IP) interests threaten public health dates back to the ‘Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement'' of the World Trade Organization (WTO), adopted in 1994, to establish global protection of intellectual property rights, including patents for pharmaceuticals. The TRIPS Agreement had devastating consequences during the acquired immunodeficiency syndrome epidemic, as it blocked patients in developing countries from access to affordable medicine. Although it includes flexibility, such as the possibility for governments to grant compulsory licenses to manufacture or import a generic version of a patented drug, it has not always been clear how these can be used by countries [14,16,17].In response to public concerns over the public health consequences of TRIPS, the Doha Declaration on the TRIPS Agreement and Public Health was adopted at the WTO''s Ministerial Conference in 2001. It reaffirmed the right of countries to use TRIPS flexibilities and confirmed the primacy of public health over the enforcement of IP rights. Although things have changed for the better, the Doha Declaration did not solve all the problems associated with IP protection and public health. For example, anti-counterfeit legislation, encouraged by multi-national pharmaceutical industries and the EU, threatened to impede the availability of generic medicines in East Africa [14,16,18]. In 2008–2009, European customs authorities seized shipments of legitimate generic medicines in transit from India to other developing countries because they infringed European IP laws [14,16,17]. “We''re left with decisions being taken based on patents and trademarks that should be taken based on health,” commented Roger Bate, a global health expert and resident scholar at the American Enterprise Institute in Washington, USA. 
“The health community is shooting themselves in the foot.”Conflating health care and IP issues are reflected in the unclear use of the term ‘counterfeit'' [2,14]. “Since the 1990s the World Health Organization (WHO) has used the term ‘counterfeit'' in the sense we now use ‘falsified'',” explained Hogerzeil. “The confusion started in 1995 with the TRIPS agreement, through which the term ‘counterfeit'' got the very narrow meaning of trademark infringement.” As a consequence, an Indian generic, for example, which is legal in some countries but not in others, could be labelled as ‘counterfeit''—and thus acquire the negative connotation of bad quality. “The counterfeit discussion was very much used as a way to block the market of generics and to put them in a bad light,” Hogerzeil concluded.The rifts between the stakeholders have become so deep during the course of these discussions that progress is difficult to achieve. “India is not at all interested in any international regulation. And, unfortunately, it wouldn''t make much sense to do anything without them,” Hogerzeil explained. Indeed, India is a core player: not only does it have a large generics industry, but also the country seems to be, together with China, the biggest source of fake medical products [19,20]. The fact that India is so reluctant to react is tragically ironic, as this stance hampers the growth of its own generic companies like Ranbaxy, Cipla or Piramal. “I certainly don''t believe that Indian generics would lose market share if there was stronger action on public health,” Bate said. Indeed, stricter regulations and control systems would be advantageous, because they would keep fakers at bay. The Indian generic industry is a common target for fakers, because their products are broadly distributed. “The most likely example of a counterfeit product I have come across in emerging markets is a counterfeit Indian generic,” Bate said. Such fakes can damage a company''s reputation and have a negative impact on its revenues when customers stop buying the product.The WHO has had a key role in attempting to draft international regulations that would contain the spread of falsified and substandard medicine. It took a lead in 2006 with the launch of the International Medical Products Anti-Counterfeiting Taskforce (IMPACT). But IMPACT was not a success. Concerns were raised over the influence of multi-national drug companies and the possibility that issues on quality of medicines were conflated with the attempts to enforce stronger IP measures [17]. The WHO distanced itself from IMPACT after 2010. For example, it no longer hosts IMPACT''s secretariat at its headquarters in Geneva [2].‘Substandard'' medicines might result from poor quality ingredients, production errors and incorrect storage. ‘Falsified'' medicine is made with clear criminal intentIn 2010, the WHO''s member states established a working group to further investigate how to proceed, which led to the establishment of a new “Member State mechanism on substandard/spurious/falsely labelled/falsified/counterfeit medical products” (http://www.who.int/medicines/services/counterfeit/en/index.html). However, according to a publication by Amir Attaran from the University of Ottawa, Canada, and international colleagues, the working group “still cannot agree how to define the various poor-quality medicines, much less settle on any concrete actions” [14]. The paper''s authors demand more action and propose a binding legal framework: a treaty. 
"Until we have stronger public health law, I don't think that we are going to resolve this problem," said Bate, who is one of the authors of the paper.

Similarly, the US Food and Drug Administration (FDA) commissioned the Institute of Medicine (IOM) to convene a consensus committee on the global public health implications of falsified and substandard pharmaceuticals [2]. Whilst others have called for a treaty, the IOM report calls on the World Health Assembly, the governing body of the WHO, to develop a code of practice, a kind of "voluntary soft law" that countries can sign to express their will to do better. "At the moment, there is not yet enough political interest in a treaty. A code of conduct may be more realistic," commented Hogerzeil, who is also on the IOM committee. Efforts to work towards a treaty should nonetheless be pursued, Bate insisted: "The IOM is right in that we are not ready to sign a treaty yet, but that does not mean you don't start negotiating one."

Whilst a treaty might take some time, there are several ideas from the IOM report and elsewhere that could already be put into action to deal with this global health threat [10,12,14,15,19]. Any attempt to safeguard medicines needs to address both falsified and substandard medicines, but the counter-measures are different [14]. Falsifying medicine is, by definition, a criminal act; to counteract fakers, action needs to be taken to ensure that the appropriate legal authorities deal with the criminals. Substandard medicine, on the other hand, arises when mistakes are made in genuine manufacturing companies. Such mistakes can be reduced by helping companies to do better and by improving the quality control carried out by drug regulatory authorities.

Manufacturing pharmaceuticals is a difficult and costly business that requires clean water, high-quality chemicals, expensive equipment, technical expertise and distribution networks. Large and multi-national companies benefit from economies of scale to cope with these demands, but smaller companies often struggle and compromise on quality [2,21]. "India has 20–40 big companies and perhaps nearly 20,000 small ones. To me, it seems impossible for them to produce at good quality, if they remain so small," Hogerzeil explained. "And only by being strict, can you force them to combine and to become bigger industries that can afford good-quality assurance systems." Clamping down on drug quality will therefore lead to a consolidation of the industry, which is an essential step. "If you look at Europe and the US, there were hundreds of drug companies—now there are dozens. And if you look at the situation in India and China today, there are thousands and that will have to come down to dozens as well," Bate explained.

In addition to consolidating the market by applying stricter rules, the IOM has also suggested measures to support companies that observe best practices [2]. For example, the IOM proposes that the International Finance Corporation and the Overseas Private Investment Corporation, which promote private-sector development to reduce poverty, should create separate investment vehicles for pharmaceutical manufacturers who want to upgrade to international standards.
Another suggestion is to harmonize the market registration of pharmaceutical products, which would ease the regulatory burden for generic producers in developing countries and improve the efficiency of regulatory agencies.

Once a medicine leaves the manufacturer, controlling the distribution system becomes another major challenge in combatting falsified and substandard medicine. Global drug supply chains have grown increasingly complicated: drugs cross borders, are sold back and forth between wholesalers and distributors, and are often repackaged. Still, there is a major difference between developing and developed countries. In the latter, relatively few companies dominate the market, whereas in poorer nations the distribution system is often fragmented and uncontrolled, with parallel schemes, too few pharmacies, even fewer pharmacists and many unlicensed medical vendors. Every transaction creates an opportunity for falsified or substandard medicine to enter the market [2,10,19]. More streamlined and transparent supply chains and stricter licensing requirements would be crucial to improving drug quality. "And we can start in the US," Hogerzeil commented.

Distribution could be improved at several levels, starting with the import of medicine. "There are states in the USA where the regulation for medicine importation is very lax. Anyone can import; private clinics can buy medicine from Lebanon or elsewhere and fly them in," Hogerzeil explained. The next level would be better control over the distribution system within the country. The IOM suggests that state boards should license wholesalers and distributors that meet the National Association of Boards of Pharmacy accreditation standards. "Everybody dealing with medicine has to be licensed," Hogerzeil said. "And there should be a paper trail of who buys what from whom. That way you close the entry points for illegal drugs and prevent that falsified medicines enter the legal supply chain." The last level would be a track-and-trace system to identify authentic drugs [2]: every single package of medicine should be identifiable through an individual marker, such as a 3D bar code, and once the package is sold it is ticked off in a central database so that the marker cannot be reused (a minimal sketch of such a single-use check is given below).

According to Hogerzeil, equivalent measures at these different levels should be established in every country. "I don't believe in double standards," he said. "Don't say to Uganda: 'you can't do that'. Rather, indicate to them what a cost-effective system in the West looks like and help them, and give them the time, to create something in that direction that is feasible in their situation."
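To make the track-and-trace idea concrete, the following is a minimal, illustrative Python sketch of the single-use identifier check described above. It is not any agency's actual system; the names (TrackAndTraceRegistry, PackStatus, the example codes) are hypothetical, and a real deployment would query a shared national or regional database at the point of sale rather than an in-memory dictionary.

# Hypothetical sketch of a track-and-trace check: each pack carries a unique
# marker registered at manufacture; dispensing it once "ticks it off" in the
# registry, and any later attempt to reuse the same marker is rejected.
from dataclasses import dataclass, field
from enum import Enum


class PackStatus(Enum):
    REGISTERED = "registered"   # produced and entered into the database
    DISPENSED = "dispensed"     # sold once; the marker may not be reused


@dataclass
class TrackAndTraceRegistry:
    # in-memory stand-in for the central database
    packs: dict = field(default_factory=dict)

    def register(self, pack_id: str) -> None:
        # a duplicate registration suggests a copied marker
        if pack_id in self.packs:
            raise ValueError(f"duplicate identifier {pack_id!r}: possible falsification")
        self.packs[pack_id] = PackStatus.REGISTERED

    def dispense(self, pack_id: str) -> bool:
        status = self.packs.get(pack_id)
        if status is PackStatus.REGISTERED:
            self.packs[pack_id] = PackStatus.DISPENSED
            return True          # genuine pack, first sale
        return False             # unknown marker or already sold: reject


if __name__ == "__main__":
    registry = TrackAndTraceRegistry()
    registry.register("3D-0001")
    print(registry.dispense("3D-0001"))  # True: first sale accepted
    print(registry.dispense("3D-0001"))  # False: reuse of the marker is blocked
    print(registry.dispense("3D-9999"))  # False: marker was never registered

The point of the single-use check is that repackaging and resale between intermediaries, which the article identifies as the weak point of fragmented supply chains, cannot silently reintroduce a copied or already-sold marker into the legal chain.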
Nigeria, for instance, has demonstrated that with enough political will it is possible to reduce the proliferation of falsified and substandard medicine. The country had been a major source of falsified products, but things changed in 2001, when Dora Akunyili was appointed Director General of the National Agency for Food and Drug Administration and Control. Akunyili has a personal motivation for fighting falsified drugs: her sister Vivian, a diabetic patient, lost her life to fake insulin in 1988. Akunyili strengthened import controls, campaigned for public awareness, clamped down on counterfeit operations and pushed for harsher punishments [10,19]. Paul Orhii, Akunyili's successor, is committed to continuing her work [10]. Although there are no exact figures, various surveys indicate that the rate of bad-quality medicine has dropped considerably in Nigeria [10].

China is also addressing its drug-quality problems. In a highly publicized case, the former head of China's State Food and Drug Administration, Zheng Xiaoyu, was executed in 2007 after he was found guilty of accepting bribes to approve untested medicine. Since then, China's fight against falsified medicine has continued. As a result of heightened enforcement, the number of drug companies in China dwindled from 5,000 in 2004 to about 3,500 this year [2]. Moreover, in July 2012, more than 1,900 suspects were arrested for the sale of fake or counterfeit drugs.

Quality comes at a price, however. It is expensive to produce high-quality medicine, and it is expensive to control the production and distribution of drugs. Many low- and middle-income countries might not have the resources to tackle the problem and might not see the quality of medicine as a priority. But they should, and affluent countries should help, not only because health is a human right, but also for economic reasons: a great deal of time and money is invested in testing the safety and efficacy of medicines during drug development, and these resources are wasted when drugs do not reach patients. Falsified and substandard medicines are a financial burden on health systems, and the emergence of drug-resistant pathogens might render invaluable medications useless. Investing in the safety of medicine is therefore both a humane and an economic imperative.  相似文献

20.

Background

Each year, 540 million Chinese are exposed to secondhand smoke (SHS), resulting in more than 100,000 deaths. Smoke-free policies have been demonstrated to decrease overall cigarette consumption, encourage smokers to quit, and protect the health of nonsmokers. However, restrictions on smoking in China remain limited and ineffective. Internal tobacco industry documents show that transnational tobacco companies (TTCs) have pursued a multifaceted strategy for undermining the adoption of restrictions on smoking in many countries.

Methods and Findings

To understand company activities in China related to SHS, we analyzed British American Tobacco's (BAT's) internal corporate documents, produced in response to litigation against the major cigarette manufacturers. BAT has carried out an extensive strategy to undermine the health policy agenda on SHS in China by attempting to divert public attention from SHS issues towards liver disease prevention, by pushing the so-called "resocialisation of smoking" accommodation principles, and by providing "training" for industry, public officials and the media based on BAT's corporate position that SHS is an insignificant contributor to the larger issue of air pollution.

Conclusions

When seeking to enact stronger restrictions on smoking in public places, the public health community in China should be aware of the tactics previously used by TTCs, including efforts by the tobacco industry to co-opt prominent Chinese benevolent organizations.  相似文献
