Similar articles
20 similar articles found (search time: 31 ms)
1.
The 24-hour society appears to be an ineluctable move towards a social organisation in which time constraints no longer restrict human life. But what kind of 24-hour society do we need? At what cost? Are those costs acceptable and sustainable? Shift work, night work, and irregular and flexible working hours, together with new technologies, are the milestones of this epochal passage, of which shift workers are builders and victims at the same time. The borders between working and social time are no longer fixed and rigidly determined: not only is the link between workplace and working hours broken, but the value of working time also changes according to the different economic, productive, and social effects it can produce. What are the advantages and disadvantages for individuals, companies, and society? What is the cost/benefit ratio in terms of physical health, psychological well-being, and family and social life? Research on irregular working hours and health shows the negative consequences that working-time organisations which are not human-centred can have. Coping properly with this process means not accepting it passively, with the resulting maladjustment at both the individual and social level, but adopting effective preventive and compensatory strategies aimed at building a more sustainable society, at acceptable costs and with the highest possible benefits.

2.
* Quality Control (QC) in Point of Care Testing (PoCT) is often thought of as a complex issue; however, intelligent system analysis can simplify matters and greatly increase the chances of a well-controlled system. The goal is a QC program which adequately controls the PoCT system but does not excessively add to the operating costs or the complexity of maintaining a PoCT instrument, or a network of instruments.
* Do not neglect effective pre-analytical work: good documentation, operator training, monitoring, and analyser maintenance programs are essential, as for any analyser.
* Look closely at your analyser. Is it a "laboratory type" instrument, or cartridge- or strip-based? Can it perform multiple test types or a single test only? How is it calibrated? Does it have built-in self-check capabilities or an electronic check cartridge? Is the sample in contact with the instrument? What are the cartridge/strip/reagent storage requirements?
* Establish where the analysis is taking place and which system component is involved.
* Tailor your QC program to target this component, but still check the system as a whole.
* A common approach is to check cartridges/strips on delivery and run a QA sample at least monthly to check storage conditions and operator performance. If there is no independent electronic instrument check, daily QC checks are also recommended.
* Do not be afraid to stray beyond conventional QC models if necessary; some PoCT systems are not adequately controlled by the application of conventional QC alone.

3.
We describe a new program for the alignment of multiple biological sequences that is both statistically motivated and fast enough for problem sizes that arise in practice. Our Fast Statistical Alignment (FSA) program is based on pair hidden Markov models which approximate an insertion/deletion process on a tree, and uses a sequence annealing algorithm to combine the posterior probabilities estimated from these models into a multiple alignment. FSA uses its explicit statistical model to produce multiple alignments which are accompanied by estimates of the alignment accuracy and uncertainty for every column and character of the alignment (previously available only with alignment programs that use computationally expensive Markov chain Monte Carlo approaches), yet it can align thousands of long sequences. Moreover, FSA utilizes an unsupervised query-specific learning procedure for parameter estimation which leads to improved accuracy on benchmark reference alignments in comparison to existing programs. The centroid alignment approach taken by FSA, in combination with its learning procedure, drastically reduces the amount of false-positive alignment on biological data in comparison to that given by other methods. The FSA program and a companion visualization tool for exploring uncertainty in alignments can be used via a web interface at http://orangutan.math.berkeley.edu/fsa/, and the source code is available at http://fsa.sourceforge.net/.
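As a toy illustration of the posterior-based centroid idea, the sketch below runs a dynamic program that picks the pairwise alignment maximizing the summed match posteriors. The 3x3 posterior matrix is invented; FSA's actual posteriors come from pair HMMs on a tree and are combined across many sequences by sequence annealing, which this sketch does not implement.

```python
def posterior_decode(P):
    """Align two sequences by maximizing the total posterior
    probability of the matched columns (gaps contribute 0).
    P[i][j] = posterior that position i of seq1 pairs with position j of seq2."""
    n, m = len(P), len(P[0])
    # S[i][j] = best achievable score for the prefixes of length i and j
    S = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            S[i][j] = max(S[i-1][j-1] + P[i-1][j-1],  # match i-1 with j-1
                          S[i-1][j],                  # gap in seq2
                          S[i][j-1])                  # gap in seq1
    # Trace back to recover the matched pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if S[i][j] == S[i-1][j-1] + P[i-1][j-1]:
            pairs.append((i-1, j-1)); i, j = i-1, j-1
        elif S[i][j] == S[i-1][j]:
            i -= 1
        else:
            j -= 1
    return S[n][m], pairs[::-1]

# Invented posterior matrix with a confident diagonal
P = [[0.90, 0.05, 0.00],
     [0.05, 0.80, 0.10],
     [0.00, 0.10, 0.70]]
score, pairs = posterior_decode(P)
```

The per-cell match posteriors are exactly what lets FSA report column-level accuracy estimates alongside the alignment.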

4.
The spatial and temporal distribution of the spruce budworm is modelled by a nonlinear diffusion equation. Two questions are considered:
  1. What is the critical size of a patch of forest which can support an outbreak?
  2. What is the width of an effective barrier to spread of an outbreak?
Answers to these questions are obtained with the aid of comparison methods for nonlinear diffusion equations.
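Question 1 has a classic closed-form answer in the linearized setting: a patch with hostile boundaries sustains growth only if its length exceeds the critical size L* = pi * sqrt(D/r). This is an illustration only; the budworm model itself is nonlinear, and the parameter values below are hypothetical, not the paper's.

```python
import math

def critical_patch_length(D, r):
    """Critical domain size for the linearized reaction-diffusion model
    u_t = D*u_xx + r*u with u = 0 at the patch boundaries: the
    population persists only on patches longer than pi*sqrt(D/r)."""
    return math.pi * math.sqrt(D / r)

# Hypothetical values: dispersal D in km^2/yr, intrinsic growth r per yr
L_star = critical_patch_length(D=1.0, r=0.5)   # ~4.44 km
```

Comparison methods then bound the behaviour of the full nonlinear model between such tractable linear cases.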

5.

Background

Terminal restriction fragment length polymorphism (T-RFLP) analysis is a common DNA-fingerprinting technique used for comparisons of complex microbial communities. Although the technique is well established, there is no consensus on how to treat T-RFLP data to achieve the highest possible accuracy and reproducibility. This study focused on two critical steps in the T-RFLP data treatment: the alignment of the terminal restriction fragments (T-RFs), which enables comparisons of samples, and the normalization of T-RF profiles, which adjusts for differences in signal strength (total fluorescence) between samples.

Results

Variations in the estimation of T-RF sizes were observed, and these variations were found to affect the alignment of the T-RFs. A novel method was developed which improved the alignment by adjusting for systematic shifts in the T-RF size estimations between the T-RF profiles. Differences in total fluorescence were shown to be caused by differences in sample concentration and by gel loading. Five normalization methods were evaluated, and the total fluorescence normalization procedure based on peak height data was found to increase the similarity between replicate profiles the most. A high peak detection threshold, alignment correction, normalization, and the use of consensus profiles instead of single profiles increased the similarity of replicate T-RF profiles, i.e., led to increased reproducibility. The impact of different treatment methods on the outcome of subsequent analyses of T-RFLP data was evaluated using a dataset from a longitudinal study of the bacterial community in an activated sludge wastewater treatment plant. Whether the alignment was corrected or not, and if and how the T-RF profiles were normalized, had a substantial impact on ordination analyses, assessments of bacterial dynamics, and analyses of correlations with environmental parameters.
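A height-based total-fluorescence normalization of the kind evaluated here can be sketched as follows. The threshold and profile values are hypothetical, and the published workflow includes further steps (alignment correction, consensus profiles) not shown.

```python
def normalize_profiles(profiles, threshold=50):
    """Total-fluorescence normalization of T-RF peak-height profiles.
    Each profile maps T-RF size (bp) -> peak height. Peaks below the
    detection threshold are dropped, then heights are scaled so that
    every profile sums to the same total (the smallest observed one)."""
    filtered = [{trf: h for trf, h in p.items() if h >= threshold}
                for p in profiles]
    totals = [sum(p.values()) for p in filtered]
    target = min(totals)
    return [{trf: h * target / tot for trf, h in p.items()}
            for p, tot in zip(filtered, totals)]

# Two replicate profiles differing only in loading / signal strength;
# the 30-unit peak is below the detection threshold (noise)
a = {102: 400.0, 157: 900.0, 233: 30.0}
b = {102: 800.0, 157: 1800.0}
norm_a, norm_b = normalize_profiles([a, b])
```

After normalization the two replicates become identical, which is exactly the similarity gain the study measures.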

Conclusions

A novel method for the evaluation and correction of the alignment of T-RF profiles was shown to reduce the uncertainty and ambiguity in alignments of T-RF profiles. Large differences in the outcome of assessments of bacterial community structure and dynamics were observed between different alignment and normalization methods. The results of this study can therefore be of value when considering what methods to use in the analysis of T-RFLP data.

Electronic supplementary material

The online version of this article (doi:10.1186/s12859-014-0360-8) contains supplementary material, which is available to authorized users.

6.
Multiple sequence alignment tools struggle to keep pace with rapidly growing sequence data, as few methods can handle large datasets while maintaining alignment accuracy. We recently introduced MAGUS, a new state-of-the-art method for aligning large numbers of sequences. In this paper, we present a comprehensive set of enhancements that allow MAGUS to align vastly larger datasets with greater speed. We compare MAGUS to other leading alignment methods on datasets of up to one million sequences. Our results demonstrate the advantages of MAGUS over other alignment software in both accuracy and speed. MAGUS is freely available in open-source form at https://github.com/vlasmirnov/MAGUS.

7.
MOTIVATION: What constitutes a baseline level of success for protein fold recognition methods? As fold recognition benchmarks are often presented without any thought to the results that might be expected from a purely random set of predictions, an analysis of fold recognition baselines is long overdue. Given varying amounts of basic information about a protein, ranging from the length of the sequence to knowledge of its secondary structure, to what extent can the fold be determined by intelligent guesswork? Can simple methods that make use of secondary structure information assign folds more accurately than purely random methods, and could these methods be used to construct viable hierarchical classifications? EXPERIMENTS PERFORMED: A number of rapid automatic methods that score similarities between protein domains were devised and tested. These methods ranged from those incorporating no secondary structure information, such as measuring absolute differences in sequence lengths, to more complex alignments of secondary structure elements. Each method was assessed for accuracy by comparison with the Class Architecture Topology Homology (CATH) classification. Methods were rated against both a random baseline fold assignment method as a lower control and FSSP as an upper control. Similarity trees were constructed in order to evaluate the accuracy of the optimum methods at producing a classification of structure. RESULTS: Using a rigorous comparison of methods with CATH, the random fold assignment method set a lower baseline of 11% true positives, allowing for 3% false positives, and FSSP set an upper benchmark of 47% true positives at 3% false positives. The optimum secondary structure alignment method used here achieved 27% true positives at 3% false positives. Using a less rigorous Critical Assessment of Structure Prediction (CASP)-like sensitivity measurement, the random assignment achieved 6%, FSSP 59%, and the optimum secondary structure alignment method 32%. Similarity trees produced by the optimum method illustrate that these methods cannot be used alone to produce a viable protein structural classification system. CONCLUSIONS: Simple methods that use perfect secondary structure information to assign folds cannot produce an accurate protein taxonomy; however, they do provide useful baselines for fold recognition. In terms of a typical CASP assessment, our results suggest that approximately 6% of targets with folds in the databases could be assigned correctly by random guessing, and as many as 32% could be recognised by trivial secondary structure comparison methods, given knowledge of their correct secondary structures.
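Figures of the form "27% true positives at 3% false positives" can be reproduced in miniature: set the score cutoff from the distribution of false-pair scores, then count the fraction of true pairs above it. The scores below are invented purely for illustration and are not the paper's data.

```python
def tp_rate_at_fp(scores_true, scores_false, max_fp=0.03):
    """Fraction of true (same-fold) pairs recovered when the score
    threshold is set so that at most `max_fp` of the false
    (different-fold) pairs score above it."""
    allowed = round(max_fp * len(scores_false))
    # Cutoff = score of the (allowed+1)-th best false pair
    cutoff = sorted(scores_false, reverse=True)[allowed]
    return sum(s > cutoff for s in scores_true) / len(scores_true)

# Invented similarity scores: 100 true pairs, 100 false pairs
true_scores  = [0.9] * 27 + [0.3] * 73
false_scores = [0.95] * 3 + [0.4] * 97
rate = tp_rate_at_fp(true_scores, false_scores, max_fp=0.03)
```

Comparing this number for a random scorer, a simple scorer, and a structural aligner gives exactly the lower/upper-control layout used in the paper.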

8.
The functions of RNAs, like those of proteins, are determined by their structures, which, in turn, are determined by their sequences. Comparison and alignment of RNA molecules provides an effective means to predict their functions and understand their evolutionary relationships. For RNA sequence alignment, most methods developed for protein and DNA sequence alignment can be applied directly. RNA 3-dimensional structure alignment, on the other hand, tends to be more difficult than protein structure alignment due to the lack of the regular secondary structures observed in proteins. Most existing RNA 3D structure alignment methods use only the backbone geometry and ignore the sequence information. Using both the sequence and backbone geometry information in RNA alignment may not only produce more accurate classification, but also deepen our understanding of the sequence–structure–function relationship of RNA molecules. In this study, we developed a new RNA alignment method based on elastic shape analysis (ESA). ESA treats RNA structures as three-dimensional curves with sequence information encoded on additional dimensions, so that the alignment can be performed in the joint sequence–structure space. The similarity between two RNA molecules is quantified by a formal distance, the geodesic distance. Based on ESA, a rigorous mathematical framework can be built for RNA structure comparison. Means and covariances of full structures can be defined and computed, and probability distributions on spaces of such structures can be constructed for a group of RNAs. Our method was further applied to predict functions of RNA molecules and showed superior performance compared with previous methods when tested on benchmark datasets. The programs are available at http://stat.fsu.edu/~jinfeng/ESA.html.
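Elastic shape analysis is commonly formulated through the square-root velocity function (SRVF) of a curve. The sketch below computes a naive L2 distance between SRVFs of two equally sampled curves; it omits the optimization over rotations and reparameterizations that yields the true geodesic distance, and it ignores the extra sequence dimensions, so treat it as a rough stand-in only.

```python
import math

def srvf(curve):
    """Square-root velocity function of a discrete 3D curve:
    q_i = v_i / sqrt(|v_i|) for each edge vector v_i.
    A crude discretization of the SRVF used in elastic shape analysis."""
    q = []
    for (x0, y0, z0), (x1, y1, z1) in zip(curve, curve[1:]):
        v = (x1 - x0, y1 - y0, z1 - z0)
        n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2) or 1.0  # guard zero edges
        q.append(tuple(c / math.sqrt(n) for c in v))
    return q

def srvf_distance(c1, c2):
    """Naive L2 distance between SRVFs of equally sampled curves,
    a stand-in for the geodesic distance of full ESA."""
    return math.sqrt(sum((a - b) ** 2
                         for qa, qb in zip(srvf(c1), srvf(c2))
                         for a, b in zip(qa, qb)))

# Toy backbones: a helix-like curve versus a straight line
helix = [(math.cos(t / 2), math.sin(t / 2), 0.3 * t) for t in range(8)]
line  = [(0.0, 0.0, 0.5 * t) for t in range(8)]
d_same = srvf_distance(helix, helix)
d_diff = srvf_distance(helix, line)
```

In the full framework, minimizing this distance over reparameterizations is what makes the comparison "elastic".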

9.

Consumers increasingly demand information about the environmental impacts of their food. The French government is in the process of introducing environmental labelling for all food products. A scientific council was set up, and its main conclusions are presented in this article through six questions: What environmental issues should be considered? What objective should be targeted? What data are needed, and for whom? What methods should be used to assess environmental impacts? Which environmental scores should be chosen? What label format should be proposed? By answering these questions and considering the context, the available data, the proposed methods and adjustments, and what is known about consumer perception of formats, the scientific council considers that a labelling scheme is feasible and relevant.


10.
The STAR project: context, objectives and approaches
STAR is a European Commission Framework V project (EVK1-CT-2001-00089). The project aim is to provide practical advice and solutions with regard to many of the issues associated with the Water Framework Directive. This paper provides a context for the STAR research programme through a review of the requirements of the directive and the Common Implementation Strategy responsible for guiding its implementation. The scientific and strategic objectives of STAR are set out in the form of a series of research questions and the reader is referred to the papers in this volume that address those objectives, which include: (a) Which methods or biological quality elements are best able to indicate certain stressors? (b) Which method can be used on which scale? (c) Which method is suited for early and late warnings? (d) How are different assessment methods affected by errors and uncertainty? (e) How can data from different assessment methods be intercalibrated? (f) How can the cost-effectiveness of field and laboratory protocols be optimised? (g) How can boundaries of the five classes of Ecological Status be best set? (h) What contribution can STAR make to the development of European standards? The methodological approaches adopted to meet these objectives are described. These include the selection of the 22 stream-types and 263 sites sampled in 11 countries, the sampling protocols used to sample and survey phytobenthos, macrophytes, macroinvertebrates, fish and hydromorphology, the quality control and uncertainty analyses that were applied, including training, replicate sampling and audit of performance, the development of bespoke software and the project outputs. This paper provides the detailed background information to be referred to in conjunction with most of the other papers in this volume. 
These papers are divided into seven sections: (1) typology, (2) organism groups, (3) macrophytes and diatoms, (4) hydromorphology, (5) tools for assessing European streams with macroinvertebrates, (6) intercalibration and comparison and (7) errors and uncertainty. The principal findings of the papers in each section and their relevance to the Water Framework Directive are synthesised in short summary papers at the beginning of each section. Additional outputs, including all sampling and laboratory protocols and project deliverables, together with a range of freely downloadable software are available from the project website at www.eu_star.at.

11.
Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data.
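Family-level inference and within-family averaging reduce to simple sums over posterior model probabilities. Below is a minimal sketch with invented log evidences and parameter estimates; in real DCM applications the evidences come from variational inference, and the model prior need not be flat as assumed here.

```python
import math

def family_bma(log_evidence, theta, families):
    """Family-level inference plus Bayesian model averaging (BMA).
    log_evidence[m]: log model evidence of model m (flat model prior);
    theta[m]: model m's estimate of the parameter of interest;
    families: dict family_name -> list of model names.
    Returns (posterior family probabilities, BMA estimate per family)."""
    mx = max(log_evidence.values())                 # for numerical stability
    w = {m: math.exp(le - mx) for m, le in log_evidence.items()}
    z = sum(w.values())
    post = {m: v / z for m, v in w.items()}         # P(model | data)
    fam_post = {f: sum(post[m] for m in ms) for f, ms in families.items()}
    fam_bma = {f: sum(post[m] * theta[m] for m in ms) / fam_post[f]
               for f, ms in families.items()}
    return fam_post, fam_bma

# Hypothetical example: two "serial" and two "parallel" models
le = {"s1": -100.0, "s2": -101.0, "p1": -104.0, "p2": -105.0}
th = {"s1": 0.5, "s2": 0.7, "p1": 0.1, "p2": 0.2}
fams = {"serial": ["s1", "s2"], "parallel": ["p1", "p2"]}
fam_post, fam_bma = family_bma(le, th, fams)
```

Here the "serial" family dominates even though no single serial model need be overwhelmingly best, which is exactly the robustness the family-level approach buys.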

12.
Microwaves are electromagnetic waves with frequencies between 300 MHz and 300 GHz, corresponding to wavelengths between 1 m and 1 mm, respectively. Microwaves interact with a wide variety of materials; in particular, they can be used to heat dielectric materials, and diffusion and chemical-reaction rates are influenced by the temperature increase. Many authors believe that, if microwave irradiation is optimally applied, the resulting microscopic images are of superior quality, because of good process control. In order to develop good microwave recipes for EM it is important to face the following questions:
  1. What is the influence of microwaves on the reagents?
  2. What are the basic mechanisms behind the procedure?
  3. What is the influence of temperature increase on the reaction rates?
  4. What is the optimal temperature?
  5. Does microwave irradiation cause destruction of, for instance, proteins or membranes?
  6. How should the microwave oven be programmed? How does the load (container with reagent, if any, and specimen) influence the microwave irradiation? How should the container be placed in the oven?

13.
Multiple sequence alignment is a classical and challenging task: the problem is NP-hard, full dynamic programming takes too much time, and the progressive alignment heuristics adopted by most state-of-the-art tools suffer from the "once a gap, always a gap" phenomenon. Is there a radically new way to do multiple sequence alignment? In this paper, we introduce a novel and orthogonal multiple sequence alignment method that uses multiple optimized spaced seeds together with new algorithms to handle these seeds efficiently. Our algorithm processes the information of all sequences as a whole and builds the alignment vertically, avoiding the problems caused by the popular progressive approaches. Because optimized spaced seeds have proved significantly more sensitive than consecutive k-mers, the new approach promises to be more accurate and reliable. To validate it, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments on large 16S RNA benchmarks show that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0, and Kalign 2.0. We have further demonstrated the scalability of MANGO on very large datasets of repeat elements. MANGO can be downloaded at http://www.bioinfo.org.cn/mango/ and is free for academic use.
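The sensitivity advantage of spaced seeds comes from their wildcard positions: a substitution falling on a '0' position does not destroy the hit. A minimal matcher for a toy seed (not one of MANGO's optimized seeds) illustrates this:

```python
def spaced_seed_hits(s1, s2, seed="11011"):
    """Find window pairs (i, j) where two sequences agree at every '1'
    position of a spaced seed pattern; '0' positions are wildcards.
    Brute-force for clarity; real tools index the seeded k-mers."""
    care = [k for k, c in enumerate(seed) if c == "1"]
    hits = []
    for i in range(len(s1) - len(seed) + 1):
        for j in range(len(s2) - len(seed) + 1):
            if all(s1[i + k] == s2[j + k] for k in care):
                hits.append((i, j))
    return hits

# The G/C mismatch falls on the wildcard position, so the hit survives;
# a contiguous 5-mer seed ("11111") would have missed it
hits = spaced_seed_hits("ACGTA", "ACCTA", seed="11011")
```

Hits from several such seeds are the anchors from which MANGO builds alignment columns vertically.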

14.

Background

Obtaining an accurate sequence alignment is fundamental for consistently analyzing biological data. Although this problem can be solved efficiently when only two sequences are considered, exact inference of the optimal alignment quickly becomes computationally intractable in the multiple sequence alignment case. To cope with the high computational expense, approximate heuristic methods have been proposed that address the problem indirectly by progressively aligning the sequences in pairs according to their relatedness. These methods, however, cannot revise the alignment of an already aligned group of sequences in view of new data, and thus compromise the quality of the resulting alignment. In this paper we present ReformAlign, a novel meta-alignment approach that can significantly improve the quality of the alignments produced by popular aligners. We call ReformAlign a meta-aligner because it requires an initial alignment, which a variety of alignment programs can supply. The main idea behind ReformAlign is straightforward: first, the existing alignment is used to construct a standard profile that summarizes it; then every sequence is individually re-aligned against this profile. From each sequence-profile comparison, the alignment of that sequence against the profile is recorded, and the final alignment is inferred indirectly by merging all the individual sub-alignments into a unified set. ReformAlign often yields alignments that are significantly more accurate than the starting alignments.
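The profile-construction step can be sketched as per-column residue frequencies computed from the initial alignment; ReformAlign's actual profile representation and sequence-profile alignment scoring are more elaborate than this illustration.

```python
from collections import Counter

def column_profile(alignment):
    """Summarize an alignment (a list of equal-length gapped strings)
    as per-column residue frequencies: the kind of 'standard profile'
    that each sequence can then be re-aligned against."""
    ncols = len(alignment[0])
    profile = []
    for c in range(ncols):
        col = [seq[c] for seq in alignment]
        counts = Counter(col)
        profile.append({res: n / len(col) for res, n in counts.items()})
    return profile

# A tiny initial alignment; '-' marks gaps
aln = ["AC-GT",
       "ACGGT",
       "AC-GA"]
prof = column_profile(aln)
```

Re-aligning each sequence against `prof` independently (and merging the results) is what frees the meta-aligner from the "once a gap, always a gap" commitment of the initial progressive alignment.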

Results

We evaluated the effect of ReformAlign on the alignments generated by ten leading alignment methods, using real data of variable size and sequence identity. The experimental results suggest that the proposed meta-aligner approach may often lead to statistically significant improvements in alignment accuracy. Furthermore, we show that ReformAlign delivers the most substantial improvements when the starting alignment is of relatively low quality or when the input sequences are harder to align.

Conclusions

The proposed profile-based meta-alignment approach seems to be a promising and computationally efficient method that can be combined with practically all popular alignment methods and may lead to significant improvements in the generated alignments.

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2105-15-265) contains supplementary material, which is available to authorized users.

15.
The journal Primates was founded by Kinji Imanishi (1902–1992) in 1957; it is the oldest and longest-running international primatology journal in the world. In this series of dialogues between Tetsuro Matsuzawa, Editor-in-Chief of Primates and General Director of the Japan Monkey Centre (JMC), and Juichi Yamagiwa, former Editor-in-Chief of Primates and Museum Director of the JMC, we look back at the achievements of our spiritual ancestors in primate research and discuss the back story of Imanishi and his fellow primatologists: founding the JMC as a research institute focused on primates and launching this journal. What was their motivation? What challenges did they face? What is their continued influence on the field right up to the present? What will be the legacy of our own influence on the discipline?

16.

Background

Continuing professional development (CPD) is one of the principal means by which health professionals (i.e. primary care physicians and specialists) maintain, improve, and broaden the knowledge and skills required for optimal patient care and safety. However, the lack of a widely accepted instrument to assess the impact of CPD activities on clinical practice thwarts researchers' comparisons of the effectiveness of CPD activities. Using an integrated model for the study of healthcare professionals' behaviour, our objective is to develop a theory-based, valid, reliable global instrument to assess the impact of accredited CPD activities on clinical practice.

Methods

Phase 1: We will analyze the instruments identified in a systematic review of factors influencing health professionals' behaviours using criteria that reflect the literature on measurement development and CPD decision makers' priorities. The outcome of this phase will be an inventory of instruments based on social cognitive theories. Phase 2: Working from this inventory, the most relevant instruments and their related items for assessing the concepts listed in the integrated model will be selected. Through an e-Delphi process, we will verify whether these instruments are acceptable, what aspects need revision, and whether important items are missing and should be added. The outcome of this phase will be a new global instrument integrating the most relevant tools to fit our integrated model of healthcare professionals' behaviour. Phase 3: Two data collections are planned: (1) a test-retest of the new instrument, including item analysis, to assess its reliability and (2) a study using the instrument before and after CPD activities with a randomly selected control group to explore the instrument's mere-measurement effect. Phase 4: We will conduct individual interviews and focus groups with key stakeholders to identify anticipated barriers and enablers for implementing the new instrument in CPD practice. Phase 5: Drawing on the results from the previous phases, we will use consensus-building methods to develop with the decision makers a plan to implement the new instrument.

Discussion

This project proposes to give stakeholders a theory-based global instrument to validly and reliably measure the impacts of CPD activities on clinical practice, thus laying the groundwork for more targeted and effective knowledge-translation interventions in the future.

17.
Joo Young Yoon, Jeonghun Yeom, Heebum Lee, Kyutae Kim, Seungjin Na, Kunsoo Park, Eunok Paek and Cheolju Lee. BMC Bioinformatics 2011, 12(1):1-12

Background

Continuing research into the global multiple sequence alignment problem has resulted in more sophisticated and principled alignment methods. Unfortunately these new algorithms often require large amounts of time and memory to run, making it nearly impossible to run these algorithms on large datasets. As a solution, we present two general methods, Crumble and Prune, for breaking a phylogenetic alignment problem into smaller, more tractable sub-problems. We call Crumble and Prune meta-alignment methods because they use existing alignment algorithms and can be used with many current alignment programs. Crumble breaks long alignment problems into shorter sub-problems. Prune divides the phylogenetic tree into a collection of smaller trees to reduce the number of sequences in each alignment problem. These methods are orthogonal: they can be applied together to provide better scaling in terms of sequence length and in sequence depth. Both methods partition the problem such that many of the sub-problems can be solved independently. The results are then combined to form a solution to the full alignment problem.

Results

Crumble and Prune each provide a significant performance improvement with little loss of accuracy. In some cases, a gain in accuracy was observed. Crumble and Prune were tested on real and simulated data. Furthermore, we have implemented a system called Job-tree that allows hierarchical sub-problems to be solved in parallel on a compute cluster, significantly shortening the run-time.

Conclusions

These methods enabled us to solve gigabase alignment problems, and they could enable a new generation of biologically realistic alignment algorithms to be applied to real-world, large-scale alignment problems.

18.
When two sequences are aligned with a single set of alignment parameters, or when mutation parameters are estimated on the basis of a single 'optimal' sequence alignment, the variability of both the alignment and the estimated parameters can be seriously underestimated. To obtain a more realistic impression of the actual uncertainty, we propose sampling sequence alignments and mutation parameters simultaneously from their joint posterior distribution given the two original sequences. We illustrate our method with human and orangutan sequences from the hypervariable region I and with gene–pseudogene pairs. Received: 16 November 2000 / Accepted: 15 May 2001

19.
Highlights
  • Retention time shift can lead to inversion of the elution order of peptides.
  • Global alignment methods are suboptimal for alignment of distant runs.
  • DIAlignR employs a hybrid (global + local) RT alignment approach.
  • DIAlignR can align swapped peaks accurately across distant runs.
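The hybrid idea, a global fit constraining a local search, can be sketched as a least-squares line over shared anchor retention times followed by nearest-peak matching within a tolerance. The numbers and tolerance below are hypothetical, and DIAlignR's local step is a full chromatogram alignment rather than nearest-neighbor matching.

```python
def global_linear_fit(rt_ref, rt_run):
    """Least-squares line rt_run ~ a*rt_ref + b fitted to shared anchor
    peptides: a crude stand-in for a global retention-time fit."""
    n = len(rt_ref)
    mx = sum(rt_ref) / n
    my = sum(rt_run) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(rt_ref, rt_run))
         / sum((x - mx) ** 2 for x in rt_ref))
    return a, my - a * mx

def hybrid_align(rt_ref, rt_run, peaks_ref, peaks_run, tol=2.0):
    """Hybrid global+local matching sketch: map reference peak RTs
    through the global fit, then locally pick the nearest run peak
    within `tol` minutes of the predicted position."""
    a, b = global_linear_fit(rt_ref, rt_run)
    matches = {}
    for p in peaks_ref:
        predicted = a * p + b
        best = min(peaks_run, key=lambda q: abs(q - predicted))
        if abs(best - predicted) <= tol:
            matches[p] = best
    return matches

# Anchor RTs capture a ~5% stretch plus an offset between the two runs;
# the distant peak at 60.0 min is correctly rejected
anchors_ref = [10.0, 20.0, 30.0, 40.0]
anchors_run = [12.0, 22.5, 33.0, 43.5]
m = hybrid_align(anchors_ref, anchors_run,
                 peaks_ref=[25.0], peaks_run=[27.5, 60.0])
```

Restricting the local search to a band around the global fit is what lets swapped, closely spaced peaks be resolved without drifting to spurious matches in distant runs.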

20.
In this paper I argue that we can learn much about ‘wild justice’ and the evolutionary origins of social morality – behaving fairly – by studying social play behavior in group-living animals, and that interdisciplinary cooperation will help immensely. In our efforts to learn more about the evolution of morality we need to broaden our comparative research to include animals other than non-human primates. If one is a good Darwinian, it is premature to claim that only humans can be empathic and moral beings. By asking the question ‘What is it like to be another animal?’ we can discover rules of engagement that guide animals in their social encounters. When I study dogs, for example, I try to be a ‘dogocentrist’ and practice ‘dogomorphism.’ My major arguments center on the following ‘big’ questions: Can animals be moral beings or do they merely act as if they are? What are the evolutionary roots of cooperation, fairness, trust, forgiveness, and morality? What do animals do when they engage in social play? How do animals negotiate agreements to cooperate, to forgive, to behave fairly, to develop trust? Can animals forgive? Why cooperate and play fairly? Why did play evolve as it has? Does ‘being fair’ mean being more fit – do individual variations in play influence an individual's reproductive fitness, and are more virtuous individuals more fit than less virtuous individuals? What is the taxonomic distribution of the cognitive skills and emotional capacities necessary for individuals to be able to behave fairly, to empathize, to behave morally? Can we use information about moral behavior in animals to help us understand ourselves? I conclude that there is strong selection for cooperative fair play in which individuals establish and maintain a social contract to play, because there are mutual benefits when individuals adopt this strategy, and group stability may also be fostered. Numerous mechanisms have evolved to facilitate the initiation and maintenance of social play and to keep others engaged, so that agreeing to play fairly, and the resulting benefits of doing so, can be readily achieved. I also claim that the ability to make accurate predictions about what an individual is likely to do in a given social situation is a useful litmus test for explaining what might be happening in an individual's brain during social encounters, and that intentional or representational explanations are often important for making these predictions.
