Similar Literature
Retrieved 20 similar records (search time: 15 ms)
1.
Human factors play a critical role in the safe and clean operation of the maritime industry. Human error prediction can be beneficial for assessing risk in the maritime industry, since shipping activities pose potential hazards to human life and the marine ecology. The aim of this paper is to propose a risk assessment tool that accounts for the role of human factors, so that the desired safety control level in maritime transportation activities can be ascertained. In the proposed approach, the Success Likelihood Index Method (SLIM), extended with fuzzy logic, is used to calculate human error probability (HEP). Severity of consequences is incorporated into the approach to assess risk. The quantitative risk assessment approach under the fuzzy SLIM methodology is applied to a very specific case on board ship: the Ballast Water Treatment (BWT) system. To improve the consistency of the research and minimize the subjectivity of experts' judgments, the paper adopts a dominance factor, which is used to adjust the impact level of experts' judgments in the aggregation stage of the methodology. The paper aims not only to highlight the importance of human factors in maritime risk assessment but also to enhance the safety control level and minimize potential environmental impacts on the marine ecology.
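The abstract does not give SLIM's core calculation, but the standard method derives a Success Likelihood Index (SLI) as a weighted sum of performance shaping factor (PSF) ratings and maps it to an HEP via the calibration log10(HEP) = a·SLI + b, anchored on two tasks of known HEP. A minimal sketch; the weights, ratings, and anchor values below are illustrative, not taken from the paper:

```python
import math

def sli(weights, ratings):
    """Success Likelihood Index: weighted sum of PSF ratings (0-1 scale)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "PSF weights must sum to 1"
    return sum(w * r for w, r in zip(weights, ratings))

def calibrate(sli1, hep1, sli2, hep2):
    """Solve log10(HEP) = a*SLI + b from two anchor tasks with known HEPs."""
    a = (math.log10(hep1) - math.log10(hep2)) / (sli1 - sli2)
    b = math.log10(hep1) - a * sli1
    return a, b

def hep(sli_value, a, b):
    """Human error probability for a task with the given SLI."""
    return 10 ** (a * sli_value + b)

# Illustrative anchors: a well-supported task (SLI 0.9, HEP 1e-4)
# and a poorly supported one (SLI 0.2, HEP 1e-1).
a, b = calibrate(0.9, 1e-4, 0.2, 1e-1)
task_sli = sli([0.4, 0.3, 0.3], [0.8, 0.6, 0.7])  # hypothetical PSF data
task_hep = hep(task_sli, a, b)
```

The fuzzy extension in the paper replaces the crisp ratings with fuzzy numbers before defuzzification; the calibration step is unchanged.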

2.
Shale gas fracturing is a complex system of continuous operation. If human errors occur, they can cause a chain reaction from abnormal events to fracturing accidents, and even lead to the abandonment of shale gas wells. The fracturing process has many complex production stages, the consequence of any error is serious, and the human error modes are changeable. Human error should therefore be studied systematically, within a hybrid framework that fully integrates identification, prioritization, reasoning, and control. This article presents a new structured method for identifying human error in a hybrid framework for shale gas fracturing operations. First, human error is structurally identified based on the human HAZOP method. Second, the fuzzy VIKOR method is applied for comprehensive prioritization. Finally, 4M element theory is used to analyze human error and control its evolution. The method improves the consistency of the identification results through standardized identification steps and criteria. Results from a study of the feed-flow process indicate that 34 kinds of human error can be identified, and that high-probability errors occur in implementation and observation behaviors.
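The VIKOR step ranks error modes by a compromise index Q combining group utility S and individual regret R. A crisp VIKOR sketch over an illustrative decision matrix (rows = error modes, columns = benefit-type criteria; the fuzzy variant in the paper layers fuzzy arithmetic on top of the same structure):

```python
import numpy as np

def vikor(scores, weights, v=0.5):
    """Rank alternatives (rows) over benefit criteria (cols).

    Returns the compromise index Q; lower Q = higher priority.
    v weights group utility against individual regret.
    """
    f_best = scores.max(axis=0)            # ideal value per criterion
    f_worst = scores.min(axis=0)           # anti-ideal value per criterion
    d = weights * (f_best - scores) / (f_best - f_worst)
    S = d.sum(axis=1)                      # group utility (weighted sum of gaps)
    R = d.max(axis=1)                      # individual regret (worst gap)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q

# Hypothetical matrix: three error modes scored on two criteria.
scores = np.array([[0.9, 0.8],
                   [0.5, 0.4],
                   [0.1, 0.2]])
weights = np.array([0.6, 0.4])
Q = vikor(scores, weights)
```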

3.
Satellite telemetry using ARGOS platform transmitter terminals (PTTs) is widely used to track the movements of animals, but little is known of the accuracy of these systems when used on active terrestrial mammals. An accurate estimate of the error, and therefore of the limitations of the data, is critical when assessing the level of confidence in results. ARGOS provides published 68th percentile error estimates for the three most accurate location classes (LCs), but studies have shown that the errors can be far greater when the devices are attached to free-living animals. Here we use data from a study of the habitat use of the spectacled flying-fox in the wet tropics of Queensland to calculate these errors for all LCs in free-living terrestrial mammals, and use the results to assess what level of confidence we would have in habitat-use assignment in the study area. Our calculated 68th percentile errors were larger than the published ARGOS errors for all LCs, and for all classes the error frequency had a very long tail. The habitat-use results showed that the size of the error, relative to the scale of the habitat in which the study was conducted, makes it unlikely that our data can be used to assess habitat use with great confidence. Overall, our results show that while satellite telemetry is useful for assessing large-scale movements of animals, in complex landscapes it may not be accurate enough for finer-scale analyses, including habitat-use assessment.
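Percentile errors of the kind compared here can be computed directly from fixes collected at a site of known position. A sketch, assuming planar coordinates in metres; the location-class labels and points are hypothetical, not the study's data:

```python
import numpy as np

def error_percentiles(true_xy, fixes, pct=68):
    """Per-location-class error percentile (same units as the coordinates).

    true_xy: known (x, y) of the test site.
    fixes:   dict mapping location-class label -> list of (x, y) fixes.
    """
    out = {}
    for lc, pts in fixes.items():
        pts = np.asarray(pts, dtype=float)
        dists = np.hypot(pts[:, 0] - true_xy[0], pts[:, 1] - true_xy[1])
        out[lc] = float(np.percentile(dists, pct))
    return out

# Hypothetical fixes (metres from a known roost site) for one location class.
errors = error_percentiles((0.0, 0.0), {"LC3": [(100, 0), (0, 200), (300, 0)]})
```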

4.
The joint development of polymorphic molecular markers and paternity analysis methods provides new approaches to investigate ongoing patterns of pollen flow in natural plant populations. However, paternity studies are hindered by false paternity assignment and the nondetection of true fathers. To gauge the risk of these two types of errors, we performed a simulation study to investigate the impact on paternity analysis of: (i) the assumed values for the size of the breeding male population (NBMP), and (ii) the rate of scoring error in genotype assessment. Our simulations were based on microsatellite data obtained from a natural population of the entomophilous wild service tree, Sorbus torminalis (L.) Crantz. We show that an accurate estimate of NBMP is required to minimize both types of errors, and we assess the reliability of a technique used to estimate NBMP based on parent-offspring genetic data. We then show that scoring errors in genotype assessment only slightly affect the assessment of paternity relationships, and conclude that it is generally better to neglect the scoring error rate in paternity analyses within a nonisolated population.

5.
6.
With the ongoing development of the petrochemical industry and the growing demand for oil, polycyclic aromatic hydrocarbon (PAH) pollution in the environment, especially in petroleum exploitation areas, is caused by the discharge of waste from the petroleum extraction process into the environmental system. This study aims to develop a new health risk assessment approach, based on an interval dynamic multimedia fugacity (IDMF) model and uncertainty analysis, that can analyze the human exposure risk level for PAH contamination. The developed IDMF health risk assessment (IDMHRA) approach is applied to assess past, current, and future risks at a case study site in Daqing, Heilongjiang, China, from 1985 to 2020 for model validation. The human health risk assessment results show that 11 PAHs (NAP, ANT, FLA, PYR, BaA, CHR, BbF, BkF, BaP, IPY, and DBA) at the study site require further remediation efforts in view of their unacceptable non-carcinogenic and carcinogenic risks. The risk source analysis reveals that the soil medium is the main risk pathway compared with the other exposure pathways, so remediation of soil contamination at the study site is urgently needed. The assessment results demonstrate that the developed IDMHRA approach provides an effective tool for decision-makers and environmental managers making remediation decisions at contaminated sites.
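The carcinogenic-risk screening behind a verdict of "unacceptable risk" typically follows the chronic-daily-intake pattern: intake from an exposure pathway times a slope factor. A much-simplified soil-ingestion sketch; the IDMF model itself is far more involved, and the defaults below are generic EPA-style placeholders, not the study's values:

```python
def soil_ingestion_cdi(c_soil, ir=100e-6, ef=350, ed=30, bw=60, at=70 * 365):
    """Chronic daily intake (mg/kg-day) via incidental soil ingestion.

    c_soil: soil concentration (mg/kg); ir: ingestion rate (kg/day);
    ef: exposure frequency (days/yr); ed: exposure duration (yr);
    bw: body weight (kg); at: averaging time (days). Defaults are
    illustrative placeholders only.
    """
    return c_soil * ir * ef * ed / (bw * at)

def cancer_risk(cdi, slope_factor):
    """Incremental lifetime cancer risk; values above roughly 1e-6 to 1e-4
    are conventionally treated as unacceptable."""
    return cdi * slope_factor

# Hypothetical: 1 mg/kg BaP in soil, with an assumed oral slope factor of 7.3.
cdi = soil_ingestion_cdi(1.0)
risk = cancer_risk(cdi, 7.3)
```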

7.
The public understands and supports the ethical use of human subjects in medical research, recognizing the unique role of this type of study in the development of new drugs and therapeutic strategies for the treatment of disease. The use of data from human subjects can also be of value in understanding the circumstances under which individuals exposed to chemicals in the food supply, in the workplace, or in the environment might experience toxicity, i.e., in support of risk assessment. However, questions have been raised as to whether this latter type of research is ethical, or can be performed in an ethical manner. Under what circumstances is it acceptable to intentionally expose human subjects to potentially toxic agents? This is an extremely important issue for the risk assessment community to address, because it affects in a fundamental way the types of information that will be available for conducting human health risk assessments. Four papers in this issue offer viewpoints on the value of human data, the circumstances under which human subjects might be exposed to toxic chemicals for research purposes, the ethical problems associated with this research, and the role of human vs. animal data in the development of toxicity values for human health risk assessment.

8.
The use of animal vs. human data for establishing human risk was examined for four pharmaceutical compounds: acetylsalicylic acid (ASA), cyclophosphamide, indomethacin, and clofibric acid. Literature searches were conducted to identify preclinical and clinical data useful for deriving acceptable daily intakes (ADIs), from which a number of risk values, including occupational exposure limits (OELs), could be calculated. OELs were calculated using human data and then again using animal data exclusively. For two compounds, ASA and clofibric acid, use of animal data alone led to higher OELs (not health protective), while for indomethacin and cyclophosphamide, use of animal data resulted in OELs the same as or lower than those based on human data alone. In each case, arguments were made for why the use of human data was preferred. The results of the analysis support a basic principle of risk assessment: that all available data be considered.
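An OEL derivation of the kind described typically scales a point of departure (NOAEL or ADI-like value) by body weight and breathing volume and divides by uncertainty factors. A hedged sketch of one common textbook form, not necessarily the authors' exact procedure; all defaults are illustrative:

```python
def oel_from_noael(noael_mg_kg_day, uf, bw_kg=70.0, air_m3=10.0, alpha=1.0):
    """Occupational exposure limit (mg/m3) from a NOAEL.

    OEL = NOAEL * BW / (UF * V * alpha), where V is the air volume
    inhaled per workday (m3) and alpha an absorption adjustment.
    A common textbook formula; defaults are illustrative only.
    """
    return noael_mg_kg_day * bw_kg / (uf * air_m3 * alpha)

# Hypothetical animal-derived NOAEL of 5 mg/kg-day with a composite UF of 100
# (10x interspecies x 10x intraspecies), vs. a human NOAEL with UF of 10.
oel_animal = oel_from_noael(5.0, uf=100)
oel_human = oel_from_noael(5.0, uf=10)
```

The paper's point, that animal-only derivations can land above or below human-based ones, comes down to how the NOAEL and the UF chain differ between the two data sources.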

9.
Although habitat fragmentation is one of the greatest threats to biodiversity worldwide, virtually no attention has been paid to the quantification of error in fragmentation statistics. Landscape pattern indices (LPIs), such as mean patch size and number of patches, are routinely used to quantify fragmentation and are often calculated using remote-sensing imagery that has been classified into different land-cover classes. No classified map is ever completely correct, so we asked if different maps with similar misclassification rates could result in widely different errors in pattern indices. We simulated landscapes with varying proportions of habitat and clumpiness (autocorrelation) and then simulated classification errors on the same maps. We simulated higher misclassification at patch edges (as is often observed), and then used a smoothing algorithm routinely used on images to correct salt-and-pepper classification error. We determined how well classification errors (and smoothing) corresponded to errors seen in four pattern indices. Maps with low misclassification rates often yielded errors in LPIs of much larger magnitude and substantial variability. Although smoothing usually improved classification error, it sometimes increased LPI error and reversed the direction of error in LPIs introduced by misclassification. Our results show that classification error is not always a good predictor of errors in LPIs, and some types of image postprocessing (for example, smoothing) might result in the underestimation of habitat fragmentation. Furthermore, our results suggest that there is potential for large errors in nearly every landscape pattern analysis ever published, because virtually none quantify the errors in LPIs themselves.
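The two LPIs named, number of patches and mean patch size, follow directly from connected-component labelling of a binary habitat map. A sketch assuming 4-neighbour connectivity (the choice of connectivity itself changes the indices, which is part of why LPI error behaves unintuitively):

```python
import numpy as np
from scipy import ndimage

def patch_stats(habitat, cell_area=1.0):
    """Number of patches and mean patch size from a binary habitat map.

    habitat: 2-D array of 0/1; cell_area: area of one cell.
    Uses scipy's default cross-shaped structuring element (4-connectivity).
    """
    labeled, n = ndimage.label(habitat)
    if n == 0:
        return 0, 0.0
    sizes = ndimage.sum(habitat, labeled, range(1, n + 1)) * cell_area
    return n, float(sizes.mean())

# Tiny example map: two diagonal patches of two cells each.
habitat = np.array([[1, 1, 0],
                    [0, 0, 0],
                    [0, 1, 1]])
n_patches, mean_size = patch_stats(habitat)
```

Re-running `patch_stats` on a misclassified copy of the same map and differencing the results is exactly the error measurement the study performs at scale.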

10.
The Epidemiology Work Group at the Workshop on Future Research for Improving Risk Assessment Methods, "Of Mice, Men, and Models," held August 16 to 18, 2000, at Snowmass Village, Aspen, Colorado, concluded that in order to improve the utility of epidemiologic studies for risk assessment, methodologic research is needed in the following areas: (1) aspects of epidemiologic study designs that affect dose-response estimation; (2) alternative methods for estimating dose in human studies; and (3) refined methods for dose-response modeling of epidemiologic data. Needed research in epidemiologic study design includes recognition and control of study biases, identification of susceptible subpopulations, choice of exposure metrics, and choice of epidemiologic risk parameters; much of this research can be done with existing data. Research needed to improve determinants of dose in human studies includes additional individual-level data (e.g., diet, co-morbidity), development of more extensive human data for physiologically based pharmacokinetic (PBPK) dose modeling, tissue registries to increase the availability of tissue for studies of exposure/dose and susceptibility biomarkers, and biomarker data to assess exposures in humans and animals. Research needed on dose-response modeling of human studies includes more widespread application of flexible statistical methods (e.g., general additive models), development of methods to compensate for epidemiologic bias in dose-response models, improved biological models using human data, and evaluation of the benchmark dose using human data. There was consensus among the Work Group that, whereas most prior risk assessments have focused on cancer, there is a growing need for applications to other health outcomes; developmental and reproductive effects, injuries, respiratory disease, and cardiovascular disease were identified as especially high priorities for research. It was also a consensus view that epidemiologists, industrial hygienists, and other scientists focusing on human data need to play a stronger role throughout the risk assessment process. Finally, the group agreed that there was a need to improve risk communication, particularly about the uncertainty inherent in risk assessments that use epidemiologic data.

11.
The assessment of risk from environmental and occupational exposures incorporates and synthesizes data from a variety of scientific disciplines including toxicology and epidemiology. Epidemiological data have offered valuable contributions to the identification of human health hazards, estimation of human exposures, quantification of the exposure–response relation, and characterization of risks to specific target populations including sensitive populations. As with any scientific discipline, there are some uncertainties inherent in these data; however, the best human health risk assessments utilize all available information, characterizing strengths and limitations as appropriate. Human health risk assessors evaluating environmental and occupational exposures have raised concerns about the validity of using epidemiological data for risk assessment due to actual or perceived study limitations. This article highlights three concerns commonly raised during the development of human health risk assessments of environmental and occupational exposures: (a) error in the measurement of exposure, (b) potential confounding, and (c) the interpretation of non-linear or non-monotonic exposure–response data. These issues are often the content of scientific disagreement and debate among the human health risk assessment community, and we explore how these concerns may be contextualized, addressed, and often ameliorated.

12.
Genotyping errors occur when the genotype determined after molecular analysis does not correspond to the real genotype of the individual under consideration. Virtually every genetic data set includes some erroneous genotypes, but genotyping errors remain a taboo subject in population genetics, even though they might greatly bias the final conclusions, especially for studies based on individual identification. Here, we consider four case studies representing a large variety of population genetics investigations differing in their sampling strategies (noninvasive or traditional), in the type of organism studied (plant or animal) and the molecular markers used [microsatellites or amplified fragment length polymorphisms (AFLPs)]. In these data sets, the estimated genotyping error rate ranges from 0.8% for microsatellite loci from bear tissues to 2.6% for AFLP loci from dwarf birch leaves. Main sources of errors were allelic dropouts for microsatellites and differences in peak intensities for AFLPs, but in both cases human factors were non-negligible error generators. Therefore, tracking genotyping errors and identifying their causes are necessary to clean up the data sets and validate the final results according to the precision required. In addition, we propose the outline of a protocol designed to limit and quantify genotyping errors at each step of the genotyping process. In particular, we recommend (i) several efficient precautions to prevent contaminations and technical artefacts; (ii) systematic use of blind samples and automation; (iii) experience and rigor for laboratory work and scoring; and (iv) systematic reporting of the error rate in population genetics studies.
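An error rate like the 0.8–2.6% range reported is usually estimated by re-genotyping a subset of samples (or blind duplicates) and counting allele mismatches between replicates. A minimal sketch; it assumes alleles within a genotype are stored in a consistent (e.g. sorted) order so they can be compared position by position:

```python
def genotyping_error_rate(replicates):
    """Per-allele error rate from repeat-genotyped samples.

    replicates: list of (genotype1, genotype2) pairs, each genotype a
    tuple of allele calls in consistent order, e.g. ((150, 152), (150, 152)).
    """
    mismatches = 0
    compared = 0
    for g1, g2 in replicates:
        for a1, a2 in zip(g1, g2):
            compared += 1
            if a1 != a2:
                mismatches += 1   # allelic dropout, mis-scoring, etc.
    return mismatches / compared

# Hypothetical: two samples genotyped twice at one microsatellite locus;
# one allele call disagrees (e.g. a dropout of the 152 allele).
rate = genotyping_error_rate([((150, 152), (150, 152)),
                              ((150, 152), (150, 150))])
```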

13.
In shale gas fracturing, long hours, high-intensity work, and monotonous tasks can trigger psychological or physical operational fatigue, leading to downtime, sand blocking, or other accidents, which are the main problems in safety management. However, conventional studies either analyze risks separately, without considering the interactions between individuals or teams, or only form a qualitative framework for team analysis. These methods cannot be applied directly to fracturing operators because of the complex operation procedures and diversified human errors. An improved methodology is therefore proposed to assess holistic human risk over the whole cycle of a fracturing operation. First, Dempster–Shafer (D-S) evidence theory is introduced to obtain the individual risk, and a team performance shaping factor is presented to establish a risk assessment model for fracturing teams. Second, individual and team risks are integrated to calculate a quantitative value of holistic human risk. Finally, a scatter diagram of human risk is designed, in which the high-risk fracturing stages and human types can be clearly identified. Results from a study of one shale gas well indicate that the holistic human risk is more objective and practical, and improves the accuracy of human reliability analysis.
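The central operation of D-S evidence theory is Dempster's rule of combination, which fuses mass functions from independent sources (e.g. two experts rating an operator). A sketch over a hypothetical frame of discernment {'L', 'H'} (low/high risk); the masses are illustrative:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions by Dempster's rule.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Conflicting mass (empty intersections) is discarded and the rest
    renormalized by 1 - K, where K is the total conflict.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two hypothetical experts assessing one operator's risk level.
expert1 = {frozenset({"L"}): 0.6, frozenset({"L", "H"}): 0.4}
expert2 = {frozenset({"L"}): 0.5, frozenset({"H"}): 0.3, frozenset({"L", "H"}): 0.2}
fused = dempster_combine(expert1, expert2)
```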

14.
Conventional risk assessment practice uses a tenfold uncertainty factor (UF) to extrapolate from the general human population to sensitive subgroups, such as children and geriatric patients. This study evaluated whether the tenfold UF can be reduced when pharmacokinetic and pharmacodynamic data for pharmaceuticals used by these subgroups are incorporated into the risk assessment for human sensitivity. Composite factors (kinetics × dynamics) were calculated from data-derived values for bumetanide, furosemide, metoprolol, atenolol, naproxen, and ibuprofen. For the compounds examined, all of the composite factors were lower than 10, and 8 of the 12 were less than 5.5. Incorporating human kinetic and dynamic data into risk assessment can thus reduce the uncertainties associated with sensitive subgroups, and further study is encouraged.
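The composite-factor logic is simple to state: multiply the data-derived kinetic ratio by the dynamic ratio and compare the product against the default tenfold UF. A sketch with hypothetical ratios (the paper's compound-specific values are not reproduced here):

```python
def composite_factor(kinetic_ratio, dynamic_ratio):
    """Data-derived composite factor (kinetics x dynamics) for a sensitive
    subgroup, e.g. the ratio of adult to pediatric clearance times the
    ratio of pharmacodynamic sensitivities."""
    return kinetic_ratio * dynamic_ratio

def below_default_uf(cf, default=10.0):
    """True if the chemical-specific composite factor falls below the
    default tenfold UF, i.e. the default is conservative for this case."""
    return cf < default

# Hypothetical subgroup: 2-fold kinetic and 2.5-fold dynamic difference.
cf = composite_factor(2.0, 2.5)
```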

15.
Borzani's [(1994) World Journal of Microbiology and Biotechnology 10, 475–476] idea of evaluating the absolute error affecting the 'maximum specific growth rate' (ESGR), calculated from the first and last time points of the entire experimental period, is generalized to real-life situations in which the relative errors of cell concentration cannot be assumed to be constant during the experiment. Viewing the experimental period as comprising several successive, mutually exclusive and exhaustive time intervals, we compute specific growth rates (SGRs) for each of these intervals. Defining the maximum of these SGR values as the MSGR, in contrast to Borzani's ESGR, our aim is to study the effect of the expected absolute error on the SGRs of the different intervals; this reveals the discrepancy between the true and observed MSGRs. Assuming the relative error distribution on (0,1) to be rectangular, or symmetric truncated normal with mean 0.5 and suitable variance, the expected values of the absolute errors are evaluated and tabulated numerically using the software packages MATHEMATICA and S-PLUS. Our results therefore hold for situations involving varying relative errors, where Borzani's results cannot be applied. A discussion, with a concrete numerical example, of the misidentification of the MSGR interval due to random relative measurement errors shows the experimental biologist that ignoring this effect may render an entire experiment futile.
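The interval-wise SGRs and the MSGR are defined straightforwardly, assuming exponential growth within each interval as in the source: SGR = ln(x2/x1)/(t2-t1) per interval, and MSGR is the maximum over intervals (in contrast to the single first-to-last-point ESGR). A sketch:

```python
import math

def interval_sgrs(times, concentrations):
    """Specific growth rate ln(x2/x1)/(t2-t1) for each successive interval.

    times and concentrations are parallel lists of measurements.
    """
    return [math.log(concentrations[i + 1] / concentrations[i])
            / (times[i + 1] - times[i])
            for i in range(len(times) - 1)]

def msgr(times, concentrations):
    """Maximum interval SGR (MSGR), as opposed to Borzani's endpoint-based
    ESGR computed from only the first and last points."""
    return max(interval_sgrs(times, concentrations))

# Hypothetical growth curve: growth accelerates in the second interval.
t = [0.0, 1.0, 2.0]
x = [1.0, math.e, math.e ** 3]
peak_rate = msgr(t, x)
```

Measurement error perturbs each `x[i]` independently, which is why the interval attaining the maximum can be misidentified, the paper's central point.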

16.
This study assesses the impact of errors in sorting and identifying macroinvertebrate samples collected and analysed using different protocols (e.g. STAR-AQEM, RIVPACS). The study is based on the auditing scheme implemented in the EU-funded project STAR and presents the first attempt at analysing the audit data. Data from 10 participating countries are analysed with regard to the impact of sorting and identification errors, measured as gains and losses at each level of audit for 120 samples. Based on the gains and losses relative to the primary results, qualitative binary taxa lists were derived for each level of audit for a subset of 72 data sets. Between these taxa lists, the taxonomic similarity and the impact of the differences on selected metrics common to stream assessment were analysed. The results indicate that with all the methods used, a considerable amount of sorting and identification error could be detected. This total impact is reflected in most functional metrics, although in some metrics indicative of taxonomic richness it is not directly reflected in differences in metric scores. The results stress the importance of implementing quality-control mechanisms in macroinvertebrate assessment schemes. Peter Haase, Andrea Sundermann: these authors contributed equally to this work.
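Taxonomic similarity between the primary and audited taxa lists, and the gains and losses themselves, reduce to set operations on binary taxa lists. A sketch using Jaccard similarity (the exact similarity index used in the study is not specified in the abstract); the taxa are hypothetical:

```python
def taxonomic_similarity(list_a, list_b):
    """Jaccard similarity between two binary taxa lists (iterables of names)."""
    a, b = set(list_a), set(list_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def gains_losses(primary, audited):
    """Taxa gained and lost between a primary result and its audited version."""
    p, q = set(primary), set(audited)
    return q - p, p - q   # (gains, losses)

# Hypothetical audit of one sample's taxa list.
primary = ["Baetis", "Gammarus", "Hydropsyche"]
audited = ["Baetis", "Gammarus", "Simulium"]
sim = taxonomic_similarity(primary, audited)
gains, losses = gains_losses(primary, audited)
```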

17.
French environmental law for nature protection requires that all the facilities, works, and development projects that may affect the environment should be the subject of an impact study to evaluate their consequences, including on human health. For this analysis, the risk assessment approach is used and the population's exposure is estimated with the aid of multimedia models. The CalTOX model is frequently used for this kind of study. Unfortunately, the analysis of these studies shows that the model is often badly understood and poorly used. The difficulties encountered by the users, the errors and the problems met in the interpretation of results, which are the most commonly found in the human exposure assessment, are listed and their consequences illustrated. CalTOX has been shown to have many advantages (adaptability, speed in carrying out calculations, transparency), but it ought not to be used as a "black box" because such a use may lead to many errors and a loss of confidence in the studies.

18.
Aim: To study the sensitivity of three commercial dosimetric systems, Delta4, Multicube and Octavius4D, in detecting Volumetric Modulated Arc Therapy (VMAT) delivery errors.
Methods: Fourteen prostate and head and neck (H&N) VMAT plans were considered for this study. Three types of errors were introduced into the original plans: gantry-angle-independent and -dependent MLC errors, and gantry-angle-dependent dose errors. The dose matrix measured by each detector system for the no-error and error-introduced deliveries was compared with the reference Treatment Planning System (TPS) calculated dose matrix for the no-error plans using gamma (γ) analysis with 2%/2 mm tolerance criteria. The ability of each detector system to identify the minimum error in each scenario was assessed by comparing the gamma pass rates of the no-error and error deliveries using a Wilcoxon signed-rank test. The relative sensitivity of each system was assessed from the slope of the gamma pass line over the studied error magnitudes in each error scenario.
Results: In the gantry-angle-independent and -dependent MLC error scenarios, the Delta4, Multicube and Octavius4D systems detected a minimum 2 mm error. In the gantry-angle-dependent dose error scenario, all studied systems detected a minimum 3% and 2% error in prostate and H&N plans respectively. Among the studied detector systems, Multicube showed relatively lower sensitivity to the errors in the majority of error scenarios.
Conclusion: The studied systems identified the same magnitude of minimum errors in all considered error scenarios.
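The γ analysis with 2%/2 mm criteria combines a dose difference and a distance-to-agreement into a single index per point; a point passes when the minimum combined value is ≤ 1. Below is a much-simplified 1-D global version for illustration only; clinical systems operate on 3-D dose grids with interpolation and vendor-specific normalisation:

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing, dose_tol=0.02, dist_tol=2.0, cutoff=0.1):
    """Simplified 1-D global gamma pass rate (%) with default 2%/2 mm.

    ref, meas: dose profiles on the same grid; spacing: grid step (mm).
    For each reference point above the low-dose cutoff, minimise
    sqrt((dD/dose_tol')^2 + (dx/dist_tol)^2) over all measured points,
    where dose_tol' is dose_tol times the global maximum dose.
    """
    ref = np.asarray(ref, dtype=float)
    meas = np.asarray(meas, dtype=float)
    x = np.arange(len(ref)) * spacing
    d_norm = dose_tol * ref.max()          # global dose normalisation
    gammas = []
    for i, dr in enumerate(ref):
        if dr < cutoff * ref.max():        # skip the low-dose region
            continue
        dd = (meas - dr) / d_norm
        dx = (x - x[i]) / dist_tol
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    gammas = np.array(gammas)
    return 100.0 * (gammas <= 1.0).mean()
```

Comparing the pass rate of an error-introduced delivery against the no-error baseline, as the study does, then indicates whether the detector resolves that error magnitude.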

19.
Lee HS, Zhang Y. Proteins 2012, 80(1):93–110.
We developed BSP-SLIM, a new method for ligand–protein blind docking using low-resolution protein structures. For a given sequence, protein structures are first predicted by I-TASSER; putative ligand binding sites are transferred from holo-template structures analogous to the I-TASSER models; ligand–protein docking conformations are then constructed by shape and chemical match of the ligand with the negative image of the binding pockets. BSP-SLIM was tested on 71 ligand–protein complexes from the Astex diverse set, where the protein structures were predicted by I-TASSER with an average RMSD of 2.92 Å on the binding residues. Using I-TASSER models, the median ligand RMSD of BSP-SLIM docking is 3.99 Å, which is 5.94 Å lower than that of AutoDock; the median binding-site error of BSP-SLIM is 1.77 Å, which is 6.23 Å lower than that of AutoDock and 3.43 Å lower than that of LIGSITECSC. Compared to the models using crystal protein structures, the median ligand RMSD of BSP-SLIM using I-TASSER models increases by 0.87 Å, while that of AutoDock increases by 8.41 Å; the median binding-site error of BSP-SLIM increases by 0.69 Å, while those of AutoDock and LIGSITECSC increase by 7.31 Å and 1.41 Å, respectively. As case studies, BSP-SLIM was used in virtual screening for six target proteins, prioritizing 25% and 50% of actives within the top 9.2% and 17% of the library on average, respectively. These results demonstrate the usefulness of template-based coarse-grained algorithms in low-resolution ligand–protein docking and drug screening. An on-line BSP-SLIM server is freely available at http://zhanglab.ccmb.med.umich.edu/BSP-SLIM .
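The ligand RMSD figures quoted throughout are plain coordinate-wise root-mean-square deviations. A sketch; it assumes matched atom ordering and a common reference frame, with no superposition (fitting) step:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length coordinate sets.

    coords_a, coords_b: (N, 3) arrays of atom positions (e.g. in angstroms),
    with atoms in matching order and already in the same frame.
    """
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))

# Hypothetical two-atom ligand: the second atom is displaced by 2 units in z.
value = rmsd([[0, 0, 0], [1, 0, 0]],
             [[0, 0, 0], [1, 0, 2]])
```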

20.
Greene WK, Baker E, Rabbitts TH, Kees UR. Gene 1999, 232(2):203–207.
Human SLIM1 is a recently described gene of the LIM-only class encoding four and a half tandemly repeated LIM domains. LIM domains are double zinc finger structures which provide an interface for protein–protein interactions and are conserved in a variety of nuclear and cytoplasmic factors important in cell fate determination and cellular regulation. Here we report the structural organization, expression pattern and chromosomal localization of the human SLIM1 gene. SLIM1 was found to contain at least five exons, with all four introns disrupting the coding region at a similar position relative to the respective complete LIM domains. Northern blot analysis confirmed strikingly high expression of SLIM1 in skeletal muscle and heart, with much lower expression observed in several other tissues including colon, small intestine and prostate. The SLIM1 gene was assigned to human chromosome Xq26 using fluorescence in situ hybridization.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号