Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
In shale gas fracturing operations, long hours, high-intensity work, and monotonous tasks can trigger psychological or physical fatigue, leading to downtime, sand blocking, or other accidents; these are the main problems in safety management. However, conventional studies either analyze risks separately, without considering the interactions between individuals or teams, or offer only a qualitative framework for team analysis. These methods cannot be applied directly to fracturing operators because of the complex operating procedures and the diversity of human errors involved. An improved methodology is therefore proposed to assess holistic human risk over the whole cycle of a fracturing operation. First, D-S evidence theory is introduced to obtain individual risk, and a team performance shaping factor is presented to establish a risk assessment model for fracturing teams. Second, individual and team risks are integrated to calculate a quantitative value of holistic human risk. Finally, a scatter diagram of human risk is designed that clearly reveals the high-risk fracturing stages and personnel types. Results from a study of one shale gas well indicate that the holistic human risk measure is more objective and practical and improves the accuracy of human reliability analysis.
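The D-S (Dempster-Shafer) evidence fusion step mentioned above can be sketched with Dempster's rule of combination. This is a minimal illustration over a hypothetical {low, high} risk frame with made-up expert mass assignments, not the paper's actual model:

```python
def combine_dempster(m1, m2):
    """Combine two mass functions (dict: frozenset of hypotheses -> mass)
    with Dempster's rule: multiply masses of intersecting focal elements
    and renormalize by the total non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
BOTH = LOW | HIGH                      # ignorance: "low or high"
expert1 = {LOW: 0.6, HIGH: 0.3, BOTH: 0.1}   # hypothetical assessments
expert2 = {LOW: 0.5, HIGH: 0.4, BOTH: 0.1}
fused = combine_dempster(expert1, expert2)   # masses again sum to 1
```

Fusing the two assessments sharpens the belief in "low" risk while discounting the conflicting evidence.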

2.
Shale gas is a special type of natural gas accumulation that exists within shale in adsorbed or free form. Shale gas reserves are abundant, accounting for roughly one third of global natural gas resources, and are distributed mainly in China, North America, Russia, and other countries and regions. The hydraulic fracturing technology used in shale gas extraction significantly affects deep-subsurface microorganisms, and microbial community composition differs markedly across the stages of hydraulic fracturing. In particular, methanogens can increase shale gas production, whereas acid-producing bacteria corrode equipment and reduce recovery efficiency. This article reviews the current state of shale gas extraction, the extraction process, and the changes in microbial populations and their potential impacts, with the aim of promoting research on the interactions between shale gas extraction and deep-subsurface microorganisms and ultimately advancing green, efficient shale gas development.

3.
Although it is clear that errors in genotyping data can lead to severe errors in linkage analysis, there is as yet no consensus strategy for identification of genotyping errors. Strategies include comparison of duplicate samples, independent calling of alleles, and Mendelian-inheritance-error checking. This study aimed to develop a better understanding of error types associated with microsatellite genotyping, as a first step toward development of a rational error-detection strategy. Two microsatellite marker sets (a commercial genomewide set and a custom-designed fine-resolution mapping set) were used to generate 118,420 and 22,500 initial genotypes and 10,088 and 8,328 duplicates, respectively. Mendelian-inheritance errors were identified by PedManager software, and concordance was determined for the duplicate samples. Concordance checking identifies only human errors, whereas Mendelian-inheritance-error checking is capable of detection of additional errors, such as mutations and null alleles. Neither strategy is able to detect all errors. Inheritance checking of the commercial marker data identified that the results contained 0.13% human errors and 0.12% other errors (0.25% total error), whereas concordance checking found 0.16% human errors. Similarly, Mendelian-inheritance-error checking of the custom-set data identified 1.37% errors, compared with 2.38% human errors identified by concordance checking. A greater variety of error types were detected by Mendelian-inheritance-error checking than by duplication of samples or by independent reanalysis of gels. These data suggest that Mendelian-inheritance-error checking is a worthwhile strategy for both types of genotyping data, whereas fine-mapping studies benefit more from concordance checking than do studies using commercial marker data. Maximization of error identification increases the likelihood of linkage when complex diseases are analyzed.
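The Mendelian-inheritance check at the heart of this strategy reduces to asking whether a child's genotype can be assembled from one allele of each parent. A minimal sketch for autosomal microsatellite genotypes, treated as unordered allele pairs with hypothetical allele numbers:

```python
def mendelian_consistent(child, mother, father):
    """True if the child's genotype (pair of allele labels) can be formed
    by drawing one allele from the mother and one from the father."""
    c1, c2 = child
    return ((c1 in mother and c2 in father) or
            (c2 in mother and c1 in father))

# Consistent trio: child inherits allele 1 from mother, 2 from father.
assert mendelian_consistent((1, 2), mother=(1, 3), father=(2, 4))
# Inheritance error: allele 5 is absent from both parents
# (a genotyping error, mutation, or null allele would look like this).
assert not mendelian_consistent((1, 5), mother=(1, 3), father=(2, 4))
```

Note that this check, like the one described above, cannot flag errors that happen to remain consistent with the parental alleles, which is why duplicate-concordance checking detects a partly different set of errors.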

4.
Conventional process-analysis-type techniques for compiling life-cycle inventories suffer from a truncation error, which is caused by the omission of resource requirements or pollutant releases of higher-order upstream stages of the production process. The magnitude of this truncation error varies with the type of product or process considered, but can be on the order of 50%. One way to avoid such significant errors is to incorporate input-output analysis into the assessment framework, resulting in a hybrid life-cycle inventory method. Using Monte Carlo simulations, it can be shown that uncertainties of input-output-based life-cycle assessments are often lower than truncation errors in even extensive, third-order process analyses.
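The truncation error can be illustrated with a toy Leontief input-output model: a process analysis cut off at the nth upstream tier is the partial sum of the power series for the total requirements (I - A)^-1 f, so it always underestimates the converged total. The two-sector coefficient matrix below is hypothetical, purely for illustration:

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def truncated_requirements(A, f, order):
    """sum_{k=0}^{order} A^k f -- a process analysis that stops
    after `order` upstream production tiers."""
    total, term = list(f), list(f)
    for _ in range(order):
        term = matvec(A, term)                      # next upstream tier
        total = [t + s for t, s in zip(total, term)]
    return total

# Hypothetical 2-sector technical-coefficient matrix A and final demand f.
A = [[0.2, 0.3],
     [0.4, 0.1]]
f = [1.0, 0.0]                                      # one unit of sector-1 output

third_order = truncated_requirements(A, f, 3)       # typical process analysis
converged = truncated_requirements(A, f, 200)       # effectively (I - A)^-1 f
truncation_error = 1 - sum(third_order) / sum(converged)
```

Even for this well-behaved toy economy the third-order cut-off misses several percent of total requirements; with realistic, denser coefficient matrices the omission can be far larger, which is the point the abstract makes.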

5.
Hydraulic fracturing is used to increase the permeability of shale gas formations and involves pumping large volumes of fluids into these formations. A portion of the frac fluid remains in the formation after the fracturing process is complete, which could potentially contribute to deleterious microbially induced processes in natural gas wells. Here, we report on the geochemical and microbiological properties of frac and flowback waters from two newly drilled natural gas wells in the Barnett Shale in North Central Texas. Most probable number studies showed that biocide treatments did not kill all the bacteria in the fracturing fluids. Pyrosequencing-based 16S rRNA diversity analyses indicated that the microbial communities in the flowback waters were less diverse and completely distinct from the communities in frac waters. These differences in frac and flowback water communities appeared to reflect changes in the geochemistry of fracturing fluids that occurred during the frac process. The flowback communities also appeared well adapted to survive biocide treatments and the anoxic conditions and high temperatures encountered in the Barnett Shale.

6.

Purpose

Following the boom in shale gas production in the USA and the decrease in US gas prices, interest in shale gas is growing in many countries holding shale reserves, and exploration is already taking place in some EU countries, including the UK. Any commercial development of shale gas in Europe requires a broad environmental assessment that recognizes the different European conditions and legislation.

Methods

This study focuses on the UK situation and estimates the environmental impacts of shale gas using life-cycle assessment (LCA); the burdens of shale gas production in the UK are compared with the burdens of the current UK natural gas mix. The main focus is on the analysis of water impacts, but a broad range of other impact categories are also considered. A sensitivity analysis is performed on the most environmentally criticized operations in shale gas production, including flowback disposal and emission control, by considering a range of possible process options.

Results and discussion

Improper wastewater management and direct disposal or spills of wastewater into rivers can lead to high freshwater and human ecotoxicity. Mining of the sand and withdrawal of the water used in fracking fluids account for the main impacts on water use and degradation. However, the water degradation of the conventional natural gas supply to the UK is shown to be even higher than that of shale gas. For global warming potential (GWP), the handling of the emissions associated with hydraulic fracturing influences the results only when emissions are vented. Finally, the estimated ultimate recovery of the well has the greatest impact on the results, along with the flowback ratio and the flowback disposal method.

Conclusions

This paper provides insights to better understand the future development of shale gas in the UK. Adequate wastewater management and emission handling significantly reduce the environmental impacts of shale gas production. Policy makers should consider that, compared with the current gas supply mix, shale gas simultaneously increases water consumption and decreases water degradation. Furthermore, the environmental impacts of shale gas should be considered in light of the low productivity per well, which forces the drilling and exploitation of a large number of wells.

7.
The identification of genes contributing to complex diseases and quantitative traits requires genetic data of high fidelity, because undetected errors and mutations can profoundly affect linkage information. The recent emphasis on the use of the sibling-pair design eliminates or decreases the likelihood of detection of genotyping errors and marker mutations through apparent Mendelian incompatibilities or close double recombinants. In this article, we describe a hidden Markov method for detecting genotyping errors and mutations in multilocus linkage data. Specifically, we calculate the posterior probability of genotyping error or mutation for each sibling-pair-marker combination, conditional on all marker data and an assumed genotype-error rate. The method is designed for use with sibling-pair data when parental genotypes are unavailable. Through Monte Carlo simulation, we explore the effects of map density, marker-allele frequencies, marker position, and genotype-error rate on the accuracy of our error-detection method. In addition, we examine the impact of genotyping errors and error detection and correction on multipoint linkage information. We illustrate that even moderate error rates can result in substantial loss of linkage information, given efforts to fine-map a putative disease locus. Although simulations suggest that our method detects …

8.
Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for VBS systems is highly desirable. The innovative, value-based idea is realized on the basis of stakeholder requirements. The quality of a VBS system depends on a concrete set of valuable requirements, which can be obtained only if all the relevant valuable stakeholders participate in the requirements elicitation phase. Existing value-based approaches focus on the design of VBS systems, but their attention to valuable stakeholders and requirements is inadequate. Current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for VBS systems; they are time-consuming, complex, and inconsistent, which makes the initiation process difficult. Moreover, the main motivation for this research is that existing SIQ approaches provide neither low-level implementation details for SIQ initiation nor stakeholder metrics for quantification. In view of these problems, this research contributes a new SIQ framework called 'StakeMeter', which is verified and validated through case studies. Compared with other methods, the proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria, and an application procedure. It addresses the issues of stakeholder quantification and prioritization, high time consumption, complexity, and process initiation, and helps select highly critical stakeholders for VBS systems with less judgmental error.

9.
The finite element (FE) method when coupled with computed tomography (CT) is a powerful tool in orthopaedic biomechanics. However, substantial data is required for patient-specific modelling. Here we present a new method for generating a FE model with a minimum amount of patient data. Our method uses high order cubic Hermite basis functions for mesh generation and least-square fits the mesh to the dataset. We have tested our method on seven patient data sets obtained from CT assisted osteodensitometry of the proximal femur. Using only 12 CT slices we generated smooth and accurate meshes of the proximal femur with a geometric root mean square (RMS) error of less than 1 mm and peak errors less than 8 mm. To model the complex geometry of the pelvis we developed a hybrid method which supplements sparse patient data with data from the visible human data set. We tested this method on three patient data sets, generating FE meshes of the pelvis using only 10 CT slices with an overall RMS error less than 3 mm. Although these meshes have peak errors of about 12 mm, the peaks occur relatively far from the region of interest (the acetabulum) and will have minimal effects on the performance of the model. Considering that linear meshes usually require about 70-100 pelvic CT slices (in axial mode) to generate FE models, our method has brought a significant data reduction to the automatic mesh generation step. The method, which is fully automated except for a semi-automatic bone/tissue boundary extraction part, will bring the benefits of FE methods to the clinical environment with much reduced radiation risks and data requirements.

10.
Chen L  Tai J  Zhang L  Shang Y  Li X  Qu X  Li W  Miao Z  Jia X  Wang H  Li W  He W 《Molecular bioSystems》2011,7(9):2547-2553
Understanding the pathogenesis of complex diseases is aided by precise identification of the genes responsible. Many computational methods have been developed to prioritize candidate disease genes, but coverage of functional annotations may be a limiting factor for most of these methods. Here, we introduce a global candidate gene prioritization approach that considers information about network properties in the human protein interaction network and risk transformative contents from known disease genes. Global risk transformative scores were then used to prioritize candidate genes. This method was introduced to prioritize candidate genes for prostate cancer. The effectiveness of our global risk transformative algorithm for prioritizing candidate genes was evaluated according to validation studies. Compared with ToppGene and random walk-based methods, our method outperformed the two other candidate gene prioritization methods. The generality of our method was assessed by testing it on prostate cancer and other types of cancer. The performance was evaluated using standard leave-one-out cross-validation.

11.
SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis is a tool widely used to help in decision making in complex systems. It is well suited to exploring the issues and measures related to the conservation and development of local breeds, as it allows the integration of many driving factors influencing breed dynamics. We developed a quantified SWOT method as a decision-making tool for identification and ranking of conservation and development strategies for local breeds, and applied it to a set of 13 cattle breeds from six European countries. The method has four steps: definition of the system, identification and grouping of the driving factors, quantification of the importance of the driving factors, and identification and prioritization of the strategies. The factors were determined following a multi-stakeholder approach and grouped in a three-level structure. Animal genetic resources expert groups ranked the factors, and a quantification process was implemented to identify and prioritize strategies. The proposed SWOT methodology allows the dynamics of local cattle breeds to be analyzed in a structured and systematic way. It is a flexible tool developed to assist different stakeholders in defining strategies and actions. The quantification process allows the comparison of the driving factors and the prioritization of the strategies for the conservation and development of local cattle breeds. We identified 99 factors across the breeds. Although the situation is very heterogeneous, the future of these breeds may be promising. The most important strengths and weaknesses were related to production systems and farmers. The most important opportunities were found in marketing new products, whereas the most relevant threats were found in selling the current products. The utility of across-breed strategies decreased as they gained specificity. Therefore, strategies at the European level should focus on general aspects and be flexible enough to be adapted to country and breed specificities.

12.
The current study focuses on identification and prioritization of the most important risks affecting a gas power plant located in southern Iran that was selected as a case study. After identifying risky activities, plant operations, and natural disasters, a Delphi questionnaire was prepared to specify crisis and accident-prone centers that could lead to the plant's destruction. After analyzing the questionnaires, the final criteria were determined. Subsequently, multi-criteria decision-making methods, including the technique for order preference by similarity to ideal solution (TOPSIS) and the analytic hierarchy process (AHP), were applied to prioritize the identified criteria. The relative weights of the criteria were calculated using the eigenvector method in MATLAB and Expert Choice. In some cases there was no correlation between the obtained results, so a novel integrated technique combining three methods (average, Borda, and Copeland) was used to reach a consensus on the prioritization of the criteria. The risk assessment results indicate that gas and oil pipes, dust storms, and terrorism hold the first to third priorities among the risks. Some strategies are proposed to control and mitigate the identified risks.
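The TOPSIS step described above ranks alternatives by their relative closeness to an ideal and an anti-ideal solution. A minimal sketch follows; the risk matrix, weights, and criteria are hypothetical placeholders, not the study's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: rows = alternatives, columns = criteria;
    benefit[j] is True for benefit criteria (higher is better)."""
    ncols = len(weights)
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    V = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    cols = list(zip(*V))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti  = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # Closeness coefficient: distance to anti-ideal / (sum of both distances).
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in V]

# Hypothetical example: three risks scored on severity and probability,
# where higher values mean a more critical risk (both benefit criteria here).
risk_matrix = [[7, 0.9],
               [9, 0.4],
               [5, 0.6]]
scores = topsis(risk_matrix, weights=[0.6, 0.4], benefit=[True, True])
```

Sorting the alternatives by their closeness coefficients yields the priority order; in the study, the same idea is applied to the Delphi-derived criteria.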

13.
Li X  Wang Q  Zheng Y  Lv S  Ning S  Sun J  Huang T  Zheng Q  Ren H  Xu J  Wang X  Li Y 《Nucleic acids research》2011,39(22):e153
The identification of human cancer-related microRNAs (miRNAs) is important for cancer biology research. Although several identification methods have achieved remarkable success, they have overlooked the functional information associated with miRNAs. We present a computational framework that can be used to prioritize human cancer miRNAs by measuring the association between cancer and miRNAs based on the functional consistency score (FCS) of the miRNA target genes and the cancer-related genes. This approach proved successful in identifying the validated cancer miRNAs for 11 common human cancers, with areas under the ROC curve (AUC) ranging from 71.15% to 96.36%. The FCS method had a significant advantage over miRNA differential expression analysis when identifying cancer-related miRNAs with a fine regulatory mechanism, such as miR-27a in colorectal cancer. Furthermore, a case study of thyroid cancer showed that the FCS method can uncover novel cancer-related miRNAs such as miR-27a/b, which qRT-PCR analysis showed to be significantly upregulated in thyroid cancer samples. Our method is available as a web-based server, CMP (cancer miRNA prioritization), freely accessible at http://bioinfo.hrbmu.edu.cn/CMP. This time- and cost-effective computational framework can be a valuable complement to experimental studies and can assist future studies of miRNA involvement in the pathogenesis of cancers.
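The AUC figures quoted above come from standard ROC analysis, which for a binary validation set equals the probability that a randomly chosen positive outranks a randomly chosen negative. A minimal rank-based (Mann-Whitney) AUC computation, with made-up scores and labels, looks like this:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical prioritization scores and validation labels (1 = known cancer miRNA).
scores = [0.9, 0.8, 0.6, 0.4, 0.3]
labels = [1,   1,   0,   1,   0]
result = auc(scores, labels)   # one positive is outranked by one negative
```

A value of 1.0 would mean every validated miRNA outranks every non-associated one; 0.5 is no better than chance.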

14.
The efforts to protect biological diversity must be prioritized because resources for nature conservation are limited. Conservation prioritization can be based on numerous criteria, from ecological integrity to species representation, but in this review I address only species-level prioritization. Criteria used for species prioritization range from aesthetical to evolutionary considerations, but I focus on the aspects that are biologically relevant. I distinguish between two main aspects of diversity that are used as objectives: Maintenance of biodiversity pattern, and maintenance of biodiversity process. I identify two additional criteria typically used in species prioritization that serve for achieving the objectives: The species’ need of protection, and cost and effectiveness of conservation actions. I discuss how these criteria could be combined with either of the objectives in a complementarity-based benefit function framework for conservation prioritization. But preserving evolutionary process versus current diversity pattern may turn out to be conflicting objectives that have to be traded-off with each other, if pursued simultaneously. Although many reasonable criteria and methods exist, species prioritization is hampered by uncertainties, most of which stem from the poor quality of data on what species exist, where they occur, and what are the costs and benefits of protecting them. Surrogate measures would be extremely useful but their performance is still largely unknown. Future challenges in species prioritization lie in finding ways to compensate for missing information.

15.
Errors in genotyping data have been shown to have a significant effect on the estimation of recombination fractions in high-resolution genetic maps. Previous estimates of errors in existing databases have been limited to the analysis of relatively few markers and have suggested rates in the range 0.5%-1.5%. The present study capitalizes on the fact that within the Centre d'Etude du Polymorphisme Humain (CEPH) collection of reference families, 21 individuals are members of more than one family, with separate DNA samples provided by CEPH for each appearance of these individuals. By comparing the genotypes of these individuals in each of the families in which they occur, an estimated error rate of 1.4% was calculated for all loci in the version 4.0 CEPH database. Removing those individuals who were clearly identified by CEPH as appearing in more than one family resulted in a 3.0% error rate for the remaining samples, suggesting that some error checking of the identified repeated individuals may occur prior to data submission. An error rate of 3.0% for version 4.0 data was also obtained for four chromosome 5 markers that were retyped through the entire CEPH collection. The effects of these errors on a multipoint map were significant, with a total sex-averaged length of 36.09 cM with the errors, and 19.47 cM with the errors corrected. Several statistical approaches to detect and allow for errors during linkage analysis are presented. One method, which identified families containing possible errors on the basis of the impact on the maximum lod score, showed particular promise, especially when combined with the limited retyping of the identified families. The impact of the demonstrated error rate in an established genotype database on high-resolution mapping is significant, raising the question of the overall value of incorporating such existing data into new genetic maps.
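The duplicate-sample comparison underlying these error-rate estimates reduces to counting discordant genotype calls between two typings of the same individuals. A minimal sketch with hypothetical allele pairs (genotypes compared as unordered pairs):

```python
def discordance_rate(calls_a, calls_b):
    """Fraction of duplicate genotype calls that disagree.
    Each call is an allele pair; order within a pair is ignored."""
    pairs = list(zip(calls_a, calls_b))
    mismatches = sum(sorted(a) != sorted(b) for a, b in pairs)
    return mismatches / len(pairs)

# Hypothetical duplicate typings of four markers for one individual.
typing1 = [(1, 2), (3, 3), (2, 4), (1, 1)]
typing2 = [(2, 1), (3, 3), (2, 2), (1, 1)]   # third call disagrees
rate = discordance_rate(typing1, typing2)     # 1 of 4 calls discordant
```

Note that a discordance implies at least one of the two calls is wrong, so the per-genotype error rate is at least half the discordance rate and, as the abstract's 1.4% vs. 3.0% figures illustrate, depends strongly on which samples are compared.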

17.
A fundamental challenge in human health is the identification of disease-causing genes. Recently, several studies have tackled this challenge via a network-based approach, motivated by the observation that genes causing the same or similar diseases tend to lie close to one another in a network of protein-protein or functional interactions. However, most of these approaches use only local network information in the inference process and are restricted to inferring single gene associations. Here, we provide a global, network-based method for prioritizing disease genes and inferring protein complex associations, which we call PRINCE. The method is based on formulating constraints on the prioritization function that relate to its smoothness over the network and usage of prior information. We exploit this function to predict not only genes but also protein complex associations with a disease of interest. We test our method on gene-disease association data, evaluating both the prioritization achieved and the protein complexes inferred. We show that our method outperforms extant approaches in both tasks. Using data on 1,369 diseases from the OMIM knowledgebase, our method is able (in a cross-validation setting) to rank the true causal gene first for 34% of the diseases, and infers 139 disease-related complexes that are highly coherent in terms of the function, expression and conservation of their member proteins. Importantly, we apply our method to study three multi-factorial diseases for which some causal genes have already been found: prostate cancer, Alzheimer's disease and type 2 diabetes mellitus. PRINCE's predictions for these diseases closely match the known literature, suggesting several novel causal genes and protein complexes for further investigation.
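The smoothness constraint in methods of this kind leads to an iterative propagation of prior disease scores over the network, of the form F <- alpha * S F + (1 - alpha) * prior, where S is a degree-normalized adjacency matrix. A minimal sketch of this style of propagation; the toy graph, parameter values, and gene names are hypothetical, not the paper's data:

```python
import math

def propagate(W, prior, alpha=0.9, iters=100):
    """Iterative network propagation with symmetric degree normalization:
    each edge weight W[u][v] is scaled by 1/sqrt(deg(u)*deg(v)).
    Converges because alpha < 1 bounds the spectral radius of the update."""
    nodes = list(W)
    deg = {u: sum(W[u].values()) for u in nodes}
    f = {u: prior.get(u, 0.0) for u in nodes}
    for _ in range(iters):
        f = {u: alpha * sum(w * f[v] / math.sqrt(deg[u] * deg[v])
                            for v, w in W[u].items())
                + (1 - alpha) * prior.get(u, 0.0)
             for u in nodes}
    return f

# Hypothetical 4-gene interaction path g1 - g2 - g3 - g4;
# g1 is the only known disease gene (prior score 1).
W = {"g1": {"g2": 1}, "g2": {"g1": 1, "g3": 1},
     "g3": {"g2": 1, "g4": 1}, "g4": {"g3": 1}}
scores = propagate(W, {"g1": 1.0})
```

Candidate scores decay smoothly with network distance from the seed gene, which is exactly the global behavior that distinguishes propagation methods from purely local (direct-neighbor) scoring.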

18.
In a research environment dominated by reductionist approaches to brain disease mechanisms, gene network analysis provides a complementary framework in which to tackle the complex dysregulations that occur in neuropsychiatric and other neurological disorders. Gene–gene expression correlations are a common source of molecular networks because they can be extracted from high-dimensional disease data and encapsulate the activity of multiple regulatory systems. However, the analysis of gene coexpression patterns is often treated as a mechanistic black box, in which looming ‘hub genes’ direct cellular networks, and where other features are obscured. By examining the biophysical bases of coexpression and gene regulatory changes that occur in disease, recent studies suggest it is possible to use coexpression networks as a multi-omic screening procedure to generate novel hypotheses for disease mechanisms. Because technical processing steps can affect the outcome and interpretation of coexpression networks, we examine the assumptions and alternatives to common patterns of coexpression analysis and discuss additional topics such as acceptable datasets for coexpression analysis, the robust identification of modules, disease-related prioritization of genes and molecular systems and network meta-analysis. To accelerate coexpression research beyond modules and hubs, we highlight some emerging directions for coexpression network research that are especially relevant to complex brain disease, including the centrality–lethality relationship, integration with machine learning approaches and network pharmacology.

19.
Genotyping errors occur when the genotype determined after molecular analysis does not correspond to the real genotype of the individual under consideration. Virtually every genetic data set includes some erroneous genotypes, but genotyping errors remain a taboo subject in population genetics, even though they might greatly bias the final conclusions, especially for studies based on individual identification. Here, we consider four case studies representing a large variety of population genetics investigations differing in their sampling strategies (noninvasive or traditional), in the type of organism studied (plant or animal) and the molecular markers used [microsatellites or amplified fragment length polymorphisms (AFLPs)]. In these data sets, the estimated genotyping error rate ranges from 0.8% for microsatellite loci from bear tissues to 2.6% for AFLP loci from dwarf birch leaves. Main sources of errors were allelic dropouts for microsatellites and differences in peak intensities for AFLPs, but in both cases human factors were non-negligible error generators. Therefore, tracking genotyping errors and identifying their causes are necessary to clean up the data sets and validate the final results according to the precision required. In addition, we propose the outline of a protocol designed to limit and quantify genotyping errors at each step of the genotyping process. In particular, we recommend (i) several efficient precautions to prevent contaminations and technical artefacts; (ii) systematic use of blind samples and automation; (iii) experience and rigor for laboratory work and scoring; and (iv) systematic reporting of the error rate in population genetics studies.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号