Similar Documents
1.
Traditionally, ecological risk assessments (ERAs) have emphasized risks to individual organisms or populations of species. Although habitats may be a potential target for chemical stressors, and are considered in the framework for ERAs, the actual use of habitat evaluation methods in this process is limited. Habitats clearly represent an important entity to protect, since damaged aquatic and wildlife habitats may be totally irretrievable over a human life span, whereas deleterious biochemical and physiological changes may be reversible within the life cycle of an organism if exposure is terminated. Habitat methods have largely been used as management tools to evaluate impacts of planned water and land development projects. Habitat evaluation methods represent a structured, systematic, and logical approach to determining changes to habitats because they consider important life requisites and the environmental variables that limit species. Their use in the ERA process will provide a means to differentiate habitat changes resulting from physical, chemical, and/or biological factors, or a combination of such factors. In addition, minimal and optimum habitat suitability can be determined for different habitat variables under different chemical exposure scenarios. The objectives of this paper are to review several available habitat evaluation methods and discuss their use in risk assessment. Particular emphasis is given to the USFWS's Habitat Evaluation Procedures (HEP) and the Instream Flow Incremental Methodology (IFIM).
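HEP-style assessments typically aggregate per-variable suitability indices into a Habitat Suitability Index (HSI) and multiply by area to obtain Habitat Units. The sketch below is illustrative only: the geometric-mean aggregation is one common HEP convention (not the only one), and the requisite values and area are invented.

```python
import math

def habitat_suitability_index(si_values):
    """Aggregate individual suitability indices (each in [0, 1]) into an HSI.
    The geometric mean is one common HEP aggregation rule; others exist."""
    if any(not 0.0 <= si <= 1.0 for si in si_values):
        raise ValueError("suitability indices must lie in [0, 1]")
    if any(si == 0.0 for si in si_values):
        return 0.0  # one fully unsuitable life requisite makes the habitat unsuitable
    return math.exp(sum(math.log(si) for si in si_values) / len(si_values))

def habitat_units(hsi, area_ha):
    """Habitat Units = HSI x area: the currency HEP uses to compare scenarios."""
    return hsi * area_ha

# Hypothetical life requisites: cover, food, water quality (values invented).
baseline = habitat_suitability_index([0.8, 0.6, 0.9])
exposed = habitat_suitability_index([0.8, 0.3, 0.9])  # one requisite degraded by exposure
loss = habitat_units(baseline, 50.0) - habitat_units(exposed, 50.0)
```

Comparing Habitat Units between a baseline and an exposure scenario is what allows habitat change to enter the risk calculation alongside organism-level endpoints.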

2.
3.
Life cycle assessment (LCA) methods and tools are increasingly being taught in university courses. Students are learning the concepts and applications of process-based LCA, input–output-based LCA, and hybrid methods. Here, we describe a classroom simulation to introduce students to an economic input–output life cycle assessment (EIO-LCA) method. The simulation uses a simplified four-industry economy with eight transactions among the industries. Production functions for the transactions and waste generation amounts are provided for each industry. Each student represents an industry and receives and issues purchase orders for materials, simulating the actual purchases of materials within the economy. Students then compare the simulation to mathematical representations of the model. Finally, students view an online EIO-LCA tool ( http://www.eiolca.net ) and use it to compare different products. The simulation has been used successfully with a wide range of students to facilitate conceptual understanding of one EIO-LCA method.
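The mathematical representation the students compare against is the Leontief input–output model: total output x solves (I − A)x = y for final demand y, and environmental burdens follow as r·x. A minimal NumPy sketch, using a hypothetical four-industry economy (all coefficients invented for illustration, not taken from the classroom exercise):

```python
import numpy as np

# Hypothetical direct-requirements matrix for a four-industry economy:
# A[i, j] = dollars of input from industry i per dollar of industry j's output.
A = np.array([
    [0.10, 0.05, 0.00, 0.02],
    [0.20, 0.10, 0.10, 0.00],
    [0.00, 0.15, 0.05, 0.10],
    [0.05, 0.00, 0.20, 0.05],
])
r = np.array([0.5, 1.2, 0.8, 0.3])  # hypothetical kg CO2 per dollar of output

y = np.array([1.0, 0.0, 0.0, 0.0])  # final demand: $1 of industry 0's product

# Leontief: total (direct + indirect) output x solves (I - A) x = y.
x = np.linalg.solve(np.eye(4) - A, y)
total_emissions = float(r @ x)
direct_emissions = float(r @ y)     # what a boundary ignoring suppliers would report
```

The gap between `total_emissions` and `direct_emissions` is exactly the supply-chain effect the purchase-order simulation makes tangible.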

4.
Zhang Xia, Li Zhanbin, Zhang Zhenwen & Deng Yan. Acta Ecologica Sinica, 2012, 32(21): 6788-6794
To predict groundwater dynamics in the Luohui Canal Irrigation District of Shaanxi Province, and building on a comprehensive analysis of existing approaches to groundwater-dynamics research, this study proposes prediction methods for irrigation districts based on a support vector machine (SVM) and an improved BP neural network, implements the corresponding programs in MATLAB, and establishes the corresponding prediction models. Using multi-year data from the irrigation district as training and test samples, the predictive performance of the two models was compared. The results show that both the SVM model and the BP network model achieve high simulation accuracy during training, but in the prediction stage the SVM is clearly more accurate than the BP network and describes the complex coupled relationships of groundwater dynamics well. The SVM method is practical and feasible, is better suited to groundwater-dynamics prediction in large irrigation districts, and supplements and refines traditional approaches to groundwater-dynamics research.
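The paper's models were built in MATLAB; as a hedged illustration of the SVM side of such a comparison, the sketch below uses scikit-learn's SVR on a synthetic lagged groundwater-depth series. The series, lag length, and hyperparameters are invented for illustration and are not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical monthly groundwater-depth series: seasonal cycle plus noise.
t = np.arange(120)
depth = 10.0 + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, t.size)

# Autoregressive framing: predict this month's depth from the previous three.
lag = 3
X = np.array([depth[i:i + lag] for i in range(depth.size - lag)])
y = depth[lag:]

split = 100                      # first 100 samples train, the rest test
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
```

Training both this and a neural-network regressor on the same lagged samples, then comparing held-out RMSE, mirrors the study's model comparison.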

5.
An increase in studies using restriction site‐associated DNA sequencing (RADseq) methods has led to a need for both the development and assessment of novel bioinformatic tools that aid in the generation and analysis of these data. Here, we report the availability of AftrRAD, a bioinformatic pipeline that efficiently assembles and genotypes RADseq data, and outputs these data in various formats for downstream analyses. We use simulated and experimental data sets to evaluate AftrRAD's ability to perform accurate de novo assembly of loci, and we compare its performance with two other commonly used programs, stacks and pyrad. We demonstrate that AftrRAD is able to accurately assemble loci, while accounting for indel variation among alleles, in a more computationally efficient manner than currently available programs. AftrRAD run times are not strongly affected by the number of samples in the data set, making this program a useful tool when multicore systems are not available for parallel processing, or when data sets include large numbers of samples.

6.
Independent of the platform and the analysis methods used, the result of a microarray experiment is, in most cases, a list of differentially expressed genes. An automatic ontological analysis approach has been recently proposed to help with the biological interpretation of such results. Currently, this approach is the de facto standard for the secondary analysis of high throughput experiments and a large number of tools have been developed for this purpose. We present a detailed comparison of 14 such tools using the following criteria: scope of the analysis, visualization capabilities, statistical model(s) used, correction for multiple comparisons, reference microarrays available, installation issues and sources of annotation data. This detailed analysis of the capabilities of these tools will help researchers choose the most appropriate tool for a given type of analysis. More importantly, in spite of the fact that this type of analysis has been generally adopted, this approach has several important intrinsic drawbacks. These drawbacks are associated with all tools discussed and represent conceptual limitations of the current state-of-the-art in ontological analysis. We propose these as challenges for the next generation of secondary data analysis tools.
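The statistical core shared by most of these tools is a hypergeometric (or equivalent Fisher) test of over-representation per ontology term, followed by a multiple-comparison correction. A self-contained sketch of that computation (the gene counts are invented for illustration):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) when drawing n study genes from N total, K of them annotated."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up FDR-adjusted p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for offset, i in enumerate(reversed(order)):  # largest p first
        rank = m - offset
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical term: 40 of 10,000 genes annotated; 8 of 200 DE genes hit it.
p = hypergeom_sf(8, 10000, 40, 200)
adj = benjamini_hochberg([p, 0.04, 0.9])
```

The surveyed tools differ mainly in which distribution they assume (hypergeometric, binomial, chi-square) and which correction they apply; the structure of the calculation is the same.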

7.
8.
The reliable estimation of animal location, and of its associated error, is fundamental to animal ecology. There are many existing techniques for handling location error, but these are often ad hoc or are used in isolation from each other. In this study we present a Bayesian framework for determining location that uses all the data available, is flexible across tagging techniques, and provides location estimates with built-in measures of uncertainty. Bayesian methods allow the contributions of multiple data sources to be decomposed into manageable components. We illustrate with two examples for two different location methods: satellite tracking and light-level geolocation. We show that many of the uncertainty problems involved are reduced and quantified by our approach. The approach can use any available information, such as existing knowledge of the animal's potential range, light levels or direct location estimates, auxiliary data, and movement models, and it provides a substantial contribution to handling uncertainty in archival tag and satellite tracking data using readily available tools.
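The idea of combining a range prior with a noisy fix can be sketched with a simple gridded Bayes update. This is an illustrative one-dimensional toy (latitude only; the range boundary, fix value, and error SD are invented), not the paper's model:

```python
import numpy as np

grid = np.linspace(-60.0, 60.0, 241)  # candidate latitudes, 0.5-degree steps

def normalize(p):
    return p / p.sum()

# Prior from existing knowledge: the species' range lies mostly south of 30 N.
prior = normalize(np.where(grid < 30.0, 1.0, 0.1))

# Likelihood of a noisy light-level fix at 10 N with a 5-degree error SD.
fix, sd = 10.0, 5.0
likelihood = np.exp(-0.5 * ((grid - fix) / sd) ** 2)

# Bayes' rule on the grid: posterior combines range knowledge and the fix.
posterior = normalize(prior * likelihood)
mean_lat = float((grid * posterior).sum())
sd_lat = float(np.sqrt(((grid - mean_lat) ** 2 * posterior).sum()))
```

Each additional data source (movement model, auxiliary sensor) multiplies in as another likelihood term, which is the decomposition into manageable components the abstract describes.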

9.
Mass spectrometry (MS) is a technique used in biological studies that associates a spectrum with a biological sample. A spectrum consists of pairs of values (intensity, m/z), where intensity measures the abundance of biomolecules (such as proteins) with a given mass-to-charge ratio (m/z) present in the originating sample. In proteomics experiments, MS spectra are used to identify expression patterns in clinical samples that may be responsible for diseases. Recently, to improve the identification of the peptides/proteins underlying such patterns, tandem MS (MS/MS) has been used, performing cascades of mass spectrometric analyses on selected peaks; this technique has been shown to improve the identification and quantification of proteins/peptides in samples. Nevertheless, MS analysis involves huge amounts of data, often affected by noise, and thus requires automatic data management systems. Tools have been developed, and are often supplied with the instruments, that allow: (i) spectra analysis and visualization, (ii) pattern recognition, (iii) querying of protein databases, and (iv) peptide/protein quantification and identification. Currently, most of the tools supporting these phases need to be optimized to improve the identification of proteins and their functions. In this article we survey applications that support spectrometrists and biologists in obtaining information from biological samples, analyzing the available software for the different phases. We consider different mass spectrometry techniques, and thus different requirements. We focus on tools for (i) data preprocessing, which prepares the results obtained from spectrometers for analysis; (ii) spectra analysis, representation, and mining, aimed at identifying common and/or hidden patterns in spectra sets or at classifying data; (iii) database querying to identify peptides; and (iv) improving and boosting the identification and quantification of selected peaks. We conclude by tracing some open problems and reporting on requirements that represent new challenges for bioinformatics.
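The preprocessing phase (i) typically means baseline correction, noise estimation, and peak picking before any identification step. A deliberately simplified toy pipeline on a synthetic two-peak spectrum (the rolling-minimum baseline and MAD noise floor are common simple choices, not any specific tool's algorithm):

```python
import numpy as np

def preprocess(mz, intensity, window=5, snr=3.0):
    """Toy pipeline: rolling-minimum baseline estimate, subtraction, then
    local-maximum peak picking against a MAD-based noise floor."""
    n = intensity.size
    baseline = np.array([intensity[max(0, i - window):i + window + 1].min()
                         for i in range(n)])
    corrected = intensity - baseline
    noise = float(np.median(np.abs(corrected - np.median(corrected)))) + 1e-12
    peak_idx = [i for i in range(1, n - 1)
                if corrected[i] > corrected[i - 1]
                and corrected[i] >= corrected[i + 1]
                and corrected[i] > snr * noise]
    return corrected, [(float(mz[i]), float(corrected[i])) for i in peak_idx]

# Synthetic spectrum: flat chemical baseline plus two Gaussian peaks.
mz = np.linspace(100.0, 110.0, 201)
intensity = (5.0
             + 50.0 * np.exp(-0.5 * ((mz - 103.0) / 0.05) ** 2)
             + 30.0 * np.exp(-0.5 * ((mz - 107.0) / 0.05) ** 2))
corrected, peaks = preprocess(mz, intensity)
```

The (m/z, intensity) pairs this emits are what downstream phases, pattern mining and database querying, consume.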

10.
Increasing crop production to meet the food requirements of the world's growing population will put great pressure on global water resources. Given that the vast freshwater resources available in the world are far from fully exploited, globally there should be sufficient water for future agricultural requirements. However, there are large areas where low water supply and high human demand may lead to regional shortages of water for future food production. In these arid and semi-arid areas, where water is a major constraint on production, improving water resource management is crucial if Malthusian disasters are to be avoided. There is considerable scope for improvement, since in both dryland and irrigated agriculture only about one-third of the available water (as rainfall, surface, or groundwater) is used to grow useful plants. This paper illustrates a range of techniques that could lead to increased crop production by improving agricultural water use efficiency. This may be achieved by increasing the total amount of water available to plants or by increasing the efficiency with which that water is used to produce biomass. Although the crash from the Malthusian precipice may ultimately be inevitable if population growth is not addressed, the time taken to reach the edge of the precipice could be lengthened by more efficient use of existing water resources.

11.
Communities, policy actors and conservationists benefit from understanding which institutions and land management regimes promote ecosystem services like carbon sequestration and biodiversity conservation. However, the definition of success depends on local conditions. Forests' potential carbon stock, biodiversity and rate of recovery following disturbance are known to vary with a broad suite of factors including temperature, precipitation, seasonality, species' traits and land use history. Methods like tracking over-time changes within forests, or comparison with "pristine" reference forests, have been proposed as means to compare the structure and biodiversity of forests in the face of underlying differences. However, data from previous visits or reference forests may be unavailable or costly to obtain. Here, we introduce a new metric of locally weighted forest intercomparison to mitigate these shortcomings. The method is applied to an international database of nearly 300 community forests and compared with previously published techniques. It is particularly suited to large databases in which forests may be compared with one another. Further, it avoids problematic comparisons with old-growth forests, which may not resemble the goal of forest management. In most cases, the different methods produce broadly congruent results, suggesting that researchers have the flexibility to compare forest conditions using whatever type of data is available. Forest structure and biodiversity are shown to be independently measurable axes of forest condition, although users' and foresters' estimations of seemingly unrelated attributes are highly correlated, perhaps reflecting an underlying sentiment about forest condition. These findings contribute new tools for large-scale analysis of ecosystem condition and natural resource policy assessment. Although applied here to forestry, these techniques have broader applications to classification and evaluation problems using crowdsourced or repurposed data for which baselines or external validations are not available.

12.
13.
Access to quality-assured, accurate diagnostics is critical to ensure that the 2021–2030 neglected tropical disease (NTD) road map targets can be achieved. Currently, however, there is limited regulatory oversight and there are few quality assurance mechanisms for NTD diagnostic tools. To address these challenges and the changing environment in regulatory requirements for diagnostics, a landscape analysis was conducted to better understand the availability of NTD diagnostics and inform future regulatory frameworks. The list of commercially available diagnostics was compiled from various sources, including WHO guidance, national guidelines for case detection and management, diagnostic target product profiles and the published literature. The inventory was analyzed according to diagnostic type, intended use, regulatory status, and risk classification. To estimate the global need and the size of the market for each type of diagnostic, annual procurement data were collected, where available, from WHO, procurement agencies, NGOs and international organizations, together with global disease prevalence estimates. Expert interviews were also conducted to ensure a better understanding of how diagnostics are procured and used. Of the 125 diagnostic tools included in this analysis, rapid diagnostic tools accounted for 33% of diagnostics used for NTDs, and very few diagnostics had been subjected to regulatory assessment. The number of tests needed for each disease was less than 1 million units per annum, except in the case of two diseases, suggesting limited commercial value. Despite the nature of the market, and the presumed insufficient return on commercial investment, acceptable levels of assurance on the performance, quality and safety of diagnostics are still required. Priority actions include setting up an agile, interim, stepwise risk assessment mechanism, in particular for lower-risk diagnostics, in order to support national NTD programmes and their partners in the selection and procurement of the diagnostics needed to control, eliminate and eradicate NTDs.

14.
Purpose

Organizational life cycle assessment (O-LCA) is an emerging method to analyze the inputs, outputs, and environmental impacts of an organization throughout its value chain. To facilitate the method's application, the Guidance on Organizational Life Cycle Assessment was published within the UNEP/SETAC Life Cycle Initiative and applied by 12 "road-testing" organizations. In this paper, different aspects of the road testers' studies are presented and analyzed on the basis of the road testers' feedback.

Methods

An anonymous survey about the method's application was conducted among the road testers. The analysis assessed, among other things: (i) which goals the organizations initially pursued and whether they were achieved; (ii) how previous experience with environmental tools contributed to the study design; (iii) which methodological options were chosen (such as the scope of the study, data collection approaches, impact assessment methods and tools, and data sources); and (iv) which methodological challenges were faced.

Results and discussion

The survey showed that analytical goals were a priority for most road testers and attained a higher achievement level than managerial and societal goals, for which either long-term measures or the inclusion of stakeholders are needed. Previous experience with product- or organization-related tools considering the whole life cycle proved useful due to available data and/or organizational models. The categorization of organizational activities, data collection, data quality assessment, and interpretation proved to be the most challenging methodological elements. In addition, three cross-cutting issues of method application were identified: aligning the O-LCA study with previous environmental activities, designing the study, and the availability of personnel and software resources.

Conclusions

The road-testing organizations verified the applicability and usefulness of the O-LCA Guidance and significantly widened the pool of available case studies. On the other hand, additional guidance for methodological challenges particular to the organizational level, software tools able to support O-LCA application, region-specific LCI databases, and a broadly recognized data quality assessment scheme would all facilitate conducting O-LCA case studies.


15.
In Lake Erie, a wide variety of statistical and process-based models have significantly advanced our understanding of the major causal linkages/ecosystem processes underlying the local water quality problems. In this study, our aim is to identify knowledge gaps, monitoring assessment objectives, and management recommendations that should be critically reviewed through the iterative monitoring-modelling-assessment cycles of adaptive management. In the watershed, the presence of multiple SWAT applications provides assurance that a wide array of physical, chemical, and biological processes with distinct characterizations are used to reproduce the patterns of flow and nutrient export in agricultural lands. While there are models with more advanced mechanistic representation of certain facets of the hydrological cycle (surface runoff, groundwater and sediment erosion) or better equipped to depict urban settings, we believe that greater insights will be gained by revisiting several influential assumptions (tile drainage, fertilizer/manure application rates, land-use/landcover data) and recalibrating the existing SWAT models to capture both baseline and event-flow conditions and daily nutrient concentration (not loading) variability in multiple locations rather than a single downstream site. It is also critical to redesign land-use management scenarios by accommodating recent conceptual and technical advancements of their life-cycle effectiveness, the variability in their starting operational efficiency, and differential response to storm events or seasonality, as well as the role of legacy phosphorus. In the receiving waterbody, the development of data-driven models to establish causal linkages between the trophic status of Lake Erie and external phosphorus loading represents a pragmatic means to draw forecasts regarding the phytoplankton community response to different management actions. 
Two critical next steps to further augment the empirical modelling work are iterative updating as more data are acquired through monitoring and the introduction of additional explanatory variables that are likely associated with the occurrence of cyanobacteria-dominated blooms. The majority of the process-based models are not constrained by the available data, and therefore their primary value lies in their use as heuristic tools to advance our understanding of Lake Erie. The validation of their predictive power should become one of the overarching objectives of the iterative monitoring-modelling-assessment cycles. With respect to the projected responses of the system to nutrient loading reduction, we express our skepticism about the optimistic predictions of the extent and duration of hypoxia, given our limited knowledge of the sediment diagenesis processes in the central basin along with the lack of data on the vertical profiles of organic matter and phosphorus fractionation or sedimentation/burial rates. Our study also questions the adequacy of the coarse spatiotemporal (seasonal/annual, basin- or lake-wide) scales characterizing the philosophy of both the water quality management objectives and the modelling enterprise in Lake Erie, as this strategy seems somewhat disconnected from the ecosystem services targeted. We conclude by emphasizing that the valuation of ecosystem services should be integrated into the decision-making process as we track the evolution of the system over time.

16.
To test for association between a disease and a set of linked markers, or to estimate relative risks of disease, several different methods have been developed. Many methods for family data require that individuals be genotyped at the full set of markers and that phase can be reconstructed; individuals with missing data are excluded from the analysis. This can result in an important decrease in sample size and a loss of information. A possible solution to this problem is to use missing-data likelihood methods. We propose an alternative approach, namely the use of multiple imputation. Briefly, this method consists of estimating, from the available data, all possible phased genotypes and their respective posterior probabilities. These posterior probabilities are then used to generate replicate imputed data sets via a data augmentation algorithm. We performed simulations to test the efficiency of this approach for case/parent trio data and found that the multiple imputation procedure generally gave unbiased parameter estimates with correct type I error and confidence interval coverage. Multiple imputation had some advantages over missing-data likelihood methods with regard to ease of use and model flexibility. Multiple imputation methods represent promising tools in the search for disease susceptibility variants.
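Once each imputed data set has been analyzed, the per-imputation estimates are combined with Rubin's rules, which split the variance into within- and between-imputation parts. A minimal sketch of the pooling step (the log relative-risk estimates and variances below are invented, not from the paper's simulations):

```python
import random

def rubin_pool(estimates, variances):
    """Rubin's rules: combine point estimates and variances from m imputations."""
    m = len(estimates)
    q_bar = sum(estimates) / m                   # pooled point estimate
    within = sum(variances) / m                  # average within-imputation variance
    between = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)
    total = within + (1.0 + 1.0 / m) * between   # total variance of the pooled estimate
    return q_bar, total

# Hypothetical: each imputation resolves uncertain phase, then re-estimates a
# log relative risk; the estimates and variances are invented for illustration.
random.seed(1)
estimates = [0.40 + random.gauss(0.0, 0.05) for _ in range(20)]
variances = [0.01] * 20
pooled, pooled_var = rubin_pool(estimates, variances)
```

The between-imputation term is what carries the phase and missing-genotype uncertainty into the final confidence interval, rather than discarding incomplete individuals.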

17.
In the past decades, a wide variety of tools have been developed to assess the sustainability performance of farms. Although multiple studies have compared tools on a theoretical basis, little attention has been paid to comparing tools in practice. This research compared indicator-based sustainability assessment tools to gain insight into the practical requirements, procedures and complexity involved in applying them. In addition, the relevance of the tools, as perceived by farmers, was evaluated. An overview of 48 indicator-based sustainability assessment tools was developed in order to select tools that address the environmental, social and economic dimensions of sustainability, have been issued in a scientific publication, and are suitable for assessing the sustainability performance of livestock and arable farms in Denmark. Only four tools (RISE, SAFA, PG and IDEA) complied with the selection criteria and were used to assess the sustainability performance of five Danish farms. The tools vary widely in their scoring and aggregation methods, time requirements and data input. The farmers perceived RISE as the most relevant tool for gaining insight into the sustainability performance of their farms. The findings emphasize the importance of context specificity, user-friendliness, complexity of the tool, language use, and a match between the value judgements of tool developers and farmers. Even though RISE was considered the most relevant tool, the farmers expressed hesitation about applying the outcomes of the four tools in their decision making and management, and they identified limitations in their options to improve their sustainability performance. Additional efforts are needed to support farmers in using the outcomes in their decision making. The outcomes of sustainability assessment tools should therefore be considered a starting point for discussion, reflection and learning.

18.
Species delimitation is the act of identifying species-level biological diversity. In recent years, the field has witnessed a dramatic increase in the number of methods available for delimiting species. However, most recent investigations utilize only a handful (i.e. 2–3) of the available methods, often for unstated reasons. Because the parameter space that is potentially relevant to species delimitation far exceeds the parameterization of any existing method, a given method necessarily makes a number of simplifying assumptions, any one of which could be violated in a particular system. We suggest that researchers should apply a wide range of species delimitation analyses to their data and place their trust in delimitations that are congruent across methods. Incongruence across the results from different methods is evidence either of differences among the approaches in the power to detect cryptic lineages, or of a violation of the assumptions of one or more of the methods. In either case, the inferences drawn from species delimitation studies should be conservative, for in most contexts it is better to fail to delimit species than it is to falsely delimit entities that do not represent actual evolutionary lineages.

19.
The stream–groundwater interface (SGI) is thought to be an important location within stream networks for dissolved organic carbon (DOC) processing (e.g., degradation, removal), since it is considered a hotspot for microbial activity and biogeochemical reactions. This research is one of the first attempts to collect and assess DOC conditions at the SGI across a stream network—an entire third-order, lowland watershed in Michigan, USA. We present an initial exploration of this unique data set and highlight some of the challenges of working at these scales. Overall, our results show that SGI DOC conditions are complex at the network scale and do not conform to predictions based upon previous point- and small-scale studies. We found no strong pattern of DOC removal within the SGI at the network scale, even after using chloride and temperature as natural tracers to differentiate between the hydrological processes and biogeochemical reactions influencing DOC cycling. Instead, trends in DOC quantity and molecular quality suggest that potential biotic reactions, including aerobic microbial respiration, influenced DOC concentrations at only some of the sites, while physical mixing of groundwater and stream surface water appears to explain the majority of the observed changes in DOC concentrations at the other sites. In addition, neither DOC removal at SGI sites nor the measured shifts in DOC molecular quality correlated with stream order. The analysis did reveal, however, that DOC variability across surface water, groundwater, and SGI locations was consistently greatest in the shallow sediments of the SGI, demonstrating that the SGI is a systematically distinct location for DOC conditions in the watershed. Our empirical stream network-scale SGI data are some of the first that are compatible with recently developed process-based, network-scale SGI models. Our results indicate that these process-based models may not accurately represent SGI exchange in lowland, groundwater discharge-dominated streams like the one in this study. Lastly, this study shows that new methods are needed to achieve the goal of making and linking SGI observations to network-scale biogeochemical processes and theory. To help develop such methods, we provide a discussion of the approach we used (i.e., "lessons learned") that might become the basis for systematic analysis of SGI porewaters in future, network-wide biogeochemical studies.
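Using chloride as a conservative tracer to separate mixing from reaction amounts to a two-endmember mixing calculation: the groundwater fraction inferred from chloride predicts the DOC expected from mixing alone, and the residual is attributed to reaction. A minimal sketch with invented concentrations (not the study's measurements):

```python
def groundwater_fraction(cl_sample, cl_ground, cl_stream):
    """Two-endmember mixing on conservative chloride: fraction of groundwater."""
    return (cl_sample - cl_stream) / (cl_ground - cl_stream)

def mixing_only_doc(f_ground, doc_ground, doc_stream):
    """DOC expected if mixing alone (no reaction) set the concentration."""
    return f_ground * doc_ground + (1.0 - f_ground) * doc_stream

# Hypothetical porewater sample (all concentrations in mg/L, invented):
f = groundwater_fraction(cl_sample=30.0, cl_ground=50.0, cl_stream=10.0)
expected_doc = mixing_only_doc(f, doc_ground=1.0, doc_stream=5.0)
measured_doc = 2.2
reaction_residual = expected_doc - measured_doc  # positive: net DOC loss beyond mixing
```

A residual near zero at a site is consistent with mixing dominating, which is the pattern the study reports at most of its sites.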

20.
Coastal environments contain some of the marine world's most important ecosystems and represent significant resources for human industry and recreation. Water quality in the coastal environment is extremely important for a number of reasons, from the protection of marine organisms and the well-being of marine ecosystems to the health of people in the region and the safety of industries such as aquaculture. As a result, it is essential that environmental health in coastal environments is monitored. Traditional monitoring methods include assessment of biological indices or direct measurements of water quality, which are based on in situ data collection and hence are often spatially or temporally limited. Remote sensing imagery is increasingly used as a rich source of spatial information, providing more detailed coverage than other methods. But the complexity of information in the imagery requires new analysis techniques that allow us to identify the components and possible causes of spatial and temporal variability. This paper presents a review of methods to analyse spatial and temporal variations in remote sensing data of coastal water quality, and discusses and compares these methods and the outcomes they achieve. Selected techniques are illustrated using a sample dataset of MODIS chlorophyll-a imagery. We consider classification methods (cluster analysis, discriminant analysis) that may be used in exploratory, confirmatory and predictive ways; methods that summarize and identify patterns within complex datasets (factor analysis, principal components analysis, self-organizing maps); and techniques that explicitly analyse spatial relationships (the semivariogram and geographically weighted regression). Each technique has a different purpose and addresses different questions. This review identifies how these methods can be utilized to address water quality variability, in order to foster wider application of such techniques for coastal water quality assessment and monitoring.
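Of the pattern-summarizing techniques reviewed, principal components analysis is the most direct to sketch: applied to a pixel-by-time matrix of chlorophyll-a values, its leading component recovers the dominant shared temporal pattern. The data below are synthetic (a made-up seasonal cycle plus noise), standing in for a MODIS-like stack:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stack: 12 monthly chlorophyll-a values at 100 coastal pixels,
# a shared seasonal cycle scaled per pixel, plus noise (all values invented).
months = np.arange(12)
seasonal = np.sin(2 * np.pi * months / 12)
amplitudes = rng.uniform(0.5, 2.0, 100)
data = np.outer(amplitudes, seasonal) + rng.normal(0.0, 0.1, (100, 12))

# PCA via SVD of the pixel-by-month matrix, centred over pixels.
centred = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()     # variance share per component
pc1_timeseries = Vt[0]                  # dominant temporal pattern
pc1_scores = centred @ pc1_timeseries   # per-pixel loading on that pattern
```

Mapping `pc1_scores` back onto pixel coordinates shows where the dominant temporal pattern is strongest, the kind of exploratory product the review discusses before moving to explicitly spatial techniques such as the semivariogram.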

