Similar Articles
1.
Essential Biodiversity Variables (EBV) are fundamental variables that can be used for assessing biodiversity change over time, for determining adherence to biodiversity policy, for monitoring progress towards sustainable development goals, and for tracking biodiversity responses to disturbances and management interventions. Data from observations or models that provide measured or estimated EBV values, which we refer to as EBV data products, can help to capture the above processes and trends and can serve as a coherent framework for documenting trends in biodiversity. Using primary biodiversity records and other raw data as sources to produce EBV data products depends on cooperation and interoperability among multiple stakeholders, including those collecting and mobilising data for EBVs and those producing, publishing and preserving EBV data products. Here, we encapsulate ten principles for the current best practice in EBV-focused biodiversity informatics as ‘The Bari Manifesto’, serving as implementation guidelines for data and research infrastructure providers to support the emerging EBV operational framework based on trans-national and cross-infrastructure scientific workflows. The principles provide guidance on how to contribute towards the production of EBV data products that are globally oriented, while remaining appropriate to the producer's own mission, vision and goals. These ten principles cover: data management planning; data structure; metadata; services; data quality; workflows; provenance; ontologies/vocabularies; data preservation; and accessibility. For each principle, desired outcomes and goals have been formulated. Some specific actions related to fulfilling the Bari Manifesto principles are highlighted in the context of each of four groups of organizations contributing to enabling data interoperability - data standards bodies, research data infrastructures, the pertinent research communities, and funders. 
The Bari Manifesto provides a roadmap enabling support for routine generation of EBV data products, and increases the likelihood of success for a global EBV framework.

2.
Policy makers require high-level summaries of biodiversity change. However, deriving such summaries from raw biodiversity data is a complex process involving several intermediary stages. In this paper, we describe an operational workflow for generating annual estimates of species occupancy at national scales from raw species occurrence data, which can be used to construct a range of policy-relevant biodiversity indicators. We describe the workflow in detail: from data acquisition, data assessment and data manipulation, through modelling, model evaluation, application and dissemination. At each stage, we draw on our experience developing and applying the workflow for almost a decade to outline the challenges that analysts might face. These challenges span many areas of ecology, taxonomy, data science, computing and statistics. In our case, the principal output of the workflow is annual estimates of occupancy, with measures of uncertainty, for over 5000 species in each of several defined ‘regions’ (e.g. countries, protected areas, etc.) of the UK from 1970 to 2019. This data product corresponds closely to the notion of a species distribution Essential Biodiversity Variable (EBV). Throughout the paper, we highlight methodologies that might not be applicable outside of the UK and suggest alternatives. We also highlight areas where the workflow can be improved; in particular, methods are needed to mitigate and communicate the risk of bias arising from the lack of representativeness that is typical of biodiversity data. Finally, we revisit the ‘ideal’ and ‘minimal’ criteria for species distribution EBVs laid out in previous contributions and pose some outstanding questions that should be addressed as a matter of priority. Going forward, we hope that this paper acts as a template for research groups around the world seeking to develop similar data products.
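The first aggregation step such a workflow performs can be sketched minimally as follows, assuming toy `(year, site, species)` detection tuples. The real workflow fits hierarchical occupancy-detection models that correct for imperfect detection; this naive per-year site proportion does not, and is shown only to illustrate the shape of the data product.

```python
from collections import defaultdict

def naive_annual_occupancy(records, species):
    """Proportion of visited sites where `species` was detected, per year.

    `records` is an iterable of (year, site, species) detection tuples.
    A production workflow would fit an occupancy-detection model that
    accounts for imperfect detection; this naive proportion ignores it.
    """
    visited = defaultdict(set)   # year -> sites with any recorded visit
    detected = defaultdict(set)  # year -> sites where `species` was seen
    for year, site, sp in records:
        visited[year].add(site)
        if sp == species:
            detected[year].add(site)
    return {year: len(detected[year]) / len(visited[year])
            for year in sorted(visited)}

records = [
    (1970, "A", "bee"), (1970, "B", "moth"),
    (1971, "A", "bee"), (1971, "B", "bee"), (1971, "C", "moth"),
]
trend = naive_annual_occupancy(records, "bee")  # {1970: 0.5, 1971: 0.66...}
```

Annual values like `trend` are what the paper's indicators are then built from, one series per species and region.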

3.
Recent advances in molecular technology have revolutionized research on all aspects of the biology of organisms, including ciliates, and created unprecedented opportunities for pursuing a more integrative approach to investigations of biodiversity. However, this goal is complicated by large gaps and inconsistencies that still exist in the foundation of basic information about biodiversity of ciliates. The present paper reviews issues relating to the taxonomy of ciliates and presents specific recommendations for best practice in the observation and documentation of their biodiversity. This effort stems from a workshop that explored ways to implement six Grand Challenges proposed by the International Research Coordination Network for Biodiversity of Ciliates (IRCN-BC). As part of its commitment to strengthening the knowledge base that supports research on biodiversity of ciliates, the IRCN-BC proposes to populate The Ciliate Guide, an online database, with biodiversity-related data and metadata to create a resource that will facilitate accurate taxonomic identifications and promote sharing of data.

4.
The stable maintenance of biodiversity is vital to human survival and development and to the health of the planet. Essential Biodiversity Variables (EBVs) aim to combine ground surveys with remote sensing technology to provide a new solution for large-scale, long-time-series biodiversity monitoring. However, the research community still lacks a standardized, national-scale set of EBV remote sensing monitoring products for biodiversity assessment. This study aims to construct, and reflect on, a system of EBV remote sensing products for China. It first reviews the current state of remote sensing research on EBVs, with an analysis based on the volume of EBV-related publications. Building on existing priority criteria for remote sensing biodiversity products, it adds a new criterion of "reproducibility" and accordingly constructs an indicator list for a Chinese EBV remote sensing product system and its monitoring datasets. Finally, it discusses outstanding problems in EBV remote sensing research in China. This study can provide a scientific basis for remote sensing-based biodiversity monitoring in China and is expected to support the formulation of China's biodiversity policy.

5.
6.
One of the most serious bottlenecks in the scientific workflows of the biodiversity sciences is the need to integrate data from different sources, software applications, and services for analysis, visualisation and publication. For more than a quarter of a century, the TDWG Biodiversity Information Standards organisation has played a central role in defining and promoting data standards and protocols that support interoperability between disparate and locally distributed systems. Although often not sufficiently recognized, TDWG standards are the foundation of many popular biodiversity informatics applications and infrastructures, ranging from small desktop software solutions to large-scale international data networks. However, individual scientists and groups of collaborating scientists have difficulty fully exploiting the potential of standards that are often notoriously complex, lack non-technical documentation, and use different representations and underlying technologies. In the last few years, a series of initiatives such as Scratchpads, the EDIT Platform for Cybertaxonomy, and biowikifarm have started to implement and set up virtual work platforms for the biodiversity sciences which shield their users from the complexity of the underlying standards. Apart from being practical workhorses for numerous working processes in the biodiversity sciences, they can be seen as information brokers mediating between multiple data standards and protocols. The ViBRANT project will further strengthen the flexibility and power of virtual biodiversity working platforms by building software interfaces between them, thus facilitating the essential information flows needed for comprehensive data exchange, data indexing, web publication, and versioning. This work will make an important contribution to shaping an international, interoperable, and user-oriented biodiversity information infrastructure.

7.
Biodiversity data generated in the context of research projects often lack a strategy for long-term preservation and availability, and are therefore at risk of becoming outdated and finally lost. The reBiND project aims to develop an efficient and well-documented workflow for rescuing such data sets. The workflow consists of phases for data transformation into contemporary standards, data validation, storage in a native XML database, and data publishing in international biodiversity networks. It has been developed and tested using the example of collection and observational data but is flexible enough to be transferred to other data types and domains.

8.
With the number of satellite sensors and data centers continuously increasing, managing and processing massive remote sensing data from multiple distributed sources is becoming the norm. However, combining multiple satellite data centers for collaborative processing of massive remote sensing (RS) data still faces many challenges. To reduce large-scale data migration and improve the efficiency of multi-datacenter collaborative processing, this paper presents the infrastructure and services for data management and workflow management in massive remote sensing data production. A dynamic data-scheduling strategy was employed to reduce duplicated data requests and data processing. By combining remote sensing spatial metadata repositories with the Gfarm grid file system, unified management of raw data, intermediate products and final products was achieved during co-processing. In addition, multi-level task-order repositories and workflow templates were used to construct production workflows automatically. With the help of specific heuristic scheduling rules, production tasks were executed quickly. Ultimately, the Multi-datacenter Collaborative Process System (MDCPS) was implemented for large-scale remote sensing data production based on this effective management of data and workflows. Experiments showed that these strategies can significantly enhance the efficiency of co-processing across multiple data centers.
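The core idea of a dynamic data-scheduling strategy that avoids duplicated requests can be sketched as a request-level cache. This is an illustrative design under our own assumptions, not MDCPS's actual implementation; the class and key structure below are invented for the example.

```python
class DataScheduler:
    """Toy sketch of duplicate-request suppression: identical requests
    for the same scene are fetched once, so repeated product requests
    across data centers do not trigger duplicate transfers."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable that actually retrieves a scene
        self._cache = {}
        self.fetch_count = 0     # how many real fetches were issued

    def request(self, sensor, region, date):
        key = (sensor, region, date)
        if key not in self._cache:
            self.fetch_count += 1
            self._cache[key] = self._fetch(*key)
        return self._cache[key]

sched = DataScheduler(lambda sensor, region, date: f"{sensor}/{region}/{date}")
a = sched.request("MODIS", "tile-27", "2015-06-01")
b = sched.request("MODIS", "tile-27", "2015-06-01")  # served from cache
```

Here the second request performs no new fetch, which is the migration-reducing behaviour the abstract describes, minus the distributed bookkeeping a real system needs.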

9.
Key global indicators of biodiversity decline, such as the IUCN Red List Index and the Living Planet Index, have relatively long assessment intervals. This means they, due to their inherent structure, function as late‐warning indicators that are retrospective, rather than prospective. These indicators are unquestionably important in providing information for biodiversity conservation, but the detection of early‐warning signs of critical biodiversity change is also needed so that proactive management responses can be enacted promptly where required. Generally, biodiversity conservation has dealt poorly with the scattered distribution of necessary detailed information, and needs to find a solution to assemble, harmonize and standardize the data. The prospect of monitoring essential biodiversity variables (EBVs) has been suggested in response to this challenge. The concept has generated much attention, but the EBVs themselves are still in development due to the complexity of the task, the limited resources available, and a lack of long‐term commitment to maintain EBV data sets. As a first step, the scientific community and the policy sphere should agree on a set of priority candidate EBVs to be developed within the coming years to advance both large‐scale ecological research as well as global and regional biodiversity conservation. Critical ecological transitions are of high importance from both a scientific as well as from a conservation policy point of view, as they can lead to long‐lasting biodiversity change with a high potential for deleterious effects on whole ecosystems and therefore also on human well‐being. We evaluated candidate EBVs using six criteria: relevance, sensitivity to change, generalizability, scalability, feasibility, and data availability and provide a literature‐based review for eight EBVs with high sensitivity to change. 
The proposed suite of EBVs comprises abundance, allelic diversity, body mass index, ecosystem heterogeneity, phenology, range dynamics, size at first reproduction, and survival rates. The eight candidate EBVs provide for the early detection of critical and potentially long-lasting biodiversity change and should be operationalized as a priority. Only with such an approach can science predict the future status of global biodiversity with high certainty and set up the appropriate conservation measures early and efficiently. Importantly, the selected EBVs would address a large range of conservation issues and contribute to a total of 15 of the 20 Aichi targets and are, hence, of high biological relevance.

10.
This paper discusses a number of aspects of using grid computing methods in support of molecular simulations, with examples drawn from the eMinerals project. A number of components of a useful grid infrastructure are discussed, including the integration of compute and data grids, automatic metadata capture from simulation studies, interoperability of data between simulation codes, management of data and data accessibility, management of jobs and workflow, and tools to support collaboration. Use of a grid infrastructure also brings certain challenges, which are discussed; these include making effective use of potentially boundless computing resources, the changes to working practices that this requires, and the need to manage experimentation at scale.

11.
Programs and initiatives aiming to protect biodiversity and ecosystems have multiplied over the last decades in response to their decline. Most are based on monitoring data used to quantitatively describe trends in biodiversity and ecosystems. Estimating such trends at large scales requires integrating numerous data from multiple monitoring sites. However, due to the high heterogeneity of data formats and the resulting lack of interoperability, such integration remains rare, and synthetic analyses are often limited to a restricted part of the available data. Here we propose a workflow, comprising four main steps from data gathering to quality control, to better integrate ecological monitoring data and to create a synthetic dataset that makes it possible to analyse larger sets of monitoring data, including unpublished data. The workflow was designed and applied in the production of the Status of Coral Reefs of the World: 2020 report, in which more than two hundred individual datasets were integrated to assess the status and trends of hard coral cover at the global scale. Based on the experience acquired during the production of this report, the workflow was also applied to two case studies with associated R code. The proposed workflow allows for the integration of datasets with different levels of taxonomic and spatial precision, with a high degree of reproducibility. It provides a conceptual and technical framework for integrating ecological monitoring data, enabling the estimation of temporal trends in biodiversity and ecosystems or the testing of ecological hypotheses at larger scales.
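The standardize-pool-filter core of such an integration workflow can be sketched as follows. The column names, the per-source rename maps, and the cover-range quality check are all invented for illustration; the report's actual workflow is implemented in R with its own schema.

```python
def integrate(datasets, column_map, max_cover=100):
    """Rename heterogeneous columns to a shared schema, pool rows from
    all sources, then apply a simple quality-control filter on percent
    hard-coral cover (illustrative stand-in for the real QC step)."""
    pooled = []
    for name, rows in datasets.items():
        mapping = column_map[name]
        for row in rows:
            std = {mapping.get(k, k): v for k, v in row.items()}
            std["source"] = name  # keep provenance of each record
            pooled.append(std)
    # quality control: keep only physically plausible cover values
    return [r for r in pooled if 0 <= r["cover"] <= max_cover]

datasets = {
    "siteA": [{"pc_cover": 42, "yr": 2019}, {"pc_cover": 150, "yr": 2019}],
    "siteB": [{"cover": 31, "yr": 2020}],
}
column_map = {"siteA": {"pc_cover": "cover", "yr": "year"},
              "siteB": {"yr": "year"}}
clean = integrate(datasets, column_map)  # the 150% record is rejected
```

Keeping a `source` field on every pooled record preserves the provenance needed to trace a global trend back to an individual monitoring program.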

12.
Biodiversity data derive from myriad sources stored in various formats on many distinct hardware and software platforms. An essential step towards understanding global patterns of biodiversity is to provide a standardized view of these heterogeneous data sources to improve interoperability. Fundamental to this advance are definitions of common terms. This paper describes the evolution and development of Darwin Core, a data standard for publishing and integrating biodiversity information. We focus on the categories of terms that define the standard, differences between simple and relational Darwin Core, how the standard has been implemented, and the community processes that are essential for maintenance and growth of the standard. We present case-study extensions of the Darwin Core into new research communities, including metagenomics and genetic resources. We close by showing how Darwin Core records are integrated to create new knowledge products documenting species distributions and changes due to environmental perturbations.
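A "simple Darwin Core" occurrence record is a flat set of term/value pairs drawn from the standard's vocabulary, which makes it easy to show in miniature. The term names below (`occurrenceID`, `basisOfRecord`, `scientificName`, `eventDate`, `decimalLatitude`, `decimalLongitude`) are genuine Darwin Core terms; the record values and the toy validator are invented for the example.

```python
# One simple Darwin Core occurrence record as term/value pairs.
record = {
    "occurrenceID": "urn:catalog:EX:12345",   # globally unique record ID
    "basisOfRecord": "HumanObservation",
    "scientificName": "Apis mellifera",
    "eventDate": "2019-05-14",
    "decimalLatitude": 52.37,
    "decimalLongitude": 4.90,
}

def missing_terms(rec, required=("occurrenceID",
                                 "basisOfRecord",
                                 "scientificName")):
    """Toy validator: return required terms absent or empty in `rec`."""
    return [t for t in required if rec.get(t) in ("", None)]

assert missing_terms(record) == []  # this record passes the toy check
```

Because every publisher flattens its data onto the same term names, records from different platforms can be pooled without per-source translation logic, which is the interoperability gain the abstract describes.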

13.
Spatial and/or temporal biases in biodiversity data can directly influence the utility, comparability, and reliability of ecological and evolutionary studies. While the effects of biased spatial coverage of biodiversity data are relatively well known, temporal variation in data quality (i.e., the congruence between recorded and actual information) has received much less attention. Here, we develop a conceptual framework for understanding the influence of time on biodiversity data quality based on three main processes: (1) the natural dynamics of ecological systems, such as species turnover or local extinction; (2) periodic taxonomic revisions; and (3) the loss of physical specimens and metadata due to inefficient curation, accidents, or funding shortfalls. Temporal decay in data quality driven by these three processes has fundamental consequences for the usage and comparability of data collected in different time periods. Data decay can be partly ameliorated by adopting standard protocols for generating, storing, and sharing data and metadata. However, some data degradation is unavoidable due to natural variation in ecological systems. Consequently, changes in biodiversity data quality over time need to be carefully assessed and, if possible, taken into account when analyzing aging datasets.

14.
15.
Biodiversity metadata support the querying, management and use of the actual data sets they describe. This paper analyzes progress in the development of metadata standards in China and reviews the metadata required and/or produced under the Convention on Biological Diversity. A biodiversity metadata standard was developed based on the characteristics of biodiversity data and in line with the framework of international metadata standards. The content of biodiversity metadata is divided into two levels. The first level consists of the metadata entities and elements necessary to uniquely identify a biodiversity data set, and is named the Core Metadata. The second level comprises the metadata entities and elements necessary to describe all aspects of a biodiversity data set. The standard for core biodiversity metadata presented in this paper is composed of 51 elements belonging to six categories (entities): inventory information, collection information, information on the content of the data set, management information, access information, and metadata management information. The name, definition, condition, data type, and field length of the metadata elements in these six categories (entities) are also described.
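The two-level structure described above can be sketched as a record grouped into the six core categories. The category keys mirror the six entities named in the abstract, but the element names inside each category are invented examples, not the standard's actual 51 elements.

```python
# The six core-metadata categories (entities) named in the abstract.
CORE_CATEGORIES = [
    "inventory", "collection", "content",
    "management", "access", "metadata_management",
]

def missing_categories(metadata):
    """Toy check: return core categories absent or empty in a record."""
    return [c for c in CORE_CATEGORIES if not metadata.get(c)]

metadata = {
    "inventory": {"title": "Bird survey of wetland X"},     # invented
    "collection": {"method": "point counts"},
    "content": {"taxa": "Aves", "period": "2001-2005"},
    "management": {"custodian": "Institute Y"},
    "access": {"license": "CC-BY"},
    "metadata_management": {"last_updated": "2006-03-01"},
}
assert missing_categories(metadata) == []  # all six entities present
```

A record that populates all six entities is uniquely identifiable in the sense the Core Metadata level requires; the second, descriptive level would add further elements within the same categories.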

16.
The Convention on Biological Diversity's strategic plan lays out five goals: “(A) address the underlying causes of biodiversity loss by mainstreaming biodiversity across government and society; (B) reduce the direct pressures on biodiversity and promote sustainable use; (C) improve the status of biodiversity by safeguarding ecosystems, species and genetic diversity; (D) enhance the benefits to all from biodiversity and ecosystem services; (E) enhance implementation through participatory planning, knowledge management and capacity building.” To meet and inform on the progress towards these goals, a globally coordinated approach is needed for biodiversity monitoring that is linked to environmental data and covers all biogeographic regions. During a series of workshops and expert discussions, we identified nine requirements that we believe are necessary for developing and implementing such a global terrestrial species monitoring program. The program needs to design and implement an integrated information chain from monitoring to policy reporting, to create and implement minimal data standards and common monitoring protocols to be able to inform Essential Biodiversity Variables (EBVs), and to develop and optimize semantics and ontologies for data interoperability and modelling. In order to achieve this, the program needs to coordinate diverse but complementary local nodes and partnerships. In addition, capacities need to be built for technical tasks, and new monitoring technologies need to be integrated. Finally, a global monitoring program needs to facilitate and secure funding for the collection of long-term data and to detect and fill gaps in under-observed regions and taxa. The accomplishment of these nine requirements is essential in order to ensure data is comprehensive, to develop robust models, and to monitor biodiversity trends over large scales. 
A global terrestrial species monitoring program will enable researchers and policymakers to better understand the status and trends of biodiversity.

17.
Human activity and land-use change are dramatically altering the sizes, geographical distributions and functioning of biological populations worldwide, with tremendous consequences for human well-being. Yet our ability to measure, monitor and forecast biodiversity change – crucial to addressing it – remains limited. Biodiversity monitoring systems are being developed to improve this capacity by deriving metrics of change from an array of in situ data (e.g. field plots or species occurrence records) and Earth observations (EO; e.g. satellite or airborne imagery). However, there are few ecologically based frameworks for integrating these data into meaningful metrics of biodiversity change. Here, I describe how concepts of pattern and scale in ecology could be used to design such a framework. I review three core topics: the role of scale in measuring and modelling biodiversity patterns with EO, scale-dependent challenges linking in situ and EO data and opportunities to apply concepts of pattern and scale to EO to improve biodiversity mapping. From this analysis emerges an actionable approach for measuring, monitoring and forecasting biodiversity change, highlighting key opportunities to establish EO as the backbone of global-scale, science-driven conservation.

18.
Climate change threatens to commit 15–37% of species to extinction by 2050. There is a clear need to support policy-makers in analyzing and assessing the impact of climate change alongside land-use change. This requires a megascience infrastructure capable of discovering and integrating enormous volumes of multi-disciplinary data, i.e. data from biodiversity, Earth observation, and climatic archives; metadata and service interoperability is therefore necessary. The Global Earth Observation System of Systems (GEOSS) works to realize such an interoperability infrastructure based on systems-architecture standardization. In this paper we describe the results of linking the infrastructures of climate change research and biodiversity research using the approach envisioned by GEOSS. We present and discuss a service-oriented framework that was applied to implement and demonstrate the climate change and biodiversity use scenario of the GEOSS Interoperability Process Pilot Project (IP3), with the purpose of enabling scientists to perform large-scale ecological analysis. We describe a generic use scenario and the related modelling workbench, which implement an environment for studying the impacts of climate change on biodiversity. The service-oriented architecture framework that realizes this environment is described, and its standards-based components and services, conforming to GEOSS requirements, are discussed. This framework was successfully demonstrated at the GEO IV Ministerial Meeting in Cape Town, South Africa, in November 2007.

19.
Biological knowledge can be inferred from three major levels of information: molecules, organisms and ecologies. Bioinformatics is an established field that has made significant advances in the development of systems and techniques to organize contemporary molecular data; biodiversity informatics is an emerging discipline that strives to develop methods to organize knowledge at the organismal level extending back to the earliest dates of recorded natural history. Furthermore, while bioinformatics studies generally focus on detailed examinations of key 'model' organisms, biodiversity informatics aims to develop over-arching hypotheses that span the entire tree of life. Biodiversity informatics is presented here as a discipline that unifies biological information from a range of contemporary and historical sources across the spectrum of life using organisms as the linking thread. The present review primarily focuses on the use of organism names as a universal metadata element to link and integrate biodiversity data across a range of data sources.
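Name-based linking of the kind the review focuses on amounts to a join keyed on the scientific name. The data sources, field names and values below are invented for illustration; real systems must additionally resolve synonyms and misspellings through taxonomic name services, which this sketch omits.

```python
def link_by_name(*sources):
    """Merge records from several data sources, keyed on the organism's
    scientific name (the 'linking thread'). Later sources add fields to
    the entry for an already-seen name; synonym resolution is omitted."""
    linked = {}
    for source in sources:
        for rec in source:
            linked.setdefault(rec["scientificName"], {}).update(
                {k: v for k, v in rec.items() if k != "scientificName"})
    return linked

molecular = [{"scientificName": "Apis mellifera", "sequences": 3}]
occurrence = [{"scientificName": "Apis mellifera", "occurrences": 120},
              {"scientificName": "Bombus terrestris", "occurrences": 80}]
linked = link_by_name(molecular, occurrence)
# "Apis mellifera" now carries both molecular and occurrence fields.
```

The fragility of this join under synonymy is exactly why the review treats names as metadata needing careful management rather than as plain strings.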

20.
Environmental DNA (eDNA) metabarcoding surveys enable rapid, noninvasive identification of taxa from trace samples with wide-ranging applications from characterizing local biodiversity to identifying food-web interactions. However, the technique is prone to error from two major sources: (a) contamination through foreign DNA entering the workflow, and (b) misidentification of DNA within the workflow. Both types of error have the potential to obscure true taxon presence or to increase taxonomic richness by incorrectly identifying taxa as present at sample sites, but multiple error sources can remain unaccounted for in metabarcoding studies. Here, we use data from an eDNA metabarcoding study designed to detect vertebrate species at waterholes in Australia's arid zone to illustrate where and how in the workflow errors can arise, and how to mitigate those errors. We detected the DNA of 36 taxa spanning 34 families, 19 orders and five vertebrate classes in water samples from waterholes, demonstrating the potential for eDNA metabarcoding surveys to provide rapid, noninvasive detection in remote locations, and to widely sample taxonomic diversity from aquatic through to terrestrial taxa. However, we initially identified 152 taxa in the samples, meaning there were many false positive detections. We identified the sources of these errors, allowing us to design a stepwise process to detect and remove error, and provide a template to minimize similar errors that are likely to arise in other metabarcoding studies. Our findings suggest eDNA metabarcoding surveys need to be carefully conducted and screened for errors to ensure their accuracy.
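Two common steps in a stepwise error filter of the kind described above can be sketched as follows. The taxa, read counts and thresholds are invented for illustration and are not the study's actual filtering rules.

```python
def filter_detections(sample_reads, control_reads, min_reads=10):
    """Stepwise error filter (illustrative thresholds): discard taxa
    whose read count falls below a minimum-abundance cutoff, or does
    not exceed the highest count for that taxon in negative controls
    (i.e. is no better than known contamination)."""
    kept = {}
    for taxon, reads in sample_reads.items():
        if reads < min_reads:
            continue  # low-abundance noise
        if reads <= control_reads.get(taxon, 0):
            continue  # indistinguishable from contamination
        kept[taxon] = reads
    return kept

sample = {"dingo": 5400, "cattle": 12, "human": 30, "goat": 4}
controls = {"human": 45}  # human DNA appeared in negative controls
detections = filter_detections(sample, controls)
# "human" and "goat" are removed as likely false positives.
```

Applying such filters is how an initial inflated taxon list (152 in the study) is reduced toward the set of credible detections (36), at the cost of possibly discarding genuine low-abundance taxa.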

