Similar Articles
20 similar articles found (search time: 31 ms)
1.
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether the chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or when limited budgets mean that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as the sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. Electronic supplementary material  The online version of this article (doi:) contains supplementary material, which is available to authorized users.

2.
3.

Purpose  

At the parameter level, data inaccuracy, data gaps, and the use of unrepresentative data have been recognized as sources of uncertainty in life cycle assessment (LCA). In many LCA uncertainty studies, parameter distributions were created based on the measured variability or on “rules of thumb,” but the possible existence of correlation was not explored. The correlation between parameters may alter the sampling space and, thus, yield unrepresentative results. The objective of this article is to describe the effect of correlation between input parameters (and the final product) on the outcome of an uncertainty analysis, carried out for an LCA of an agricultural product.

4.
Uncertainty calculation in life cycle assessments
Goal and Background

Uncertainty is commonly not taken into account in LCA studies, which downgrades their usability for decision support. One often-stated reason is a lack of method. The aim of this paper is to develop a method for calculating uncertainty propagation in LCAs in a fast and reliable manner.

Approach

The method is developed in a model that reflects the calculation of an LCA. For calculating the uncertainty, the model combines approximation formulas and Monte Carlo simulation. It is based on virtual data that distinguishes between true values and random errors or uncertainty, and hence allows the performance of error propagation formulas to be compared with simulation results. The model is developed for a linear chain of processes, but extensions covering branched and looped product systems are also made and described.

Results

The paper proposes a combined use of approximation formulas and Monte Carlo simulation for calculating uncertainty in LCAs, developed primarily for the sequential approach. During the calculation, a monitored parameter controls the performance of the approximation formulas; quantitative threshold values are given in the paper. The combination thus transcends the drawbacks of both simulation and approximation.

Conclusions and Outlook

The uncertainty question is a true jigsaw puzzle for LCAs, and the method presented in this paper may serve as one piece in solving it. It may thus foster a sound use of uncertainty assessment in LCAs. Neighbouring puzzle pieces inviting further work include: analysing proper management of input uncertainty, taking into account suitable sampling and estimation techniques; applying the approach to real case studies; implementing it in LCA software so that the proposed combined uncertainty model is applied automatically; and investigating how people do decide, and should decide, when their decisions rely on explicitly uncertain LCA outcomes.
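As a generic sketch of the idea (hypothetical numbers, not from the paper): for a linear chain modeled as a product of independent factors, a first-order error propagation formula can be checked against a Monte Carlo simulation of the same chain.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical process factors: (mean, standard deviation) per unit process
processes = [(2.0, 0.2), (0.5, 0.05), (10.0, 1.5)]

# First-order approximation for a product of independent factors:
# (sd_Y / mean_Y)^2 ~= sum of squared relative standard deviations
mean_y = np.prod([m for m, s in processes])
rel_var = sum((s / m) ** 2 for m, s in processes)
sd_y_approx = mean_y * np.sqrt(rel_var)

# Monte Carlo simulation of the same chain: elementwise product of draws
draws = np.prod([rng.normal(m, s, 100_000) for m, s in processes], axis=0)
sd_y_mc = draws.std()

print(f"approximation: {sd_y_approx:.3f}, Monte Carlo: {sd_y_mc:.3f}")
```

With relative uncertainties this small, the approximation formula and the simulation agree closely, which is the regime in which the paper's threshold check would allow the faster formula to be used.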

5.
Attributional and consequential LCA of milk production
Background, aim and scope

Different ways of performing a life cycle assessment (LCA) are used to assess the environmental burden of milk production. The choice between attributional LCA (ALCA) and consequential LCA (CLCA) is strongly connected to the choice of how to handle co-products. Insight is needed into the effect of this choice on the results of environmental analyses of agricultural products, such as milk. The main goal of this study was to demonstrate and compare ALCA and CLCA of an average conventional milk production system in The Netherlands.

Materials and methods

ALCA describes the pollution and resource flows within a chosen system attributed to the delivery of a specified amount of the functional unit. CLCA estimates how pollution and resource flows within a system change in response to a change in output of the functional unit. For an average Dutch conventional milk production system, an ALCA (mass and economic allocation) and a CLCA (system expansion) were performed. Impact categories included in the analyses were: land use, energy use, climate change, acidification and eutrophication. The comparison was based on four criteria: hotspot identification, comprehensibility, and quality and availability of data.

Results

Total environmental burdens were lower with CLCA than with ALCA. Major hotspots for the different impact categories were similar under CLCA and ALCA, but other hotspots differed in contribution, order and type. As experienced by the authors, ALCA and co-product allocation are difficult to comprehend for a consequential practitioner, while CLCA and system expansion are difficult to comprehend for an attributional practitioner. The literature suggests that the concentrates used within ALCA will be more understandable for a feeding expert than the feed used within CLCA. Outcomes of CLCA are more sensitive to uncertainties than those of ALCA, due to the inclusion of market prospects. The amount of data required is similar for CLCA and ALCA.

Discussion

The main cause of these differences between ALCA and CLCA is that different systems are modelled. The goal of the study, or the research question to be answered, defines the system under study. In general, the goal of CLCA is to assess the environmental consequences of a change in demand, whereas the goal of ALCA is to assess the environmental burden of a product assuming a status-quo situation. Nowadays, however, most LCA practitioners choose one methodology independently of their research question.

Conclusions

This study showed that it is possible to perform both ALCA (mass and economic allocation) and CLCA (system expansion) of milk. Choices of methodology, however, resulted in differences in total quantitative outcomes, hotspots, degree of understanding and quality.

Recommendations and perspectives

We recommend that LCA practitioners distinguish more clearly between ALCA and CLCA in applied studies to reach a higher degree of transparency. Furthermore, we recommend that LCA practitioners in other research areas perform similar case studies to address the differences between ALCA and CLCA for their specific products, as the outcomes might differ from ours.

6.
Goal, Scope and Background

Decision-makers demand information about the range of possible outcomes of their actions. Therefore, for developing Life Cycle Assessment (LCA) as a decision-making tool, Life Cycle Inventory (LCI) databases should provide uncertainty information. Approaches for incorporating uncertainty should be selected contingent upon the characteristics of the LCI database. For example, in industry-based LCI databases where large amounts of up-to-date process data are collected, statistical methods might be useful for quantifying the uncertainties. However, in practice, there is still a lack of knowledge as to which statistical methods are most effective for obtaining the required parameters. Another concern from the industry's perspective is the confidentiality of the process data. The aim of this paper is to propose a procedure for incorporating uncertainty information with statistical methods in industry-based LCI databases, which at the same time preserves the confidentiality of individual data.

Methods

The proposed procedure for taking uncertainty in industry-based databases into account has two components: continuous probability distributions fitted to scattered unit process data, and rank order correlation coefficients between inventory flows. The type of probability distribution is selected using statistical methods such as goodness-of-fit statistics or experience-based approaches. Parameters of probability distributions are estimated using maximum likelihood estimation. Rank order correlation coefficients are calculated for inventory items in order to preserve data interdependencies. These probability distributions and rank order correlation coefficients may then be used in Monte Carlo simulations to quantify uncertainties in LCA results as probability distributions.

Results and Discussion

A case study is performed on technology selection for polyethylene terephthalate (PET) chemical recycling systems. Three processes are evaluated based on CO2 reduction compared with the conventional incineration technology. To illustrate the application of the proposed procedure, assumptions were made about the uncertainty of LCI flows. The application of the probability distributions and the rank order correlation coefficients is shown, and a sensitivity analysis is performed. A potential use of the results of the hypothetical case study is discussed.

Conclusion and Outlook

The case study illustrates how the uncertainty information in LCI databases may be used in LCA. Since the actual scattered unit process data were not available for the case study, the uncertainty distribution of the LCA result is hypothetical. However, the merit of adopting the proposed procedure has been illustrated: more informed decision-making becomes possible, basing decisions on the significance of the LCA results. With this illustration, the authors hope to encourage both database developers and data suppliers to incorporate uncertainty information in LCI databases.
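A minimal sketch of the two components the procedure combines, using made-up unit-process data (the `ranks` helper and all values are illustrative, not from the paper): maximum likelihood fitting of a lognormal distribution and a rank order (Spearman) correlation between two inventory flows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scattered unit-process data for one inventory flow
data = rng.lognormal(mean=0.5, sigma=0.3, size=50)

# Maximum likelihood estimates of the lognormal parameters:
# the MLE of (mu, sigma) is the mean and std of the log-transformed data
mu_hat = np.log(data).mean()
sigma_hat = np.log(data).std()

# A correlated second inventory flow (hypothetical)
flow_a = data
flow_b = 0.8 * data + rng.normal(0, 0.2, size=50)

def ranks(x):
    # Rank transform: smallest value gets rank 1
    r = np.empty_like(x)
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

# Spearman rank-order correlation = Pearson correlation of the ranks;
# it preserves interdependencies without exposing the raw confidential data
rho = np.corrcoef(ranks(flow_a), ranks(flow_b))[0, 1]
print(f"mu={mu_hat:.2f}, sigma={sigma_hat:.2f}, rank correlation={rho:.2f}")
```

The fitted parameters and the rank correlation are exactly the summary quantities a database could publish in place of the raw data, ready for use in a correlated Monte Carlo simulation.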

7.
Although the auger method has been reported to be simple and superior to other methods of root determination, a standard procedure for determining roots with it is lacking. In a bid to standardize the auger method for studying wheat root distribution, we sampled roots with 5, 7.5 and 10 cm ID augers on the row and midway between rows, down to 180 cm. The suitability of a sampling scheme was judged from the bias between observed and actual root length densities (RLD). The actual density in a layer was obtained by integrating the equation fitted to the average root density data horizontally between 0 and 11 cm, because for wheat rows spaced 22 cm apart the representative half of the unit soil strip extends 11 cm from the row; the assumed actual RLD was the average of the horizontal distribution of RLD in a particular layer. Single-site sampling, whether on the row or between rows, gave the maximum bias. Averaging two sites (on the row and midway between rows) with the 10 cm ID auger, the same two sites with the 7.5 cm ID auger, or three sites with the 5 cm ID auger (an additional site midway between the first two) gave the best estimates, in that order.

8.

Purpose

The analysis of uncertainty in life cycle assessment (LCA) studies has been a topic for more than 10 years, and many commercial LCA programs now feature a sampling approach called Monte Carlo analysis. Yet a full Monte Carlo analysis of a large LCA system, for instance one containing the 4,000 unit processes of ecoinvent v2.2, is rarely carried out by LCA practitioners. One reason for this is computation time. A faster alternative to Monte Carlo is analytical error propagation by means of a Taylor series expansion; however, this approach is explained in the literature in conflicting ways, which has hampered its implementation in most LCA software packages. The purpose of this paper is to compare the two approaches from a theoretical and practical perspective.

Methods

In this paper, we compare the analytical and sampling approaches in terms of their theoretical background and their mathematical formulation. Using three case studies—one stylized, one real-sized, and one input–output (IO)-based—we approach these techniques from a practical perspective and compare them in terms of speed and results.

Results

Which approach provides the more useful information depends on the precise question. Whenever they provide the same indicators, the analytical approach is much faster, but it is less reliable when the uncertainties are large.

Conclusions

For a good analysis, analytical and sampling approaches are equally important. We recommend that practitioners use both whenever available, and that software suppliers implement both.
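A small illustration of why the analytical approach degrades for large uncertainties (hypothetical values, not the paper's case studies): take an output Y = exp(X) with X ~ Normal(0, sigma), i.e. Y is lognormal, the shape typically assumed for LCA parameters, and compare the first-order Taylor estimate of sd(Y) with Monte Carlo and with the exact value.

```python
import numpy as np

rng = np.random.default_rng(1)

results = {}
for sigma in (0.1, 1.0):  # small vs large input uncertainty
    # First-order Taylor: sd(Y) ~= |dY/dX| at X=0, times sd(X) = sigma
    sd_taylor = sigma
    # Exact standard deviation of a lognormal with log-sd = sigma
    sd_exact = np.sqrt((np.exp(sigma**2) - 1) * np.exp(sigma**2))
    # Monte Carlo estimate
    sd_mc = np.exp(rng.normal(0.0, sigma, 500_000)).std()
    results[sigma] = (sd_taylor, sd_mc, sd_exact)
    print(f"sigma={sigma}: Taylor {sd_taylor:.3f}, "
          f"Monte Carlo {sd_mc:.3f}, exact {sd_exact:.3f}")
```

For sigma = 0.1 the three numbers nearly coincide; for sigma = 1.0 the Taylor estimate understates the true spread by more than a factor of two, matching the paper's caveat about large uncertainties.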

9.
This paper compares the distribution, sampling and estimation of abundance for two animal species in an African ecosystem by means of an intensive simulation of the sampling process within a geographical information system (GIS) environment. It focuses on systematic and random sampling designs, commonly used in wildlife surveys, comparing their performance to an adaptive design at three increasing sampling intensities using the root mean square error (RMSE). It further assesses the impact of sampling designs and intensities on estimates of population parameters. The simulation is based on data collected during a prior survey, in which the geographical locations of all observed animals were recorded. This provides more detailed data than are usually available from transect surveys. The results show that the precision of estimates increases with sampling intensity, while no significant differences are observed between estimates obtained under random and systematic designs. An increase in precision is observed for the adaptive design, validating its use for sampling clustered populations. The study illustrates the benefits of combining statistical methods with GIS techniques to increase insight into wildlife population dynamics.

10.
Two plant fossil-bearing beds from the middle Barremian of Belgium were analysed to ascertain how experimental designs affect conclusions regarding palaeodiversity at a local scale. We analysed eight lateral samples per bed, taken regularly every 3 m, using an exhaustive sub-sampling method. The Clench equation was used to evaluate the completeness of the taxonomic inventory of the samples and the sampling effort needed to obtain a reliable representation of diversity. The number of replicates needed to obtain the same representation of diversity from different nearby lateral samples of the same bed ranged from 5 to 19. Richness (S), evenness (J) and the number of equiprobable taxa (2^H′) varied greatly between samples from the same bed, even over short distances. Only one of the studied samples was representative of the taxonomic inventory of its bed. Our study shows that: 1) the selection bias of the sampling area is reduced by increasing the number of lateral samples taken in a bed, enabling more reliable conclusions about local-scale diversity; 2) intense sub-sampling methods are needed to account for statistically independent observations of detailed lateral variation; and 3) sampling methods in palaeodiversity analyses must aim for a similar degree of representativeness across samples rather than a homogeneous sample size. A sampling effort analysis provides evidence of the completeness of the data set and allows the amount of work required to be adjusted. Implementing the Clench equation in palaeodiversity analyses improves the performance of data acquisition in palaeoecological studies and provides a quality test of the data sets derived from them.
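For readers unfamiliar with the Clench equation, a minimal sketch (the parameter values are hypothetical, not fitted to the Belgian beds): the accumulation curve S(x) = a·x / (1 + b·x) rises toward an asymptotic richness of a/b, and solving it for a target proportion q of that asymptote gives the sampling effort required.

```python
# Clench accumulation curve: S(x) = a*x / (1 + b*x), where x is sampling
# effort (e.g. number of sub-samples) and a/b is the asymptotic richness.
# Hypothetical parameters:
a, b = 12.0, 0.4
asymptote = a / b  # 30 taxa expected in total

def clench(x: float) -> float:
    return a * x / (1 + b * x)

def effort_for(q: float) -> float:
    # Effort needed to record a proportion q of the asymptotic richness:
    # solve q * a/b = a*x / (1 + b*x)  =>  x = q / (b * (1 - q))
    return q / (b * (1 - q))

x90 = effort_for(0.90)
print(f"asymptote: {asymptote} taxa; effort for 90%: {x90:.1f} sub-samples")
```

The sharply nonlinear effort curve (90% of the taxa here takes 22.5 sub-samples, 95% would take 47.5) is what lets such analyses adjust the amount of work to a target level of completeness.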

11.
12.
The invasion of woody plants into grass-dominated ecosystems has occurred worldwide during the past century, with potentially significant impacts on soil organic carbon (SOC) storage, ecosystem carbon sequestration and global climate warming. To date, most studies of the impacts of tree and shrub encroachment on SOC have been conducted at small scales, and results are equivocal. To quantify the effects of woody plant proliferation on SOC at broad spatial scales, and to potentially resolve inconsistencies reported from studies conducted at fine spatial scales, information regarding the spatial variability and uncertainty of SOC is essential. We used sequential indicator simulation (SIS) to quantify the spatial uncertainty of SOC in a grassland undergoing shrub encroachment in the Southern Great Plains, USA. Results showed that both the SOC pool size and its spatial uncertainty increased with the development of woody communities in grasslands. The higher uncertainty of SOC in new shrub-dominated communities may be the result of their relatively recent development, their more complex above- and belowground architecture, stronger within-community gradients, and a greater degree of faunal disturbance. Simulations of alternative sampling designs demonstrated the effects of spatial uncertainty on the accuracy of SOC estimates and enabled us to evaluate the efficiency of sampling strategies aimed at quantifying landscape-scale SOC pools. An approach combining stratified random sampling with unequal point densities and transect sampling of landscape elements exhibiting strong internal gradients yielded the best estimates. Complete random sampling was less effective and required much higher sampling densities. The results provide novel insights into the spatial uncertainty of SOC and its effects on estimates of carbon sequestration in terrestrial ecosystems, and suggest an effective protocol for estimating soil attributes in landscapes with complex vegetation patterns.

13.
A primary challenge of animal surveys is to understand how to reliably sample populations exhibiting strong spatial heterogeneity. Building upon recent findings from survey, tracking and tagging data, we investigate spatial sampling of a seasonally resident population of Atlantic bluefin tuna in the Gulf of Maine, Northwestern Atlantic Ocean. We incorporate empirical estimates to parameterize a stochastic population model and simulate measurement designs to examine survey efficiency and precision under variation in tuna behaviour. We compare results for random, systematic, stratified, adaptive and spotter-search survey designs, with spotter-search comprising irregular transects that target surfacing schools and known aggregation locations (i.e., areas of expected high population density) based on a priori knowledge. Results show how survey precision is expected to vary on average with sampling effort, in agreement with general sampling theory, and provide uncertainty ranges based on simulated variance in tuna behaviour. Simulation results indicate that spotter-search provides the highest level of precision; however, measurable bias in the observer-school encounter rate contributes substantial uncertainty. Considering survey bias, precision, efficiency and anticipated operational costs, we propose that adaptive-stratified sampling alone, or a combination of adaptive stratification and spotter-search (a mixed-layer design whereby a priori information on the location and size of school aggregations is provided by sequential spotter-search sampling), may provide the best approach for reducing uncertainty in seasonal abundance estimates.

14.

Purpose

Some LCA software tools use precalculated aggregated datasets because they make LCA calculations much quicker. However, these datasets pose problems for uncertainty analysis. Even when aggregated dataset parameters are expressed as probability distributions, each dataset is sampled independently. This paper explores why independent sampling is incorrect and proposes two techniques to account for dependence in uncertainty analysis. The first is based on an analytical approach, while the other uses precalculated results sampled dependently.

Methods

The algorithm for generating arrays of dependently presampled aggregated inventories and their LCA scores is described. These arrays are used to calculate the correlation across all pairs of aggregated datasets in two ecoinvent LCI databases (2.2, 3.3 cutoff). The arrays are also used in the dependently presampled approach. The uncertainty of LCA results is calculated under different assumptions and using four different techniques and compared for two case studies: a simple water bottle LCA and an LCA of burger recipes.

Results and discussion

The meta-analysis of two LCI databases shows that there is no single correct approximation of correlation between aggregated datasets. The case studies show that the uncertainty of single-product LCA using aggregated datasets is usually underestimated when the correlation across datasets is ignored and that the magnitude of the underestimation is dependent on the system being analysed and the LCIA method chosen. Comparative LCA results show that independent sampling of aggregated datasets drastically overestimates the uncertainty of comparative metrics. The approach based on dependently presampled results yields results functionally identical to those obtained by Monte Carlo analysis using unit process datasets with a negligible computation time.

Conclusions

Independent sampling should not be used for comparative LCA. Moreover, the use of a one-size-fits-all correction factor to correct the calculated variability under independent sampling, as proposed elsewhere, is generally inadequate. The proposed approximate analytical approach is useful to estimate the importance of the covariance of aggregated datasets but not for comparative LCA. The approach based on dependently presampled results provides quick and correct results and has been implemented in EcodEX, a streamlined LCA software used by Nestlé. Dependently presampled results can be used for streamlined LCA software tools. Both presampling and analytical solutions require a preliminary one-time calculation of dependent samples for all aggregated datasets, which could be centrally done by database providers. The dependent presampling approach can be applied to other aspects of the LCA calculation chain.
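The core of the dependent-versus-independent sampling issue can be sketched with a toy system (all numbers hypothetical, not the paper's case studies): two product systems share an upstream process, so a fair comparison must reuse the same draws for that shared process in both aggregated scores.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# A shared upstream process (e.g. electricity) feeds both product systems
electricity = rng.lognormal(0.0, 0.3, n)   # hypothetical impact per kWh
a_extra = rng.lognormal(-1.0, 0.1, n)      # processes unique to product A
b_extra = rng.lognormal(-1.2, 0.1, n)      # processes unique to product B

# Dependent sampling: both aggregated scores reuse the SAME electricity draws
score_a = 2.0 * electricity + a_extra
score_b = 1.8 * electricity + b_extra

# Independent sampling: product B gets fresh, uncorrelated electricity draws
score_b_indep = 1.8 * rng.lognormal(0.0, 0.3, n) + b_extra

sd_dep = (score_a - score_b).std()
sd_indep = (score_a - score_b_indep).std()
print(f"sd of comparison, dependent: {sd_dep:.3f}; independent: {sd_indep:.3f}")
```

Under dependent sampling the shared electricity uncertainty largely cancels in the difference; under independent sampling it does not, inflating the apparent uncertainty of the comparison by an order of magnitude in this toy case.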

15.
Many quantitative genetic statistics are functions of variance components, for which a large number of replicates is needed for precise estimates and reliable measures of uncertainty, on which sound interpretation depends. Moreover, in large experiments some individuals may die, so methods for analysing such data need to be robust to missing values. We show how confidence intervals for narrow-sense heritability can be calculated in a nested full-sib/half-sib breeding design (males crossed with several females) in the presence of missing values. Simulations indicate that the method provides accurate results, and that estimator uncertainty is lowest for sampling designs with many males relative to the number of females per male, and with more females per male than progenies per female. Missing data generally had little influence on estimator accuracy, suggesting that the overall number of observations should be increased even if this results in unbalanced data. We also suggest the use of parametrically simulated data to investigate in advance the accuracy of planned experiments. Together with the proposed confidence intervals, an informed decision on the optimal sampling design is possible, allowing efficient allocation of resources.
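The point estimator underlying such designs can be sketched as follows (standard quantitative-genetics formula for a nested full-sib/half-sib design; the variance components below are hypothetical, and the paper's contribution, the confidence intervals with missing data, is not reproduced here):

```python
# Narrow-sense heritability from sire, dam and residual variance components.
# Sires transmit a quarter of the additive genetic variance, so
# h^2 = 4 * V_sire / (V_sire + V_dam + V_residual).
def heritability(v_sire: float, v_dam: float, v_resid: float) -> float:
    return 4.0 * v_sire / (v_sire + v_dam + v_resid)

# Hypothetical variance components from a fitted nested model
h2 = heritability(v_sire=0.5, v_dam=0.6, v_resid=2.9)
print(f"h^2 = {h2:.2f}")
```

Because h² is a ratio of estimated variance components, its sampling error depends on how replication is split between sires, dams and progeny, which is exactly why the abstract's design recommendations (many males, more females per male than progenies per female) matter.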

16.

Purpose

The objectives of this study are to evaluate life cycle assessment (LCA) results for concrete mix designs containing alternative cement replacement materials, in comparison with conventional 100% general use cement concrete, and to evaluate the interplay and sensitivity of LCA across four concrete mix designs and six functional units that range in complexity and in the variables they capture.

Methods

Six functional units with varying degrees of complexity are included in the analysis: (i) volume of concrete, (ii) volume and 28-day compressive strength, (iii) volume and 28-day rapid chloride permeability (RCP), (iv) volume and binder intensity, (v) volume and a combination of compressive strength and RCP and (vi) volume and a combination of binder intensity and RCP. Four reference flows are included in the analysis: three concrete mix designs containing slag, silica fume and limestone cement as cement replacement and one concrete mix design for conventional concrete.

Results and discussion

All three alternative mix designs were evaluated to have lower environmental impacts than the base 100% general use cement mix and so are considered ‘green’ concrete. Similar LCA results were observed for FU1, FU2 and FU4, and relatively similar results for FU3, FU5 and FU6. LCAs conducted with functional units that were a function of durability exhibited markedly different (lower) impacts compared with functional units that did not capture long-term durability.

Conclusions

Outcomes of this study portray the interplay between concrete mix design materials, choice of functional unit and environmental impact based on LCA. The results emphasize (i) the non-linearity between material properties and environmental impact and (ii) the importance of conducting an LCA with a selected functional unit that captures the concrete’s functional performance metrics specific to its application and expected exposure conditions. Based on this study, it is recommended that a complete LCA for a given concrete mix design should entail examination of multiple functional units in order to identify the range of environmental impacts or the optimal environmental impacts.

17.
Life-Cycle Assessment (LCA) is a decision analysis tool used to compare alternatives for providing a given product or service. To ensure a fair comparison, LCA must select system boundaries in a consistent manner. The Relative Mass-Energy-Economic (RMEE) method (pronounced ‘army’) of system boundary selection is a practical and quantitative way of defining system boundaries. RMEE compares each input to a system with the system’s functional unit on a mass, energy and economic basis. If this ratio of input to functional unit is less than a selected cut-off (defined as Z_RMEE), then the input is excluded from the analysis and all unit processes upstream of that input fall outside the system boundary. Ignoring unit processes outside the system boundary limits the size of the LCA analysis but adds a source of uncertainty to the overall results. The lower the value of the Z cut-off ratio, the larger the system boundary, and the greater the number of unit processes included.
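A minimal sketch of the RMEE cut-off test (all flows and the cut-off value are hypothetical; the exclusion rule shown, requiring all three ratios to fall below Z_RMEE before an input is dropped, is our reading of the abstract, not code from the paper):

```python
# Each input is compared to the functional unit on mass, energy and
# economic bases; if every ratio falls below the cut-off Z_RMEE, the
# input (and everything upstream of it) is left outside the boundary.
Z_RMEE = 0.01

functional_unit = {"mass_kg": 1000.0, "energy_MJ": 5000.0, "value_usd": 800.0}

inputs = {
    "steel":     {"mass_kg": 120.0, "energy_MJ": 30.0, "value_usd": 90.0},
    "lubricant": {"mass_kg": 0.5,   "energy_MJ": 20.0, "value_usd": 4.0},
    "label_ink": {"mass_kg": 0.02,  "energy_MJ": 0.4,  "value_usd": 0.1},
}

def inside_boundary(inp: dict) -> bool:
    # Include the input if ANY of the three ratios reaches the cut-off
    return any(inp[k] / functional_unit[k] >= Z_RMEE for k in functional_unit)

for name, flows in inputs.items():
    print(name, "included" if inside_boundary(flows) else "excluded")
```

Lowering Z_RMEE pulls more of these marginal inputs (and their upstream processes) inside the boundary, which is the size-versus-uncertainty trade-off the abstract describes.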

18.
Web surveys have replaced face-to-face and computer-assisted telephone interviewing (CATI) as the main mode of data collection in most countries. This trend was reinforced by COVID-19 pandemic-related restrictions. However, this mode still faces significant limitations in obtaining probability-based samples of the general population. For this reason, most web surveys rely on nonprobability survey designs. Whereas probability-based designs continue to be the gold standard in survey sampling, nonprobability web surveys may still prove useful in some situations. For instance, when small subpopulations are the group under study and probability sampling is unlikely to meet sample size requirements, complementing a small probability sample with a larger nonprobability one may improve the efficiency of the estimates. Nonprobability samples may also be designed as a means of compensating for known biases in probability-based web survey samples by purposely targeting respondent profiles that tend to be underrepresented in these surveys. This is the case in the Survey on the impact of the COVID-19 pandemic in Spain (ESPACOV) that motivates this paper. In this paper, we propose a methodology for combining probability and nonprobability web-based survey samples with the help of machine-learning techniques. We then assess the efficiency of the resulting estimates by comparing them with other strategies that have been used before. Our simulation study and the application of the proposed estimation method to the second wave of the ESPACOV survey allow us to conclude that this is the best option for reducing the biases observed in our data.

19.
Question: Static sampling designs for efficiently collecting spatial data are readily utilized by ecologists; however, most ecological systems involve a multivariate spatial process that evolves dynamically over time. Efficient monitoring of such spatio-temporal systems can be achieved by modeling the dynamic system and reducing the uncertainty associated with the effect of design choice at future observation times. Can we combine traditional techniques with dynamic methods to find optimal dynamic sampling designs for monitoring the succession of a herbaceous community? Location: Lower Hamburg Bend Conservation Area, Missouri, USA (40°34′42″ lat., 95°45′38″ long.). Methods: The dynamic nature of the system under study is modeled in such a way that uncertainty in both the measurements and the temporal process can be accounted for. Both fixed and roving monitoring locations were used in conjunction with a spatio-temporal statistical model to determine optimal locations for the roving monitors over time, based on the reduction of uncertainty in predictions. Results: During the first 3 years of the study, roving monitors were held at fixed locations to allow statistical parameter estimation from which to make predictions. Optimal monitoring locations for the remaining 2 years were selected based on the overall reduction in prediction uncertainty. Conclusions: The dynamic and adaptive vegetation monitoring scheme allowed for the efficient collection of data that will be utilized in many future ecological studies. By optimally placing an additional set of monitoring locations, we were able to utilize information about the system dynamics when informing the data collection process.

20.
Minimum counts are commonly used to estimate population size and trend for wildlife conservation and management; however, the scope of inference based on such data is limited by untestable assumptions regarding the detection process. Alternative approaches, such as distance sampling, occupancy surveys, and repeated counts, can be employed to produce detection-corrected estimates of population parameters. Unfortunately, these approaches can be more complicated and costly to implement, potentially limiting their use. We explored a conceptual framework linking datasets collected at different spatial scales under different survey designs, with the goal of improving inference. Specifically, we link landscape-scale distance sampling surveys with local-scale minimum counts in an integrated modeling framework to estimate mountain goat (Oreamnos americanus) abundance at both the local and regional scale in south-central Alaska, USA, and provide an estimate of detection probability (i.e., sightability) for the minimum counts. Estimated sightability for the minimum count surveys was 0.67 (95% credible interval [CrI] = 0.52–0.83) and abundance for the entire area was 5,600 goats (CV = 9%), both in broad agreement with estimates from previous studies. Abundance estimates at the local scale (i.e., individual minimum count unit) were reasonably precise (CV = 18%), suggesting the integrated approach can increase the amount of information produced at both spatial scales by linking minimum count approaches with more rigorous survey designs. We propose that our integrated approach may be implemented in the context of a modified split-panel monitoring design by altering survey protocols to include frequent minimum counts within local count units and intermittent but more rigorous survey designs with inference to the entire study area or population of interest. Doing so would provide estimates of abundance with appropriate measures of uncertainty at multiple spatial scales, thereby improving inference for population monitoring and management. © 2019 The Wildlife Society.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号