Similar Literature
20 similar documents found.
1.
Occupancy-abundance relationships and sampling scales   (Cited by: 4; self-citations: 0; citations by others: 4)
The area of occupancy of a species and its abundance depend on the spatial scale at which they are measured. However, it is less obvious how the scale of sampling affects their correlation. This study investigated and modeled the effects of sampling unit size and areal extent on the interspecific occupancy-abundance relationships for a tropical tree species assemblage at a local scale and a temperate bird species assemblage at a regional scale. The results showed that both sampling unit size and study extent had profound quantitative effects on the occupancy-abundance relationship, although it remained positive. Several properties of the occupancy-abundance relationship can result from the effects of scale: 1) the linearity of the relationship decreases as sampling unit size increases; 2) for a given abundance, the area of occupancy increases with sampling unit size; and 3) variation in the area of occupancy increases with both sampling unit size and extent, and if the extent is large enough this variation may be sufficient that no occupancy-abundance relationship is observed. Although the occupancy-abundance relationship can be satisfactorily modeled, the parameters depend on the scale used. This suggests that a model derived from one scale cannot be applied to another. In other words, to estimate the rarity or commonness of species using such a model, the estimation must be done strictly at the same sampling scale for all species.
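As a quick illustration of the scale effect described above, the sketch below (hypothetical landscape size, species abundances, and quadrat sizes; not the authors' data or code) computes occupancy at several sampling unit sizes for randomly placed individuals. For a fixed abundance, occupancy rises as the quadrat grows.

```python
# Minimal simulation sketch of how sampling unit size changes occupancy estimates.
# All parameter values (landscape size, abundances, quadrat sizes) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
extent = 100.0                            # side length of the square study area
abundances = [20, 50, 100, 500, 2000]     # hypothetical species abundances
quadrat_sizes = [1.0, 5.0, 10.0]          # side lengths of the sampling units

def occupancy(points, quadrat, extent):
    """Fraction of quadrats of a given size containing at least one individual."""
    n_cells = int(extent // quadrat)
    idx = np.clip(np.floor(points / quadrat).astype(int), 0, n_cells - 1)
    occupied = len({(i, j) for i, j in idx})
    return occupied / n_cells ** 2

for q in quadrat_sizes:
    rows = []
    for n in abundances:
        # random (Poisson-like) placement; clumped placement would lower occupancy further
        pts = rng.uniform(0, extent, size=(n, 2))
        rows.append(f"N={n}: occ={occupancy(pts, q, extent):.3f}")
    print(f"quadrat {q} x {q}:", "; ".join(rows))
```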

2.
Since AMBI was originally published in 2000, it has been used in an increasing number of investigations for monitoring purposes or to analyse impacts on soft-bottom macrobenthic communities. Some guidelines for its correct use were published in 2005; however, one main issue remained unanswered: what are the minimal area and number of replicates necessary to obtain a precise estimate of AMBI? In this study, new methodologies such as bootstrap techniques have been applied to this particular problem. Data were obtained from sampling carried out in 1995, within the framework of the Littoral Water Quality Monitoring and Control Network of the Basque Country (northern Spain). The sampling strategy consisted of 11 intertidal estuarine sampling stations (0.25 m², with six replicates each) and 17 subtidal estuarine and coastal sampling stations (0.125 m², with six replicates each). Two replicates were established as being sufficient, for both intertidal and subtidal sampling stations, to classify 80% of the pseudosamples into the same disturbance level, in terms of AMBI, for 64% of the stations. For the minimal area, it was also determined (for both intertidal and subtidal sampling stations) that 0.25 m² is sufficient to classify 80% of the iterations into the same disturbance level, for 64% of the stations.
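The bootstrap idea can be sketched as follows. This is a simplified stand-in, not the AMBI software or the authors' code: the replicate-level AMBI values for a station are hypothetical, the pseudosample AMBI is taken as the mean of the resampled replicates, and the disturbance-class boundaries are placeholders.

```python
# Sketch: resample k replicates per station and ask how often the pseudosample
# is assigned to the same disturbance class as the full set of replicates.
import numpy as np

rng = np.random.default_rng(1)
BOUNDS = [1.2, 3.3, 5.0, 6.0]                 # assumed class boundaries (placeholder)

def disturbance_class(ambi):
    return int(np.digitize(ambi, BOUNDS))     # 0..4 = increasing disturbance

def agreement(replicates, k, n_boot=1000):
    """Share of bootstrap pseudosamples of size k matching the full-sample class."""
    reference = disturbance_class(np.mean(replicates))
    hits = 0
    for _ in range(n_boot):
        pseudo = rng.choice(replicates, size=k, replace=True)
        hits += disturbance_class(pseudo.mean()) == reference
    return hits / n_boot

# six hypothetical AMBI replicate values for one station
station = np.array([2.8, 3.1, 2.9, 3.4, 3.0, 3.2])
for k in (1, 2, 3, 6):
    print(f"{k} replicate(s): {agreement(station, k):.2f} of pseudosamples agree")
```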

3.
A total of 24 commercial fields of cabbages and Brussels sprouts were sampled in a grid fashion, with 20–25 equally spaced cells and four plants per cell. Using this database of 80–100 plants per field, we conducted computer simulations to compare the treatment decisions that would be made for the major insect pests using published sequential sampling programs and a newly developed variable-intensity sampling program. Additionally, we compared the number of samples required to make the decision. At low thresholds (10–20%) for both Lepidoptera and cabbage aphids, variable-intensity sampling required a smaller sample size and provided more reliable decisions, while at high thresholds (40–50%) sequential sampling provided more reliable decisions. In both procedures, the occurrence of incorrect decisions was minimal. The number of cases in which a decision would not be reached after a 40-plant sample was lower for variable-intensity sampling. Considering the number of samples required to make a correct decision and the greater need for reliable decisions at lower thresholds, variable-intensity sampling was superior to sequential sampling. Additionally, variable-intensity sampling has the advantage of requiring samples to be taken over a greater area of the field and thus increases the probability of detecting localized infestations. Although variable-intensity sampling was not designed to classify pest populations for treatment decisions but rather to achieve sampling precision around the population mean, our present studies indicate that it can also be an effective method to aid in treatment decisions.
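For readers unfamiliar with classification-type sequential sampling, the sketch below shows a generic binomial sequential plan (Wald's SPRT) for a treat/no-treat decision at a presence threshold. It is a textbook procedure with illustrative thresholds and error rates, not the published programs or the variable-intensity plan evaluated in this study.

```python
# Generic binomial sequential sampling plan (Wald's SPRT) for treat / no-treat
# decisions based on the proportion of infested plants. Thresholds p0, p1 and the
# error rates alpha, beta are illustrative.
import math

def sprt_decision(observations, p0=0.10, p1=0.20, alpha=0.1, beta=0.1):
    """Classify infestation as below p0 (no treatment) or above p1 (treat)."""
    lower = math.log(beta / (1 - alpha))
    upper = math.log((1 - beta) / alpha)
    llr = 0.0                           # cumulative log-likelihood ratio
    for n, infested in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if infested else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "no treatment", n
        if llr >= upper:
            return "treat", n
    return "no decision", len(observations)

# hypothetical per-plant presence/absence data from a 40-plant walk
plants = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0] * 4
print(sprt_decision(plants))   # with these data, no decision is reached in 40 plants
```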

4.
Adrian E. Raftery, Le Bao. Biometrics, 2010, 66(4): 1162–1173
Summary: The Joint United Nations Programme on HIV/AIDS (UNAIDS) has decided to use Bayesian melding as the basis for its probabilistic projections of HIV prevalence in countries with generalized epidemics. This combines a mechanistic epidemiological model, prevalence data, and expert opinion. Initially, the posterior distribution was approximated by sampling-importance-resampling, which is simple to implement, easy to interpret, transparent to users, and gave acceptable results for most countries. For some countries, however, this is not computationally efficient because the posterior distribution tends to be concentrated around nonlinear ridges and can also be multimodal. We propose instead incremental mixture importance sampling (IMIS), which iteratively builds up a better importance sampling function. This retains the simplicity and transparency of sampling-importance-resampling, but is much more efficient computationally. It also leads to a simple estimator of the integrated likelihood that is the basis for Bayesian model comparison and model averaging. In simulation experiments and on real data, it outperformed both sampling-importance-resampling and three publicly available generic Markov chain Monte Carlo algorithms for this kind of problem.
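A much-simplified sketch of the IMIS scheme described above: start from a prior sample, then repeatedly add a Gaussian component centred at the current highest-weight point and enlarge the sample from it. The toy likelihood, prior, and all tuning constants are illustrative assumptions; this is not the UNAIDS implementation.

```python
# Simplified incremental mixture importance sampling (IMIS) sketch.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(2)

def log_like(theta):
    # toy likelihood with a curved ridge, standing in for the epidemiological model
    x, y = theta[..., 0], theta[..., 1]
    return -0.5 * ((x / 2.0) ** 2 + (y - x ** 2) ** 2)

d, n0, b, iters = 2, 1000, 100, 20
prior = mvn(mean=np.zeros(d), cov=9.0 * np.eye(d))

def weights(X, comps):
    """Importance weights: (likelihood x prior) / mixture importance density."""
    dens = prior.pdf(X)
    for c in comps:
        dens = dens + c.pdf(X)
    dens = dens / (1 + len(comps))
    w = np.exp(log_like(X)) * prior.pdf(X) / dens
    return w / w.sum()

X, comps = prior.rvs(n0, random_state=rng), []    # stage 1: sample from the prior
for _ in range(iters):
    w = weights(X, comps)
    centre = X[np.argmax(w)]                      # current highest-weight point
    near = X[np.argsort(np.linalg.norm(X - centre, axis=1))[:b]]
    comp = mvn(mean=centre, cov=np.cov(near.T) + 1e-6 * np.eye(d))
    comps.append(comp)                            # enrich the importance mixture
    X = np.vstack([X, comp.rvs(b, random_state=rng)])

w = weights(X, comps)
print("samples:", len(X), " effective sample size: %.1f" % (1.0 / np.sum(w ** 2)))
```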

5.
Species distribution models (SDMs) are now being widely used in ecology for management and conservation purposes across terrestrial, freshwater, and marine realms. The increasing interest in SDMs has drawn the attention of ecologists to spatial models and, in particular, to geostatistical models, which are used to associate observations of species occurrence or abundance with environmental covariates in a finite number of locations in order to predict where (and how much of) a species is likely to be present in unsampled locations. Standard geostatistical methodology assumes that the choice of sampling locations is independent of the values of the variable of interest. However, in natural environments, due to practical limitations related to time and financial constraints, this theoretical assumption is often violated. In fact, data commonly derive from opportunistic sampling (e.g., whale or bird watching), in which observers tend to look for a specific species in areas where they expect to find it. These are examples of what is referred to as preferential sampling, which can lead to biased predictions of the distribution of the species. The aim of this study is to discuss an SDM that addresses this problem and that is more computationally efficient than existing MCMC methods. From a statistical point of view, we interpret the data as a marked point pattern, where the sampling locations form a point pattern and the measurements taken in those locations (i.e., species abundance or occurrence) are the associated marks. Inference and prediction of species distribution is performed using a Bayesian approach, and integrated nested Laplace approximation (INLA) methodology and software are used for model fitting to minimize the computational burden. We show that abundance is highly overestimated at low-abundance locations when preferential sampling effects are not accounted for, in both a simulated example and a practical application using fishery data. This highlights that ecologists should be aware of the potential bias resulting from preferential sampling and account for it in a model when a survey is based on non-randomized and/or non-systematic sampling.
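The bias that motivates this model can be shown with a toy simulation (not the marked-point-pattern/INLA model itself): when sites are picked preferentially where abundance is expected to be high, a naive estimate built from those sites overstates abundance. All surfaces, noise levels, and sample sizes below are illustrative.

```python
# Toy illustration of preferential-sampling bias in abundance estimation.
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 500)
true_log_abundance = np.sin(2 * np.pi * grid) + 0.5     # smooth latent surface
true_abundance = np.exp(true_log_abundance)

n = 60
# preferential design: sampling probability increases with expected abundance
p_pref = true_abundance / true_abundance.sum()
pref_sites = rng.choice(len(grid), size=n, replace=False, p=p_pref)
rand_sites = rng.choice(len(grid), size=n, replace=False)  # randomized design

def observe(idx):
    """Noisy abundance observations at the chosen sites."""
    return true_abundance[idx] * rng.lognormal(0, 0.2, size=len(idx))

print("true mean abundance   :", true_abundance.mean().round(2))
print("random-design estimate:", observe(rand_sites).mean().round(2))
print("preferential estimate :", observe(pref_sites).mean().round(2), "(biased high)")
```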

6.
BACKGROUND: Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied that is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. RESULTS: In this work, we consider the SCFG-based approach in order to analyse how the quality of generated sample sets and the corresponding prediction accuracy change when different degrees of disturbance are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors on the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly, but it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst-case time requirements of such an SCFG-based sampling method without significant accuracy losses. If, on the other hand, the quality of sampled structures can be observed to react strongly to slight disturbances, there is little hope for improving the complexity by heuristic procedures. We hence provide a reliable test for the hypothesis that a heuristic method could be implemented to improve the time scaling of RNA secondary structure prediction in the worst case, without sacrificing much of the accuracy of the results. CONCLUSIONS: Our experiments indicate that absolute errors generally lead to the generation of useless sample sets, whereas relative errors seem to have only a small negative impact on both the predictive accuracy and the overall quality of resulting structure samples. Based on these observations, we present some useful ideas for developing a time-reduced sampling method guaranteeing an acceptable predictive accuracy. We also discuss some inherent drawbacks that arise in the context of approximation. The key results of this paper are crucial for the design of an efficient and competitive heuristic prediction method based on the increasingly accepted and attractive statistical sampling approach. This has indeed been indicated by the construction of prototype algorithms (see [25]).

7.
One hundred consecutive superficial mass lesions in various body sites were sampled by both conventional fine needle aspiration (FNA) and by a fine needle without the application of syringe suction. The latter technique is based on the principle of capillarity and may be termed "fine needle capillary" (FNC) sampling. The two sampling techniques were compared using five objective parameters: (1) the amount of diagnostic cellular material present, (2) the retention of appropriate architecture and cellular arrangement, (3) the degree of cellular degeneration, (4) the cellular trauma and (5) the volume of obscuring background blood and clots. There was no statistically significant difference between the efficacies of the two sampling techniques for any of the parameters studied. FNA sampling was diagnostic in a greater number of cases than was FNC sampling, but this difference was not statistically significant at a level of P = .05. When FNC sampling was diagnostic, it more frequently produced superior-quality material; conventional FNA, although diagnostic in a greater number of cases, mostly produced adequate, rather than superior-quality, material. This trend was not, however, statistically significant at a level of P = .05. These findings differ from those of previous studies (which have shown overall superiority of FNC sampling over conventional FNA sampling) and suggest that the technique of fine needle sampling employed for cytodiagnosis can be left to the personal preference of the operator.

8.
Sensitivity of the relative-rate test to taxonomic sampling   (Cited by: 31; self-citations: 11; citations by others: 20)
Relative-rate tests may be used to compare substitution rates between more than two sequences, which raises two main questions: what influence does the number of sequences have on relative-rate tests, and what is the influence of the sampling strategy as characterized by the phylogenetic relationships between sequences? Using both simulations and analysis of real data from murids (APRT and LCAT nuclear genes), we show that comparing large numbers of species significantly improves the power of the test. This effect is stronger if species are more distantly related. On the other hand, it appears to be less rewarding to increase outgroup sampling than to use the single nearest outgroup sequence. Rates may be compared between paraphyletic ingroups and using paraphyletic outgroups, but unbalanced taxonomic sampling can bias the test. We present a simple phylogenetic weighting scheme which takes taxonomic sampling into account and significantly improves the relative-rate test in cases of unbalanced sampling. The answers are thus: (1) large taxonomic sampling of compared groups improves relative-rate tests, (2) sampling many outgroups does not bring significant improvement, (3) the only constraint on sampling strategy is that the outgroup be valid, and (4) results are more accurate when phylogenetic relationships between the investigated sequences are taken into account. Given current limitations of the maximum-likelihood and nonparametric approaches, the relative-rate test generalized to any number of species with phylogenetic weighting appears to be the most general test available to compare rates between lineages.
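For concreteness, the sketch below implements a simple two-sequence relative-rate test with a single outgroup (Tajima's chi-square version). It is a generic textbook test with hypothetical sequences, not the phylogenetically weighted generalization proposed in the study.

```python
# Minimal relative-rate test: count lineage-specific differences and compare them.
from scipy.stats import chi2

def tajima_relative_rate(seq_a, seq_b, outgroup):
    """Tajima's 1D relative-rate test for two ingroup sequences and one outgroup."""
    n_a = n_b = 0
    for a, b, o in zip(seq_a, seq_b, outgroup):
        if a != b:
            if b == o:          # change unique to lineage A
                n_a += 1
            elif a == o:        # change unique to lineage B
                n_b += 1
    stat = (n_a - n_b) ** 2 / (n_a + n_b) if (n_a + n_b) else 0.0
    return n_a, n_b, stat, chi2.sf(stat, df=1)   # 1 d.f. chi-square p-value

# hypothetical aligned sequences (same length, no gaps)
a = "ACGTACGTACGTACGAACGTTCGA"
b = "ACGTACGTACGTACGTACGTACGT"
o = "ACGTACGTACGTACGTACGTACGT"
print(tajima_relative_rate(a, b, o))
```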

9.
The traditional sampling method for estimating frequency (the number of sub-quadrats containing a basal part of the organisms) is compared, using both computer simulations and direct comparison in the field, to two new methods that use a compound series of variable-sized concentric sub-quadrats. Both the new frequency-score and the new importance-score methods are closer approximations of density than is the standard frequency method, and the estimates produced by both of the new methods are less affected by the choice of sub-quadrat size and the spatial distribution (dispersion) of the organisms (i.e. clumping and regularity). Thus, the two nested-quadrat methods appear to ameliorate the usual frequency limitations associated with sub-quadrat size and organism dispersion, by the use of a range of different sub-quadrat sizes. This is important in community studies, where the component species may show a wide range of densities and dispersions. Both of the new methods are easily employed in the field. The importance-score method involves no more sampling effort than does standard qualitative (presence-absence) sampling, and it can therefore be used to sample a larger quadrat area than would normally be used for frequency sampling. This makes the method much more cost-effective as a means of estimating abundance, and it allows a greater number of the rarer species to be included in the sampling. The frequency-score method is more time-consuming, but it is capable of detecting more subtle community patterns. This means that it is particularly useful for the study of species-poor communities or where small variations in composition need to be detected.

10.
Species richness is a fundamental measurement of community and regional diversity, and it underlies many ecological models and conservation strategies. In spite of its importance, ecologists have not always appreciated the effects of abundance and sampling effort on richness measures and comparisons. We survey a series of common pitfalls in quantifying and comparing taxon richness. These pitfalls can be largely avoided by using accumulation and rarefaction curves, which may be based on either individuals or samples. These taxon sampling curves contain the basic information for valid richness comparisons, including category–subcategory ratios (species-to-genus and species-to-individual ratios). Rarefaction methods – both sample-based and individual-based – allow for meaningful standardization and comparison of datasets. Standardizing data sets by area or sampling effort may produce very different results compared to standardizing by number of individuals collected, and it is not always clear which measure of diversity is more appropriate. Asymptotic richness estimators provide lower-bound estimates for taxon-rich groups such as tropical arthropods, in which observed richness rarely reaches an asymptote, despite intensive sampling. Recent examples of diversity studies of tropical trees, stream invertebrates, and herbaceous plants emphasize the importance of carefully quantifying species richness using taxon sampling curves.
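Individual-based rarefaction, the standardization discussed above, can be computed directly from the classical expectation formula; the sketch below uses hypothetical abundance vectors for two communities.

```python
# Individual-based rarefaction: expected richness in a subsample of n individuals.
from math import comb

def rarefy(abundances, n):
    """Hurlbert's expected richness E[S_n] for a random subsample of n individuals."""
    N = sum(abundances)
    if n > N:
        raise ValueError("subsample cannot exceed the total number of individuals")
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in abundances)

community_a = [50, 30, 10, 5, 3, 1, 1]                 # 7 species, 100 individuals
community_b = [20, 15, 10, 5, 5, 3, 2, 2, 2, 1, 1]     # 11 species, 66 individuals

# compare at a common number of individuals (the smaller total)
n = sum(community_b)
print("A rarefied to", n, "individuals:", round(rarefy(community_a, n), 2))
print("B observed richness         :", len(community_b))
```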

11.
Studies of animal behavior often rely on human observation, which introduces a number of limitations on sampling. Recent developments in automated logging of behaviors make it possible to circumvent some of these problems. Once verified for efficacy and accuracy, these automated systems can be used to determine optimal sampling regimes for behavioral studies. Here, we used a radio-frequency identification (RFID) system to quantify parental effort in a bi-parental songbird species: the tree swallow (Tachycineta bicolor). We found that the accuracy of the RFID monitoring system was similar to that of video-recorded behavioral observations for quantifying parental visits. Using RFID monitoring, we also quantified the optimum duration of sampling periods for male and female parental effort by looking at the relationship between nest visit rates estimated from sampling periods with different durations and the total visit numbers for the day. The optimum sampling duration (the shortest observation time that explained the most variation in total daily visits per unit time) was 1 h for both sexes. These results show that RFID and other automated technologies can be used to quantify behavior when human observation is constrained, and the information from these monitoring technologies can be useful for evaluating the efficacy of human observation methods.
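The duration analysis can be sketched as follows (simulated visit data, not the tree swallow RFID records): for windows of increasing length, regress the within-window visit rate on the full-day total across nests and compare how much variance each window explains per unit observation time.

```python
# Sketch of choosing an optimal sampling duration from continuously logged visits.
import numpy as np

rng = np.random.default_rng(4)
n_nests, day_hours = 30, 14.0                              # length of the active day
daily_rates = rng.gamma(shape=4, scale=3, size=n_nests)    # mean visits per hour

# simulate visit times for each nest as a homogeneous Poisson process
visits = [np.sort(rng.uniform(0, day_hours, rng.poisson(r * day_hours)))
          for r in daily_rates]
daily_totals = np.array([len(v) for v in visits])

for duration in (0.5, 1.0, 2.0, 4.0):
    # visit rate in a window of this duration at the start of the active day
    window_rate = np.array([np.sum(v < duration) / duration for v in visits])
    r2 = np.corrcoef(window_rate, daily_totals)[0, 1] ** 2
    print(f"{duration:3.1f} h window: R^2 = {r2:.2f}, R^2 per hour = {r2 / duration:.2f}")
```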

12.
Hornak V, Simmerling C. Proteins, 2003, 51(4): 577–590
Prediction and refinement of protein loop structures are important and challenging tasks for which no general solution has been found. In addition to the accuracy of scoring functions, the main problems reside in (1) insufficient statistical sampling and (2) crossing energy barriers that impede conformational rearrangements of the loop. We approach these two issues by using "low-barrier molecular dynamics," a combination of energy smoothing techniques. To address statistical sampling, locally enhanced sampling (LES) is used to produce multiple copies of the loop, thus improving statistics and reducing energy barriers. We introduce a novel extension of LES that can improve local sampling even further through hierarchical subdivision of copies. Even though LES reduces energy barriers, it cannot provide for crossing infinite barriers, which can be problematic when substantial rearrangement of residues is necessary. To permit this kind of loop residue repacking, a "soft-core" potential energy function is introduced, so that atomic overlaps are temporarily allowed. We tested this new combined methodology on a loop in the anti-influenza antibody Fab 17/9 (7 residues long) and on another loop in the antiprogesterone antibody DB3 (8 residues). In both cases, starting from random conformations, we were able to locate correct loop structures (including sidechain orientations) with heavy-atom root-mean-square deviation (fit to the nonloop region) of approximately 1.1 Å in Fab 17/9 and approximately 1.8 Å in DB3. We show that the combination of LES and the soft-core potential substantially improves sampling compared to regular molecular dynamics. Moreover, the sampling improvement obtained with this combined approach is significantly better than that provided by either of the two methods alone.

13.
Since the introduction of the Cytobrush for sampling the uterine cervix, some practitioners have ceased taking a concomitant cervical scraping using a spatula. To examine whether Cytobrush sampling alone is adequate for the diagnosis of cervical lesions, the Cytobrush and spatula samples in 444 smears (most with original diagnoses of at least mild dysplasia) were analyzed separately for the presence of diagnostic cells, endocervical cells and squamous cells. Of the 412 smears showing pathologic findings (mild to severe dysplasia or worse), diagnostic cells were present in 400 Cytobrush samples and in 369 spatula samples; the combination of both samples thus gave a 3% gain in correct diagnoses as compared to use of the Cytobrush samples alone. Another 18 smears would have been underdiagnosed based only on the Cytobrush samples. Endocervical cells were present in 95.3% of the Cytobrush samples and 83.8% of the spatula samples; squamous cells were present in 93.9% of the Cytobrush samples and 96.8% of the spatula samples. Analysis confirmed that it is important that the smear should contain both endocervical and squamous cells. A positive relationship between the absence of squamous cells in the Cytobrush sample and the probability of a false-negative assessment was suggested. It thus seems inadvisable to replace the combination sampling method with Cytobrush sampling alone, which may lead to a false-negative diagnosis.

14.
A modification to the pressure probe is described which allows very rapid extraction of sap samples from single higher plant cells. The performance of this rapid-sampling probe was assessed and compared with the unmodified probe for cells of both wheat and Tradescantia. Under some conditions, the unmodified probe operated too slowly to avoid dilution of cell sap during the extraction process. This led to values for apparent sample osmotic pressures that were below the turgor pressures for the same cells. The problem was particularly acute in young wheat leaf epidermal cells, which are small, elongate and have high turgor pressure. These exhibited rapid water influx when their turgor was depressed during the sampling of their contents (the half-time for pressure recovery in wheat cells was less than 1 s, while in Tradescantia cells it was 3–5 s). Dilution during sampling was apparently negligible when the rapid-sampling probe was used. The study was complemented by a simple model of the way cells dilute during sampling. Quantitative predictions of the model were consistent with our observed findings. The model is used to assess the major factors which determine a cell's susceptibility to dilution during sampling.

15.
An increasing number of studies are using landscape genomics to investigate local adaptation in wild and domestic populations. Implementation of this approach requires the sampling phase to consider the complexity of environmental settings and the burden of logistical constraints. These important aspects are often underestimated in the literature dedicated to sampling strategies. In this study, we computed simulated genomic data sets to run against actual environmental data in order to trial landscape genomics experiments under distinct sampling strategies. These strategies differed by design approach (to enhance environmental and/or geographical representativeness at study sites), number of sampling locations and sample sizes. We then evaluated how these elements affected statistical performance (power and false discoveries) under two antithetical demographic scenarios. Our results highlight the importance of selecting an appropriate sample size, which should be modified based on the demographic characteristics of the studied population. For species with limited dispersal, sample sizes above 200 units are generally sufficient to detect most adaptive signals, while in random mating populations this threshold should be increased to 400 units. Furthermore, we describe a design approach that maximizes both environmental and geographical representativeness of sampling sites and show how it systematically outperforms random or regular sampling schemes. Finally, we show that although having more sampling locations (between 40 and 50 sites) increases statistical power and reduces the false discovery rate, similar results can be achieved with a moderate number of sites (20 sites). Overall, this study provides valuable guidelines for optimizing sampling strategies for landscape genomics experiments.

16.
The compilation of all the available taxonomic and distributional information on the species present in a territory frequently generates a biased picture of the distribution of biodiversity due to the uneven distribution of the sampling effort performed. Thus, quality protocol assessments such as those proposed by Hortal et al. (Conservation Biology 21:853–863, 2007) must be done before using this kind of information for basic and applied purposes. The discrimination of localities that can be considered relatively well surveyed from those not surveyed enough is a key first step in this protocol and can be attained by first defining a sampling-effort surrogate and then calculating survey completeness using different estimators. Recently it has been suggested that records from exhaustive databases can be used as a sampling-effort surrogate to recognize probable well-surveyed localities. In this paper, we use an Iberian dung beetle database to identify the 50 × 50 km UTM cells that appear to be reliably inventoried, using both data derived from standardized sampling protocols and database records as a surrogate for sampling effort. Observed and predicted species richness values in the shared cells defined as well surveyed by both methods suggest that the use of database records provides higher species richness values, which are proportionally greater in the richest localities owing to the inclusion of rare species.
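One common way to score completeness for a grid cell is the ratio of observed richness to a nonparametric richness estimate. The sketch below uses Chao1 purely as an example (the study may rely on other estimators) and hypothetical database records.

```python
# Survey completeness for one 50 x 50 km UTM cell: observed richness / Chao1 estimate.
from collections import Counter

def chao1(abundances):
    """Chao1 lower-bound richness estimate from species abundance counts."""
    s_obs = len(abundances)
    f1 = sum(1 for a in abundances if a == 1)   # singletons
    f2 = sum(1 for a in abundances if a == 2)   # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2        # bias-corrected form
    return s_obs + f1 ** 2 / (2 * f2)

# hypothetical database records for one cell: one species name per record
records = ["sp1"] * 12 + ["sp2"] * 7 + ["sp3"] * 3 + ["sp4", "sp5", "sp5", "sp6"]
abund = list(Counter(records).values())
completeness = len(abund) / chao1(abund)
print(f"observed = {len(abund)}, Chao1 = {chao1(abund):.1f}, completeness = {completeness:.0%}")
```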

17.
A continuous-flow sampling system (CFS) for convenient and rapid determination of respiratory gas exchange during steady-state exercise is described. CFS was compared to the classical bag collection system (BCS) by utilizing both methods concurrently during exercise for analysis of 32 one-minute gas samples. The gas collected by BCS was analyzed by chemical absorption. The error in the gas mixing and sampling technique of CFS contributed to the absolute error of the gas analysis but did not adversely affect the reliability. The linear regression analysis on the data suggests that CFS is a relatively accurate and reliable system for use at light and moderate levels of steady-state work. However, it is hypothesized that unsteady-state conditions and heavy exercise, which elicits high ventilation rates, would compromise the accuracy and reliability of CFS. Therefore, it is recommended that the traditional BCS be utilized for determination of maximal oxygen uptake.

18.
A comparison between nonspatula (cotton swab and Cytobrush) cervical sampling methods and spatula (wooden Ayre spatula and plastic extended-tip Szalay Cyto-Spatula) sampling methods was made in 109 cases. Based on the presence of endocervical cells, there were statistically significant qualitative differences between the nonspatula methods as well as between the spatula methods, but not between the Cytobrush and Cyto-Spatula smears or the cotton swab and Ayre spatula smears. In all kinds of inflammatory lesions, the spatula samples were more accurate and diagnostic than the nonspatula ones. In all cases of cervical intraepithelial neoplasia and in most cases of squamous metaplasia, the Cyto-Spatula sample was the most accurate. It is concluded that the Szalay Cyto-Spatula method is superior to the other cervical sampling methods because it provides well-preserved cells from both the endocervix and the ectocervix in one smear. The Cytobrush should be used in conjunction with spatula sampling (combination method) for effective sampling of the cervix. The Cytobrush alone is effective mainly for endocervical sampling while the Ayre spatula alone is effective mainly for ectocervical sampling; the cotton swab is ineffective for both endocervical and ectocervical sampling.

19.
The pandemic amphibian disease chytridiomycosis often exhibits strong seasonality in both prevalence and disease-associated mortality once it becomes endemic. One hypothesis that could explain this temporal pattern is that simple weather-driven pathogen proliferation (population growth) is a major driver of chytridiomycosis disease dynamics. Despite various elaborations of this hypothesis in the literature for explaining amphibian declines (e.g., the chytrid thermal-optimum hypothesis), it has not been formally tested on infection patterns in the wild. In this study we developed a simple process-based model to simulate the growth of the pathogen Batrachochytrium dendrobatidis (Bd) under varying weather conditions to provide an a priori test of a weather-linked pathogen proliferation hypothesis for endemic chytridiomycosis. We found strong support for several predictions of the proliferation hypothesis when applied to our model species, Litoria pearsoniana, sampled across multiple sites and years: the weather-driven simulations of pathogen growth potential (represented as a growth index in the 30 days prior to sampling; GI30) were positively related to both the prevalence and intensity of Bd infections, which were themselves strongly and positively correlated. In addition, a machine-learning classifier achieved ∼72% success in classifying positive qPCR results when utilising just three informative predictors: 1) GI30, 2) frog body size and 3) rain on the day of sampling. Hence, while intrinsic traits of the individuals sampled (species, size, sex) and nuisance sampling variables (rainfall when sampling) influenced infection patterns obtained when sampling via qPCR, our results also strongly suggest that weather-linked pathogen proliferation plays a key role in the infection dynamics of endemic chytridiomycosis in our study system. Predictive applications of the model include surveillance design, outbreak preparedness and response, climate change scenario modelling and the interpretation of historical patterns of amphibian decline.
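A growth index in the spirit of GI30 can be sketched as a sum of daily growth increments from a thermal performance curve over the 30 days before sampling. The temperature response and the simulated weather series below are illustrative assumptions, not the fitted model from the study.

```python
# Sketch of a weather-driven pathogen growth index accumulated over 30 days.
import numpy as np

def daily_growth(temp_c, t_opt=21.0, t_min=4.0, t_max=28.0):
    """Unimodal thermal response: zero outside (t_min, t_max), peaking at t_opt."""
    t = np.asarray(temp_c, dtype=float)
    return np.where((t > t_min) & (t < t_max),
                    np.exp(-((t - t_opt) / 6.0) ** 2), 0.0)

def gi30(daily_temps, sampling_day):
    """Sum of daily growth increments over the 30 days before the sampling date."""
    window = daily_temps[max(0, sampling_day - 30):sampling_day]
    return float(np.sum(daily_growth(window)))

# hypothetical daily mean temperatures for one site across a year
rng = np.random.default_rng(5)
temps = 18 + 8 * np.sin(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 2, 365)
print("GI30 before a cool-season sampling date:", round(gi30(temps, 240), 1))
print("GI30 before a warm-season sampling date:", round(gi30(temps, 60), 1))
```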

20.
Given the recurrent bat-associated disease outbreaks in humans and recent advances in metagenomics sequencing, the microbiota of bats is increasingly being studied. However, obtaining biological samples directly from wild individuals may represent a challenge, and thus, indirect passive sampling (without capturing bats) is sometimes used as an alternative. Currently, it is not known whether the bacterial community assessed using this approach provides an accurate representation of the bat microbiota. This study was designed to compare the use of direct sampling (based on bat capture and handling) and indirect sampling (collection of bat excretions under bat colonies) in assessing bacterial communities in bats. Using high-throughput 16S rRNA sequencing of urine and feces samples from Rousettus aegyptiacus, a cave-dwelling fruit bat species, we found evidence of niche specialization among different excreta samples, independent of the sampling approach. However, sampling approach influenced both the alpha- and beta-diversity of urinary and fecal microbiotas. In particular, increased alpha-diversity and more overlapping composition between urine and feces samples were seen when direct sampling was used, suggesting that cross-contamination may occur when collecting samples directly from bats in hand. In contrast, results from indirect sampling in the cave may be biased by environmental contamination. Our methodological comparison suggested some influence of the sampling approach on the bat-associated microbiota, but both approaches were able to capture differences among excreta samples. Assessment of these techniques opens an avenue to use more indirect sampling, in order to explore microbial community dynamics in bats.
