Similar Documents
20 similar documents found (search time: 62 ms)
1.
Ye He, Ling Zhou, Yingcun Xia, Huazhen Lin. Biometrics, 2023, 79(3): 2157–2170
The existing methods for subgroup analysis can be roughly divided into two categories: finite mixture models (FMM) and regularization methods with an $\ell_1$-type penalty. In this paper, by introducing group centers and an $\ell_2$-type penalty into the loss function, we propose a novel center-augmented regularization (CAR) method; this method can be regarded as a unification of the regularization method and FMM and hence exhibits higher efficiency and robustness and simpler computations than the existing methods. In particular, its computational complexity is reduced from the $O(n^2)$ of the conventional pairwise-penalty method to only $O(nK)$, where n is the sample size and K is the number of subgroups. The asymptotic normality of CAR is established, and the convergence of the algorithm is proven. CAR is applied to a dataset from a multicenter clinical trial, Buprenorphine in the Treatment of Opiate Dependence; it produces a larger $R^2$ and identifies three additional significant variables compared with the existing methods.
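The $O(n^2)$ versus $O(nK)$ claim can be made concrete by counting penalty terms. A minimal sketch, not the authors' implementation (function names are illustrative): a pairwise fusion penalty touches every pair of subjects, while a center-augmented penalty touches only subject-center pairs.

```python
import itertools

def pairwise_penalty_terms(n):
    # Conventional pairwise fusion penalty: one term per pair of subjects, O(n^2).
    return sum(1 for _ in itertools.combinations(range(n), 2))

def car_penalty_terms(n, K):
    # Center-augmented penalty: each subject is compared only with the K group
    # centers, so n*K distance evaluations suffice, O(nK).
    return n * K

print(pairwise_penalty_terms(200))   # 19900 terms
print(car_penalty_terms(200, 3))     # 600 terms
```

For n = 200 subjects and K = 3 subgroups the pairwise penalty already needs 19,900 terms against CAR's 600, and the gap widens linearly in n.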

2.
K.O. Ekvall, M. Bottai. Biometrics, 2023, 79(3): 2286–2297
We propose a unified framework for likelihood-based regression modeling when the response variable has finite support. Our work is motivated by the fact that, in practice, observed data are discrete and bounded. The proposed methods assume a model that includes, as special cases, models previously considered for interval-censored variables with log-concave distributions. The resulting log-likelihood is concave, which we use to establish asymptotic normality of its maximizer as the number of observations n tends to infinity with the number of parameters d fixed, and rates of convergence of $L_1$-regularized estimators when the true parameter vector is sparse and d and n both tend to infinity with $\log(d)/n \rightarrow 0$. We consider an inexact proximal Newton algorithm for computing estimates and give theoretical guarantees for its convergence. The range of possible applications is wide, including but not limited to survival analysis in discrete time, the modeling of outcomes on scored surveys and questionnaires, and, more generally, interval-censored regression. The applicability and usefulness of the proposed methods are illustrated in simulations and data examples.

3.
For ordinal outcomes, the average treatment effect is often ill-defined and hard to interpret. Echoing Agresti and Kateri, we argue that the relative treatment effect can be a useful measure, especially for ordinal outcomes. It is defined as $\gamma = \mathrm{pr}\{Y_i(1) > Y_i(0)\} - \mathrm{pr}\{Y_i(1) < Y_i(0)\}$, with $Y_i(1)$ and $Y_i(0)$ being the potential outcomes of unit i under treatment and control, respectively. Given the marginal distributions of the potential outcomes, we derive the sharp bounds on $\gamma$, which are identifiable parameters based on the observed data. Agresti and Kateri focused on modeling strategies under the assumption of independent potential outcomes, but we allow for arbitrary dependence.
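As a toy numerical illustration of the estimand (all draws hypothetical): here γ is computed from simulated joint draws of the two potential outcomes, which in real data are never observed together for the same unit, which is exactly why only sharp bounds on γ are identifiable from the marginals.

```python
import random

def relative_effect(y1, y0):
    # gamma = P(Y(1) > Y(0)) - P(Y(1) < Y(0)), estimated from paired draws.
    n = len(y1)
    gt = sum(a > b for a, b in zip(y1, y0)) / n
    lt = sum(a < b for a, b in zip(y1, y0)) / n
    return gt - lt

random.seed(1)
# Hypothetical ordinal outcome on levels 1..4; treatment nudges it up
# with probability 1/3, capped at the top category.
y0 = [random.choice([1, 2, 3, 4]) for _ in range(10000)]
y1 = [min(4, y + random.choice([0, 0, 1])) for y in y0]
print(round(relative_effect(y1, y0), 2))   # close to the theoretical 0.25
```

Under these draws P(Y(1) > Y(0)) = (1/3)(3/4) = 0.25 and P(Y(1) < Y(0)) = 0, so the estimate hovers near 0.25.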

4.
Pragmatic trials evaluating health care interventions often adopt cluster randomization due to scientific or logistical considerations. Systematic reviews have shown that coprimary endpoints are not uncommon in pragmatic trials but are seldom recognized in sample size or power calculations. While methods for power analysis based on $K$ ($K \ge 2$) binary coprimary endpoints are available for cluster randomized trials (CRTs), to our knowledge, methods for continuous coprimary endpoints are not yet available. Assuming a multivariate linear mixed model (MLMM) that accounts for multiple types of intraclass correlation coefficients among the observations in each cluster, we derive the closed-form joint distribution of the $K$ treatment effect estimators to facilitate sample size and power determination with different types of null hypotheses under equal cluster sizes. We characterize the relationship between the power of each test and different types of correlation parameters. We further relax the equal cluster size assumption and approximate the joint distribution of the $K$ treatment effect estimators through the mean and coefficient of variation of cluster sizes. Our simulation studies with a finite number of clusters indicate that the power predicted by our method agrees well with the empirical power when the parameters in the MLMM are estimated via the expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.

5.
The power prior has been widely used to discount the amount of information borrowed from historical data in the design and analysis of clinical trials. It is realized by raising the likelihood function of the historical data to a power parameter $\delta \in [0, 1]$, which quantifies the heterogeneity between the historical and the new study. In a fully Bayesian approach, a natural extension is to assign a hyperprior to δ such that the posterior of δ can reflect the degree of similarity between the historical and current data. To comply with the likelihood principle, an extra normalizing factor needs to be calculated, and such a prior is known as the normalized power prior. However, the normalizing factor involves an integral of a prior multiplied by a fractional likelihood and needs to be computed repeatedly over different δ during the posterior sampling. This makes its use prohibitive in practice for most elaborate models. This work provides an efficient framework to implement the normalized power prior in clinical studies. It bypasses the aforementioned efforts by sampling from the power prior with $\delta = 0$ and $\delta = 1$ only. Such a posterior sampling procedure can facilitate the use of a random δ with adaptive borrowing capability in general models. The numerical efficiency of the proposed method is illustrated via extensive simulation studies, a toxicological study, and an oncology study.
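In conjugate models the normalizing factor is available in closed form, which makes the mechanics of the normalized power prior easy to see. A minimal beta-binomial sketch (all numbers hypothetical; the paper's contribution is the sampling scheme for models where no closed form exists):

```python
import math

def beta_fn(a, b):
    # Beta function via log-gamma for numerical stability.
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def power_prior_posterior(a, b, y0, n0, y, n, delta):
    # Beta(a, b) prior; historical data (y0 successes in n0) raised to delta
    # stays in the Beta family, so the normalizing factor is simply
    # B(a + delta*y0, b + delta*(n0 - y0)) / B(a, b).
    a_post = a + delta * y0 + y
    b_post = b + delta * (n0 - y0) + (n - y)
    norm = beta_fn(a + delta * y0, b + delta * (n0 - y0)) / beta_fn(a, b)
    return a_post / (a_post + b_post), norm

# Historical: 30/100; current: 12/50. delta = 0 ignores history, delta = 1 pools.
for d in (0.0, 0.5, 1.0):
    mean, _ = power_prior_posterior(1, 1, 30, 100, 12, 50, d)
    print(d, round(mean, 3))   # posterior mean moves toward the historical rate 0.3
```

With δ = 0 the posterior mean is the current-data value 13/52 = 0.25; increasing δ pulls it adaptively toward the historical rate.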

6.
Inverse-probability-weighted estimators are the oldest and potentially most commonly used class of procedures for the estimation of causal effects. By adjusting for selection biases via a weighting mechanism, these procedures estimate an effect of interest by constructing a pseudopopulation in which selection biases are eliminated. Despite their ease of use, these estimators require the correct specification of a model for the weighting mechanism, are known to be inefficient, and suffer from the curse of dimensionality. We propose a class of nonparametric inverse-probability-weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso, a nonparametric regression function proven to converge at a nearly $n^{-1/3}$ rate to the true weighting mechanism. We demonstrate that our estimators are asymptotically linear with variance converging to the nonparametric efficiency bound. Unlike doubly robust estimators, our procedures require neither derivation of the efficient influence function nor specification of the conditional outcome model. Our theoretical developments have broad implications for the construction of efficient inverse-probability-weighted estimators in large statistical models and a variety of problem settings. We assess the practical performance of our estimators in simulation studies and demonstrate use of our proposed methodology with data from a large-scale epidemiologic study.
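A minimal sketch of the basic IPW idea with a known propensity score (the paper's actual contribution, estimating the weights by undersmoothing the highly adaptive lasso, is not reproduced here; all variable names and the simulated model are illustrative):

```python
import random

def ipw_mean(y, a, propensity):
    # Horvitz-Thompson style IPW estimate of E[Y(1)]: weight treated outcomes
    # by the inverse of their treatment probability, rebuilding a
    # pseudopopulation in which selection on the covariate is eliminated.
    return sum(yi * ai / p for yi, ai, p in zip(y, a, propensity)) / len(y)

random.seed(7)
n = 50000                                        # large n keeps Monte Carlo error small
x = [random.random() for _ in range(n)]          # confounder
ps = [0.2 + 0.6 * xi for xi in x]                # true propensity, assumed known here
a = [1 if random.random() < p else 0 for p in ps]
y = [2.0 * xi + (1.0 if ai else 0.0) + random.gauss(0, 1) for xi, ai in zip(x, a)]

naive = sum(yi for yi, ai in zip(y, a) if ai) / sum(a)   # biased: treated units have larger x
print(round(ipw_mean(y, a, ps), 2), round(naive, 2))     # IPW near the true 2.0; naive inflated toward ~2.2
```

Here E[Y(1)] = 2·E[X] + 1 = 2.0; the naive treated-arm mean is biased upward because treatment probability rises with the confounder.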

7.

Aim

Theoretically, woody biomass turnover time (τ) quantified using outflux (i.e., tree mortality) predicts biomass dynamics better than using influx (i.e., productivity). This study aims to use forest inventory data to empirically test the outflux approach and generate a spatially explicit understanding of woody τ in mature forests. We further compared woody τ estimates with dynamic global vegetation models (DGVMs) and with a data assimilation product of C stocks and fluxes, CARDAMOM.

Location

Continents.

Time Period

Historic from 1951 to 2018.

Major Taxa Studied

Trees and forests.

Methods

We compared the approaches of using outflux versus influx for estimating woody τ and predicting biomass accumulation rates. We investigated abiotic and biotic drivers of spatial woody τ and generated a spatially explicit map of woody τ at a 0.25-degree resolution across continents using machine learning. We further examined whether six DGVMs and CARDAMOM generally captured the observational pattern of woody τ.

Results

Woody τ quantified by the outflux approach predicted biomass accumulation rates better ($R^2$ = 0.4–0.5) than the influx approach ($R^2$ = 0.1–0.4) across continents. We found large spatial variations of woody τ for mature forests, with the highest values in temperate forests (98.8 ± 2.6 y) followed by boreal forests (73.9 ± 3.6 y) and tropical forests. The map of woody τ extrapolated from plot data showed higher values in the wetter eastern and Pacific coast regions of the USA, Africa, and the eastern Amazon. Climate (temperature and aridity index) and vegetation structure (tree density and forest age) were the dominant drivers of woody τ across continents. The highest woody τ, in temperate forests, was not captured by either the DGVMs or CARDAMOM.

Main Conclusions

Our study empirically demonstrated that outflux is preferable to influx for estimating woody τ when predicting biomass accumulation rates. The spatially explicit map of woody τ and the underlying drivers provide valuable information to improve the representation of forest demography and carbon turnover processes in DGVMs.

8.
The summer peak growth of plants is an important indicator of terrestrial ecosystem productivity capacity, and ongoing studies have shown its response to climate warming as represented by the mean temperature. However, the impacts of asymmetrical warming, that is, different rates of change in daytime (Tmax) and nighttime (Tmin) temperatures, have mostly been ignored. Using measurements from 60 flux sites (674 site-years in total) and satellite observations from two independent satellite platforms (Global Inventory Monitoring and Modeling Studies [1982–2015]; MODIS [2000–2020]) over the Northern Hemisphere (≥30°N), here we show that peak growth, as represented by both flux-based maximum primary productivity and the maximum greenness indices (maximum normalized difference vegetation index and enhanced vegetation index), responded oppositely to daytime and nighttime warming. The $T_{\max}^{-}T_{\min}^{+}$ pattern (peak growth responded negatively to Tmax but positively to Tmin) dominated in most ecosystems and climate types, especially in water-limited ecosystems, while $T_{\max}^{+}T_{\min}^{-}$ (peak growth responded positively to Tmax but negatively to Tmin) was primarily observed in high-latitude regions. These contrasting responses could be explained by the strong association between asymmetric warming and water conditions, including soil moisture, evapotranspiration/potential evapotranspiration, and the vapor pressure deficit. Our results are therefore important to the understanding of the responses of peak growth to climate change and, consequently, to a better representation of asymmetrical warming in future ecosystem models by differentiating the contributions of daytime and nighttime warming.

9.
Mutant dynamics in fragmented populations have been studied extensively in evolutionary biology. Yet, open questions remain, both experimentally and theoretically. Some of the fundamental properties predicted by models still need to be addressed experimentally. We contribute to this by using a combination of experiments and theory to investigate the role of migration in mutant distribution. In the case of neutral mutants, while the mean frequency of mutants is not influenced by migration, the probability distribution is. To address this empirically, we performed in vitro experiments, where mixtures of GFP-labelled (“mutant”) and non-labelled (“wild-type”) murine cells were grown in wells (demes), and migration was mimicked via cell transfer from well to well. In the presence of migration, we observed a change in the skewness of the distribution of mutant frequencies in the wells, consistent with previous and our own model predictions. In the presence of de novo mutant production, we used modelling to investigate the level at which disadvantageous mutants are predicted to exist, which has implications for the adaptive potential of the population in the case of an environmental change. In panmictic populations, disadvantageous mutants can persist around a steady state determined by the rate of mutant production and the selective disadvantage (selection-mutation balance). In a fragmented system that consists of demes connected by migration, a steady-state persistence of disadvantageous mutants is also observed, which, however, is fundamentally different from the mutation-selection balance and is characterized by higher mutant levels. The increase in mutant frequencies above the selection-mutation balance can be maintained in small ($N < N_c$) demes as long as the migration rate is sufficiently small. The migration rate above which the mutants approach the selection-mutation balance decays exponentially with $N/N_c$. The observed increase in mutant numbers is not explained by the change in the effective population size. Implications for evolutionary processes in diseases are discussed, where the pre-existence of disadvantageous drug-resistant mutant cells or pathogens drives the response of the disease to treatments.
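The panmictic selection-mutation balance referenced above can be sketched deterministically: at the fixed point the mutant frequency is approximately μ/s. This is a simplified haploid recursion under assumed parameter values, not the paper's stochastic deme model.

```python
def mutant_freq_steady_state(mu, s, generations=5000):
    # Deterministic mutation-selection dynamics in a well-mixed (panmictic)
    # population: selection against the mutant (fitness 1 - s), then
    # wild-type -> mutant mutation at rate mu each generation.
    f = 0.0
    for _ in range(generations):
        f_sel = f * (1 - s) / (1 - s * f)   # frequency after selection
        f = f_sel + mu * (1 - f_sel)        # frequency after new mutations
    return f

mu, s = 1e-5, 0.01
print(mutant_freq_steady_state(mu, s), mu / s)   # both close to 1e-3
```

The fragmented-population result in the abstract is that, with small demes and weak migration, mutants persist *above* this μ/s level, which a well-mixed model like the one above cannot produce.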

10.
Genome-scale metabolic network models (GSMMs) with enzyme constraints greatly improve on general metabolic models. The turnover number ($k_{\mathrm{cat}}$) of an enzyme is used as a parameter to limit its reaction when extending a GSMM; the turnover number therefore plays a crucial role in the prediction accuracy of cell metabolism. In this work, we proposed an enzyme-constrained GSMM parameter optimization method. First, sensitivity analysis of the parameters was carried out to select the parameters with the greatest influence on predicting the specific growth rate. Then, a differential evolution (DE) algorithm with an adaptive mutation strategy was adopted to optimize the parameters. This algorithm can dynamically select among five different mutation strategies. Finally, the specific growth rate prediction, flux variability, and phase plane of the optimized model were analyzed to further evaluate the model. The enzyme-constrained GSMM of Saccharomyces cerevisiae, ecYeast8.3.4, was optimized. Results of the sensitivity analysis showed that the optimization variables can be divided into three groups based on sensitivity: most sensitive (149 $k_{\mathrm{cat}}$), highly sensitive (1759 $k_{\mathrm{cat}}$), and nonsensitive (2502 $k_{\mathrm{cat}}$). Six optimization strategies were developed based on the results of the sensitivity analysis. The results showed that DE with an adaptive mutation strategy can indeed improve the model by optimizing the highly sensitive parameters. Retaining all parameters while optimizing the highly sensitive ones is the recommended optimization strategy.
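For intuition about the optimizer, here is a minimal DE/rand/1/bin sketch on a toy objective. It uses a single fixed mutation strategy; the paper's adaptive switching among five strategies and the GSMM growth-rate objective are not reproduced, and all parameter values are illustrative.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=300, seed=0):
    # Classic DE/rand/1/bin: mutate with a scaled difference of two random
    # members, binomially crossover with the target, keep the better of the two.
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(d)          # ensure at least one mutated coordinate
            trial = []
            for j in range(d):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
print(f_best)   # near 0
```

An adaptive variant, as described in the abstract, would additionally track each strategy's recent success rate and sample the mutation rule accordingly.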

11.
No tillage (NT) has been proposed as a practice to reduce the adverse effects of tillage on contaminant (e.g., sediment and nutrient) losses to waterways. Nonetheless, previous reports on impacts of NT on nitrate ($\mathrm{NO}_3^-$) leaching are inconsistent. A global meta-analysis was conducted to test the hypothesis that the response of $\mathrm{NO}_3^-$ leaching under NT, relative to tillage, is associated with tillage type (inversion vs non-inversion tillage), soil properties (e.g., soil organic carbon [SOC]), climate factors (i.e., water input), and management practices (e.g., NT duration and nitrogen fertilizer inputs). Overall, compared with all forms of tillage combined, NT had 4% and 14% greater area-scaled and yield-scaled $\mathrm{NO}_3^-$ leaching losses, respectively. The $\mathrm{NO}_3^-$ leaching under NT tended to be 7% greater than that of inversion tillage but comparable to non-inversion tillage. Greater $\mathrm{NO}_3^-$ leaching under NT, compared with inversion tillage, was most evident under short-duration NT (<5 years), where water inputs were low (<2 mm day−1), in medium texture and low SOC (<1%) soils, and at both higher (>200 kg ha−1) and lower (0–100 kg ha−1) rates of nitrogen addition. Of these, SOC was the most important factor affecting the risk of $\mathrm{NO}_3^-$ leaching under NT compared with inversion tillage. Globally, on average, the greater amount of $\mathrm{NO}_3^-$ leached under NT, compared with inversion tillage, was mainly attributed to corresponding increases in drainage. The percentage of global cropping land with lower risk of $\mathrm{NO}_3^-$ leaching under NT, relative to inversion tillage, increased with NT duration from 3 years (31%) to 15 years (54%). This study highlighted that the benefits of NT adoption for mitigating $\mathrm{NO}_3^-$ leaching are most likely in long-term NT cropping systems on high-SOC soils.

12.
Climate change leads to increasing temperature and more extreme hot and drought events. Ecosystem capability to cope with climate warming depends on the pace at which vegetation adjusts to temperature change. How environmental stresses impair such a vegetation pace has not been carefully investigated. Here we show that dryness substantially dampens vegetation pace in warm regions to adjust the optimal temperature of gross primary production (GPP) ($T^{\mathrm{GPP}}_{\mathrm{opt}}$) in response to change in temperature over space and time. $T^{\mathrm{GPP}}_{\mathrm{opt}}$ spatially converges to an increase of 1.01°C (95% CI: 0.97, 1.05) per 1°C increase in the yearly maximum temperature (Tmax) across humid or cold sites worldwide (37°S–79°N) but only 0.59°C (95% CI: 0.46, 0.74) per 1°C increase in Tmax across dry and warm sites. $T^{\mathrm{GPP}}_{\mathrm{opt}}$ temporally changes by 0.81°C (95% CI: 0.75, 0.87) per 1°C interannual variation in Tmax at humid or cold sites and 0.42°C (95% CI: 0.17, 0.66) at dry and warm sites. Regardless of the water limitation, the maximum GPP (GPPmax) increases similarly, by 0.23 g C m−2 day−1 per 1°C increase in $T^{\mathrm{GPP}}_{\mathrm{opt}}$, in both humid and dry areas. Our results indicate that future climate warming will likely stimulate vegetation productivity more substantially in humid than in water-limited regions.

13.
In an observational study, the treatment received and the outcome exhibited may be associated in the absence of an effect caused by the treatment, even after controlling for observed covariates. Two tactics are common: (i) a test for unmeasured bias may be obtained using a secondary outcome for which the effect is known, and (ii) a sensitivity analysis may explore the magnitude of unmeasured bias that would need to be present to explain the observed association as something other than an effect caused by the treatment. Can such a test for unmeasured bias inform the sensitivity analysis? If the test for bias does not discover evidence of unmeasured bias, then ask: Are conclusions therefore insensitive to larger unmeasured biases? Conversely, if the test for bias does find evidence of bias, then ask: What does that imply about sensitivity to biases? This problem is formulated in a new way as a convex quadratically constrained quadratic program and solved on a large scale using interior point methods by a modern solver. That is, a convex quadratic function of N variables is minimized subject to constraints on linear and convex quadratic functions of these variables. The quadratic function that is minimized is a statistic for the primary outcome that is a function of the unknown treatment assignment probabilities. The quadratic function that constrains this minimization is a statistic for the subsidiary outcome that is also a function of these same unknown treatment assignment probabilities. In effect, the first statistic is minimized over a confidence set for the unknown treatment assignment probabilities supplied by the unaffected outcome. This process avoids the mistake of interpreting the failure to reject a hypothesis as support for the truth of that hypothesis. The method is illustrated by a study of the effects of light daily alcohol consumption on high-density lipoprotein (HDL) cholesterol levels. In this study, the method quickly optimizes a nonlinear function of $N = 800$ variables subject to linear and quadratic constraints. In the example, strong evidence of unmeasured bias is found using the subsidiary outcome, but, perhaps surprisingly, this finding makes the primary comparison insensitive to larger biases.

14.
This paper is motivated by studying differential brain activities to multiple experimental condition presentations in intracranial electroencephalography (iEEG) experiments. Contrasting effects of experimental conditions are often zero in most regions and nonzero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest focused not only on estimating nonparametric functions but also on recovering the function supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform sparsity of functions to its sparse vector counterpart of increasing dimension, and propose a fast nonconvex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the $L_2$ norm and resemble a phase transition phenomenon. For support estimation, we derive a convergence rate under the $L_\infty$ norm that leads to a selection consistency property under δ-sparsity, and obtain a result under strict sparsity using a simple sufficient regularity condition. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulations and an application to a novel iEEG dataset to study multisensory integration.

15.
Digestate, a by-product of biogas production, is widely recognized as a promising renewable nitrogen (N) source with high potential to replace synthetic fertilizers. Yet, inefficient digestate use can lead to pollutant N losses as ammonia (NH3) volatilization, nitrous oxide (N2O) emissions, and nitrate ($\mathrm{NO}_3^-$) leaching. Cover crops (CCs) may reduce some of these losses and recycle the N back into the soil after incorporation, but the effect on the N balance depends on the CC species. In a one-year field study, we tested two application methods (i.e., surface broadcasting, BDC; and shallow injection, INJ) of the liquid fraction of separated co-digested cattle slurry (digestate liquid fraction [DLF]), combined with different winter cover crop options (i.e., rye, white mustard, or bare fallow), as starter fertilizer for maize. Later, side-dressing with urea was required to fulfil maize N requirements. We tested treatment effects on yield, N uptake, N-use efficiency parameters, and N losses in the form of N2O emissions and $\mathrm{NO}_3^-$ leaching. CC development and biomass production were strongly affected by their contrasting frost tolerance, with spring regrowth for rye, while mustard was winter-killed. After the CCs, injection of DLF increased N2O emissions significantly compared with BDC (emission factor of 2.69% vs. 1.66%). Nitrous oxide emissions accounted for a small part (11%–13%) of the overall yield-scaled N losses (0.46–0.97 kg N Mg grain−1). The adoption of CCs reduced fall $\mathrm{NO}_3^-$ leaching, which was 51% and 64% lower for mustard and rye, respectively, than under bare soil. In addition, rye reduced $\mathrm{NO}_3^-$ leaching during spring and summer after termination by promoting N immobilization, thus leading to 57% lower annual leaching losses compared with mustard. The DLF application method modified N-loss pathways, but not the cumulative yield-scaled N losses. Overall, these insights contribute to inform an evidence-based design of cropping systems in which nutrients are recycled more efficiently.

16.
Tropical and subtropical forest biomes are a main hotspot for the global nitrogen (N) cycle. Yet, our understanding of global soil N cycle patterns and drivers, and their response to N deposition in these biomes, remains elusive. By a meta-analysis of 2426 single and 161 paired observations from 89 published ¹⁵N pool dilution and tracing studies, we found that gross N mineralization (GNM), immobilization of ammonium ($I_{\mathrm{NH}_4}$) and nitrate ($I_{\mathrm{NO}_3}$), and dissimilatory nitrate reduction to ammonium (DNRA) were significantly higher in tropical forests than in subtropical forests. The soil N cycle was conservative in tropical forests, with ratios of gross nitrification (GN) to $I_{\mathrm{NH}_4}$ (GN/$I_{\mathrm{NH}_4}$) and of soil nitrate to ammonium ($\mathrm{NO}_3^-/\mathrm{NH}_4^+$) less than one, but was leaky in subtropical forests, with GN/$I_{\mathrm{NH}_4}$ and $\mathrm{NO}_3^-/\mathrm{NH}_4^+$ higher than one. Soil $\mathrm{NH}_4^+$ dynamics were mainly controlled by soil substrate (e.g., total N), but climatic factors (e.g., precipitation and/or temperature) were more important in controlling soil $\mathrm{NO}_3^-$ dynamics. Soil texture played a role, as GNM and $I_{\mathrm{NH}_4}$ were positively correlated with silt and clay contents, while $I_{\mathrm{NO}_3}$ and DNRA were positively correlated with sand and clay contents, respectively. The soil N cycle was more sensitive to N deposition in tropical forests than in subtropical forests. Nitrogen deposition leads to a leaky N cycle in tropical forests, as evidenced by the increase in GN/$I_{\mathrm{NH}_4}$, $\mathrm{NO}_3^-/\mathrm{NH}_4^+$, and nitrous oxide emissions and the decrease in $I_{\mathrm{NO}_3}$ and DNRA, mainly due to the decrease in soil microbial biomass and pH. Dominant tree species can also influence the soil N cycle pattern, which changed from conservative in deciduous forests to leaky in coniferous forests. We provide global evidence that tropical, but not subtropical, forests are characterized by soil N dynamics sustaining N availability, and that N deposition inhibits soil N retention and stimulates N losses in these biomes.

17.
The existence of a large-biomass carbon (C) sink in Northern Hemisphere extra-tropical ecosystems (NHee) is well-established, but the relative contribution of different potential drivers remains highly uncertain. Here we isolated the historical role of carbon dioxide (CO2) fertilization by integrating estimates from 24 CO2-enrichment experiments, an ensemble of 10 dynamic global vegetation models (DGVMs), and two observation-based biomass datasets. Application of the emergent constraint technique revealed that DGVMs underestimated the historical response of plant biomass to increasing [CO2] in forests ($\beta_{\mathrm{Forest}}^{\mathrm{Mod}}$) but overestimated the response in grasslands ($\beta_{\mathrm{Grass}}^{\mathrm{Mod}}$) since the 1850s. Combining the constrained $\beta_{\mathrm{Forest}}^{\mathrm{Mod}}$ (0.86 ± 0.28 kg C m−2 [100 ppm]−1) with observed forest biomass changes derived from inventories and satellites, we identified that CO2 fertilization alone accounted for more than half (54 ± 18% and 64 ± 21%, respectively) of the increase in biomass C storage since the 1990s. Our results indicate that CO2 fertilization dominated the forest biomass C sink over the past decades, and provide an essential step toward better understanding the key role of forests in land-based policies for mitigating climate change.

18.
The question of how individual patient data from cohort studies or historical clinical trials can be leveraged for designing more powerful, or smaller yet equally powerful, clinical trials becomes increasingly important in the era of digitalization. Today, traditional statistical analysis approaches may seem questionable to practitioners in light of ubiquitous historical prognostic information. Several methodological developments aim at incorporating historical information in the design and analysis of future clinical trials, most importantly Bayesian information borrowing, propensity score methods, stratification, and covariate adjustment. Adjusting the analysis with respect to a prognostic score, obtained from some model applied to historical data, has received renewed interest from a machine learning perspective, and we study the potential of this approach for randomized clinical trials. In an idealized situation of a normal outcome in a two-arm trial with 1:1 allocation, we derive a simple sample size reduction formula as a function of two criteria characterizing the prognostic score: (1) the coefficient of determination $R^2$ on historical data and (2) the correlation ρ between the estimated and the true unknown prognostic scores. While maintaining the same power, the original total sample size n planned for the unadjusted analysis reduces to $(1 - R^2\rho^2) \times n$ in an adjusted analysis. Robustness in less ideal situations was assessed empirically. We conclude that there is potential for substantially more powerful or smaller trials, but only when prognostic scores can be accurately estimated.
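The reduction formula is easy to apply directly; a small sketch with hypothetical numbers:

```python
def adjusted_sample_size(n, r2, rho):
    # Total sample size needed when the analysis adjusts for a prognostic
    # score with coefficient of determination r2 on historical data and
    # correlation rho between estimated and true scores: (1 - R^2 * rho^2) * n.
    return (1 - r2 * rho ** 2) * n

# A perfectly estimated score (rho = 1) with R^2 = 0.5 halves the trial;
# a noisier score (rho = 0.5) recovers only a quarter of that saving.
print(adjusted_sample_size(1000, 0.5, 1.0))   # 500.0
print(adjusted_sample_size(1000, 0.5, 0.5))   # 875.0
```

The ρ² factor is why the abstract's conclusion hinges on accurate score estimation: halving ρ quarters the effective variance reduction.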

19.

Aim

Understanding connections between environment and biodiversity is crucial for conservation, identifying causes of ecosystem stress, and predicting population responses to changing environments. Explaining biodiversity requires an understanding of how species richness and environment covary across scales. Here, we identify scales and locations at which biodiversity is generated and correlates with environment.

Location

Full latitudinal range per continent.

Time Period

Present day.

Major Taxa Studied

Terrestrial vertebrates: all mammals, carnivorans, bats, songbirds, hummingbirds, amphibians.

Methods

We describe the use of wavelet power spectra, cross-power and coherence for identifying scale-dependent trends across Earth's surface. Spectra reveal scale- and location-dependent coherence between species richness and topography (E), mean annual precipitation (Pn), temperature (Tm) and annual temperature range (ΔT).

Results

>97% of the species richness of the taxa studied is generated at large scales, that is, wavelengths of 10³ km, with 30%–69% generated at scales of 10⁴ km. At these scales, richness tends to be highly coherent and anti-correlated with E and ΔT, and positively correlated with Pn and Tm. Coherence between carnivoran richness and ΔT is low across scales, implying insensitivity to seasonal temperature variations. Conversely, amphibian richness is strongly anti-correlated with ΔT at large scales. At scales of 10³ km, the examined taxa, except carnivorans, show highest richness within the tropics. Terrestrial plateaux exhibit high coherence between carnivorans and E at scales of 10³ km, consistent with the contribution of large-scale tectonic processes to biodiversity. Results are similar across different continents and for global latitudinal averages. Spectral admittance permits derivation of rules-of-thumb relating long-wavelength environmental and species richness trends.

Main Conclusions

Sensitivities of mammal, bird and amphibian populations to environment are highly scale dependent. At large scales, carnivoran richness is largely independent of temperature and precipitation, whereas amphibian richness correlates strongly with precipitation and temperature, and anti-correlates with temperature range. These results pave the way for spectral-based calibration of models that predict biodiversity response to climate change scenarios.

20.
Co-firing residual lignocellulosic biomass with fossil fuels is often used to reduce greenhouse gas (GHG) emissions, especially in processes like cement production where fuel costs are critical and residual biomass can be obtained at a low cost. Since plants remove CO2 from the atmosphere, CO2 emissions from biomass combustion are often assumed to have zero global warming potential ($\mathrm{GWP}_{\mathrm{bCO}_2} = 0$) and not to contribute to climate forcing. However, diverting residual biomass to energy use has recently been shown to increase the atmospheric CO2 load when compared to business-as-usual (BAU) practices, resulting in $\mathrm{GWP}_{\mathrm{bCO}_2}$ values between 0 and 1. A detailed process model for a natural gas-fired cement plant producing 4200 megagrams of clinker per day was used to calculate the material and energy flows, as well as the lifecycle emissions associated with cement production without and with diverted biomass (supplying 50% of precalciner energy demand) from forestry and landfill sources. Biomass co-firing reduced natural gas demand in the precalciner of the cement plant by 39% relative to the reference scenario (100% natural gas), but the total demands for thermal, electrical, and diesel (transportation) energy increased by at least 14%. Assuming $\mathrm{GWP}_{\mathrm{bCO}_2}$ values of zero for biomass combustion, cement's lifecycle GHG intensity changed from that of the reference (natural gas only) plant by −40, −23, and −89 kg CO2/Mg clinker for diverted biomass from slash burning, forest floor, and landfill sources, respectively. However, using the calculated $\mathrm{GWP}_{\mathrm{bCO}_2}$ values for diverted biomass from these same fuel sources, the lifecycle GHG intensity changes were −37, +20, and +28 kg CO2/Mg clinker, respectively. The switch from decreasing to increasing cement plant GHG emissions (i.e., the forest floor and landfill feedstock scenarios) highlights the importance of calculating and using the $\mathrm{GWP}_{\mathrm{bCO}_2}$ factor when quantifying lifecycle GHG impacts associated with diverting residual biomass to bioenergy use.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号