Similar Literature
20 similar documents retrieved.
1.
Inverse-probability-weighted estimators are the oldest and potentially most commonly used class of procedures for the estimation of causal effects. By adjusting for selection biases via a weighting mechanism, these procedures estimate an effect of interest by constructing a pseudopopulation in which selection biases are eliminated. Despite their ease of use, these estimators require the correct specification of a model for the weighting mechanism, are known to be inefficient, and suffer from the curse of dimensionality. We propose a class of nonparametric inverse-probability-weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso, a nonparametric regression function proven to converge to the true weighting mechanism at a nearly $n^{-1/3}$ rate. We demonstrate that our estimators are asymptotically linear with variance converging to the nonparametric efficiency bound. Unlike doubly robust estimators, our procedures require neither derivation of the efficient influence function nor specification of the conditional outcome model. Our theoretical developments have broad implications for the construction of efficient inverse-probability-weighted estimators in large statistical models and a variety of problem settings. We assess the practical performance of our estimators in simulation studies and demonstrate use of our proposed methodology with data from a large-scale epidemiologic study.
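To illustrate the weighting idea only (this is not the authors' highly-adaptive-lasso estimator), a minimal Horvitz–Thompson-style IPW sketch in Python, assuming a binary treatment `A`, outcome `Y`, and covariates `X`; the logistic-regression propensity model is a plain stand-in:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, A, Y):
    """Horvitz-Thompson IPW estimate of the average treatment effect.

    The propensity (weighting) mechanism is fit with logistic regression here
    purely for illustration; the paper instead estimates it nonparametrically
    by undersmoothing the highly adaptive lasso.
    """
    g = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    g = np.clip(g, 1e-3, 1 - 1e-3)          # guard against extreme weights
    return np.mean(A * Y / g) - np.mean((1 - A) * Y / (1 - g))
```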

2.
Ye He  Ling Zhou  Yingcun Xia  Huazhen Lin 《Biometrics》2023,79(3):2157-2170
The existing methods for subgroup analysis can be roughly divided into two categories: finite mixture models (FMM) and regularization methods with an ℓ1-type penalty. In this paper, by introducing group centers and an ℓ2-type penalty into the loss function, we propose a novel center-augmented regularization (CAR) method; this method can be regarded as a unification of the regularization approach and FMM, and hence is more efficient, more robust, and computationally simpler than existing methods. In particular, its computational complexity is reduced from the $O(n^2)$ of the conventional pairwise-penalty method to only $O(nK)$, where n is the sample size and K is the number of subgroups. The asymptotic normality of CAR is established, and the convergence of the algorithm is proven. CAR is applied to a dataset from a multicenter clinical trial, Buprenorphine in the Treatment of Opiate Dependence; it produces a larger R² and identifies three additional significant variables compared with existing methods.
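To see where the complexity claim comes from, a toy sketch (not the paper's actual CAR objective): a pairwise fusion penalty sums over all n(n−1)/2 pairs of subject-specific effects, whereas a center-augmented ℓ2 penalty only sums each subject's distance to its closest of K candidate centers:

```python
import numpy as np

def pairwise_penalty(mu):
    """O(n^2): sum of squared differences over all pairs of subject effects."""
    n = len(mu)
    return sum((mu[i] - mu[j]) ** 2 for i in range(n) for j in range(i + 1, n))

def center_augmented_penalty(mu, centers):
    """O(nK): each subject effect is shrunk toward its nearest group center."""
    d = (mu[:, None] - centers[None, :]) ** 2      # n x K squared distances
    return d.min(axis=1).sum()

mu = np.random.randn(1000)                 # hypothetical subject-specific effects
centers = np.array([-1.0, 0.0, 1.0])       # K = 3 candidate subgroup centers
print(pairwise_penalty(mu), center_augmented_penalty(mu, centers))
```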

3.
K.O. Ekvall  M. Bottai 《Biometrics》2023,79(3):2286-2297
We propose a unified framework for likelihood-based regression modeling when the response variable has finite support. Our work is motivated by the fact that, in practice, observed data are discrete and bounded. The proposed model includes, as special cases, models previously considered for interval-censored variables with log-concave distributions. The resulting log-likelihood is concave, which we use to establish asymptotic normality of its maximizer as the number of observations n tends to infinity with the number of parameters d fixed, and rates of convergence of L1-regularized estimators when the true parameter vector is sparse and d and n both tend to infinity with $\log(d)/n \rightarrow 0$. We consider an inexact proximal Newton algorithm for computing estimates and give theoretical guarantees for its convergence. The range of possible applications is wide, including but not limited to survival analysis in discrete time, the modeling of outcomes on scored surveys and questionnaires, and, more generally, interval-censored regression. The applicability and usefulness of the proposed methods are illustrated in simulations and data examples.

4.
The power prior has been widely used to discount the amount of information borrowed from historical data in the design and analysis of clinical trials. It is realized by raising the likelihood function of the historical data to a power parameter $\delta \in [0, 1]$, which quantifies the heterogeneity between the historical and the new study. In a fully Bayesian approach, a natural extension is to assign a hyperprior to δ such that the posterior of δ can reflect the degree of similarity between the historical and current data. To comply with the likelihood principle, an extra normalizing factor needs to be calculated, and such a prior is known as the normalized power prior. However, the normalizing factor involves an integral of a prior multiplied by a fractional likelihood and needs to be computed repeatedly over different δ during posterior sampling. This makes its use prohibitive in practice for most elaborate models. This work provides an efficient framework to implement the normalized power prior in clinical studies. It bypasses the aforementioned efforts by sampling from the power prior with $\delta = 0$ and $\delta = 1$ only. Such a posterior sampling procedure can facilitate the use of a random δ with adaptive borrowing capability in general models. The numerical efficiency of the proposed method is illustrated via extensive simulation studies, a toxicological study, and an oncology study.
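For reference, the normalizing factor mentioned above enters the normalized power prior in its standard form (the notation here is generic and not necessarily the paper's), with historical data $D_0$, initial prior $\pi_0$, and hyperprior $\pi(\delta)$:

```latex
\pi(\theta,\delta \mid D_0)\;\propto\;
  \frac{L(\theta \mid D_0)^{\delta}\,\pi_0(\theta)}{C(\delta)}\,\pi(\delta),
\qquad
C(\delta)=\int L(\theta \mid D_0)^{\delta}\,\pi_0(\theta)\,d\theta .
```

It is the repeated evaluation of $C(\delta)$ across the sampled values of δ that the proposed two-run ($\delta=0$ and $\delta=1$) sampling scheme avoids.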

5.
For ordinal outcomes, the average treatment effect is often ill-defined and hard to interpret. Echoing Agresti and Kateri, we argue that the relative treatment effect can be a useful measure, especially for ordinal outcomes, defined as $\gamma = \mathrm{pr}\{ Y_i(1) > Y_i(0) \} - \mathrm{pr}\{ Y_i(1) < Y_i(0) \}$, with $Y_i(1)$ and $Y_i(0)$ being the potential outcomes of unit i under treatment and control, respectively. Given the marginal distributions of the potential outcomes, we derive the sharp bounds on γ, which are identifiable parameters based on the observed data. Agresti and Kateri focused on modeling strategies under the assumption of independent potential outcomes, but we allow for arbitrary dependence.
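A minimal sketch of the quantity itself, computed from the two marginal ordinal distributions under the strong independence assumption of the Agresti–Kateri setting (not the sharp bounds under arbitrary dependence derived in the paper); the example probabilities are hypothetical:

```python
import numpy as np

def gamma_under_independence(p1, p0):
    """Relative treatment effect gamma = P(Y1 > Y0) - P(Y1 < Y0), assuming the
    potential outcomes are independent (illustration only)."""
    p1, p0 = np.asarray(p1), np.asarray(p0)
    joint = np.outer(p1, p0)             # joint[j, k] = P(Y1 = j) * P(Y0 = k)
    greater = np.tril(joint, -1).sum()   # cells with j > k, i.e. Y1 > Y0
    less = np.triu(joint, 1).sum()       # cells with j < k, i.e. Y1 < Y0
    return greater - less

# hypothetical marginals for a 4-level ordinal outcome
print(gamma_under_independence([0.1, 0.2, 0.3, 0.4], [0.3, 0.3, 0.2, 0.2]))
```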

6.
Tropical and subtropical forest biomes are a main hotspot for the global nitrogen (N) cycle. Yet, our understanding of global soil N cycle patterns and drivers, and of their response to N deposition in these biomes, remains elusive. In a meta-analysis of 2426 single and 161 paired observations from 89 published ¹⁵N pool dilution and tracing studies, we found that gross N mineralization (GNM), immobilization of ammonium (I_NH4) and nitrate (I_NO3), and dissimilatory nitrate reduction to ammonium (DNRA) were significantly higher in tropical forests than in subtropical forests. The soil N cycle was conservative in tropical forests, with ratios of gross nitrification (GN) to I_NH4 (GN/I_NH4) and of soil nitrate to ammonium (NO₃⁻/NH₄⁺) less than one, but leaky in subtropical forests, with GN/I_NH4 and NO₃⁻/NH₄⁺ higher than one. Soil NH₄⁺ dynamics were mainly controlled by soil substrate (e.g., total N), whereas climatic factors (e.g., precipitation and/or temperature) were more important in controlling soil NO₃⁻ dynamics. Soil texture also played a role, as GNM and I_NH4 were positively correlated with silt and clay contents, while I_NO3 and DNRA were positively correlated with sand and clay contents, respectively. The soil N cycle was more sensitive to N deposition in tropical forests than in subtropical forests. Nitrogen deposition leads to a leaky N cycle in tropical forests, as evidenced by increases in GN/I_NH4, NO₃⁻/NH₄⁺, and nitrous oxide emissions and decreases in I_NO3 and DNRA, mainly due to decreases in soil microbial biomass and pH. Dominant tree species can also influence the soil N cycle pattern, which shifted from conservative in deciduous forests to leaky in coniferous forests. We provide global evidence that tropical, but not subtropical, forests are characterized by soil N dynamics sustaining N availability, and that N deposition inhibits soil N retention and stimulates N losses in these biomes.

7.
The question of how individual patient data from cohort studies or historical clinical trials can be leveraged for designing more powerful, or smaller yet equally powerful, clinical trials becomes increasingly important in the era of digitalization. Today, traditional statistical analysis approaches may seem questionable to practitioners in light of ubiquitous historical prognostic information. Several methodological developments aim at incorporating historical information in the design and analysis of future clinical trials, most importantly Bayesian information borrowing, propensity score methods, stratification, and covariate adjustment. Adjusting the analysis with respect to a prognostic score, which was obtained from some model applied to historical data, has received renewed interest from a machine learning perspective, and we study the potential of this approach for randomized clinical trials. In an idealized situation of a normal outcome in a two-arm trial with 1:1 allocation, we derive a simple sample size reduction formula as a function of two criteria characterizing the prognostic score: (1) the coefficient of determination R² on historical data and (2) the correlation ρ between the estimated and the true unknown prognostic scores. While maintaining the same power, the original total sample size n planned for the unadjusted analysis reduces to $(1 - R^2\rho^2) \times n$ in an adjusted analysis. Robustness in less ideal situations was assessed empirically. We conclude that there is potential for substantially more powerful or smaller trials, but only when prognostic scores can be accurately estimated.
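A quick worked example of the quoted reduction formula (the numbers are hypothetical): with R² = 0.5 on historical data and ρ = 0.9, the adjusted analysis needs (1 − 0.5 × 0.9²) × n ≈ 0.60 n, i.e., roughly a 40% smaller trial at the same power.

```python
def adjusted_sample_size(n, r2, rho):
    """Total sample size for the prognostic-score-adjusted analysis,
    per the (1 - R^2 * rho^2) * n formula quoted in the abstract."""
    return (1 - r2 * rho**2) * n

print(adjusted_sample_size(n=500, r2=0.5, rho=0.9))   # -> 297.5, ~40% smaller
```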

8.

Aim

Understanding connections between environment and biodiversity is crucial for conservation, identifying causes of ecosystem stress, and predicting population responses to changing environments. Explaining biodiversity requires an understanding of how species richness and environment covary across scales. Here, we identify scales and locations at which biodiversity is generated and correlates with environment.

Location

Full latitudinal range per continent.

Time Period

Present day.

Major Taxa Studied

Terrestrial vertebrates: all mammals, carnivorans, bats, songbirds, hummingbirds, amphibians.

Methods

We describe the use of wavelet power spectra, cross-power and coherence for identifying scale-dependent trends across Earth's surface. Spectra reveal scale- and location-dependent coherence between species richness and topography (E), mean annual precipitation (Pn), temperature (Tm) and annual temperature range (ΔT).

Results

>97% of the species richness of the taxa studied is generated at large scales, that is, wavelengths ≥10³ km, with 30%–69% generated at scales ≥10⁴ km. At these scales, richness tends to be highly coherent and anti-correlated with E and ΔT, and positively correlated with Pn and Tm. Coherence between carnivoran richness and ΔT is low across scales, implying insensitivity to seasonal temperature variations. Conversely, amphibian richness is strongly anti-correlated with ΔT at large scales. At scales ≥10³ km, the examined taxa, except carnivorans, show highest richness within the tropics. Terrestrial plateaux exhibit high coherence between carnivorans and E at scales ≥10³ km, consistent with a contribution of large-scale tectonic processes to biodiversity. Results are similar across different continents and for global latitudinal averages. Spectral admittance permits derivation of rules-of-thumb relating long-wavelength environmental and species richness trends.

Main Conclusions

Sensitivities of mammal, bird and amphibian populations to environment are highly scale dependent. At large scales, carnivoran richness is largely independent of temperature and precipitation, whereas amphibian richness correlates strongly with precipitation and temperature, and anti-correlates with temperature range. These results pave the way for spectral-based calibration of models that predict biodiversity response to climate change scenarios.

9.
Inference of population structure from genetic data plays an important role in population and medical genetics studies. With the advancement and decreasing cost of sequencing technology, increasingly available whole genome sequencing data provide much richer information about the underlying population structure. The traditional method for computing and selecting top principal components (PCs) that capture population structure, originally developed for array-based genotype data, may not perform well on sequencing data for two reasons. First, the number of genetic variants p is much larger than the sample size n in sequencing data, so the sample-to-marker ratio $n/p$ is nearly zero, violating the assumption of the Tracy-Widom test used in that method. Second, the method might not handle the linkage disequilibrium in sequencing data well. To resolve these two practical issues, we propose a new method called ERStruct to determine the number of top informative PCs based on sequencing data. More specifically, we propose to use the ratio of consecutive eigenvalues as a more robust test statistic, and we approximate its null distribution using modern random matrix theory. Both simulation studies and applications to two public data sets from the HapMap 3 and the 1000 Genomes Projects demonstrate the empirical performance of our ERStruct method.
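A minimal sketch of the eigenvalue-ratio idea (illustrative only; ERStruct's actual test calibrates these ratios against a random-matrix-theory null distribution, which this sketch omits): compute the covariance spectrum of a standardized genotype matrix and inspect ratios of consecutive eigenvalues.

```python
import numpy as np

def eigenvalue_ratios(G):
    """Ratios of consecutive eigenvalues of the sample covariance of a
    standardized n x p genotype matrix G; large drops hint at the number
    of informative PCs."""
    G = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)   # standardize markers
    eig = np.linalg.eigvalsh(G @ G.T / G.shape[1])        # n x n form, cheap when n << p
    eig = np.sort(eig)[::-1]
    return eig[1:] / eig[:-1]

G = np.random.binomial(2, 0.3, size=(100, 5000)).astype(float)  # toy genotypes
print(eigenvalue_ratios(G)[:10])
```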

10.
Dateng Li  Jing Cao  Song Zhang 《Biometrics》2020,76(4):1064-1074
Cluster randomized trials (CRTs) are widely used in different areas of medicine and public health. Recently, with the increasing complexity of medical therapies and technological advances in monitoring multiple outcomes, many clinical trials attempt to evaluate multiple co-primary endpoints. In this study, we present a power analysis method for CRTs with K ≥ 2 binary co-primary endpoints. It is developed based on the generalized estimating equation (GEE) approach, and three types of correlations are considered: inter-subject correlation within each endpoint, intra-subject correlation across endpoints, and inter-subject correlation across endpoints. A closed-form joint distribution of the K test statistics is derived, which facilitates the evaluation of power and type I error for arbitrarily constructed hypotheses. We further present a theorem that characterizes the relationship between various correlations and testing power. We assess the performance of the proposed power analysis method based on extensive simulation studies. An application example to a real clinical trial is presented.

11.

Aim

Theoretically, woody biomass turnover time (τ) quantified using outflux (i.e., tree mortality) predicts biomass dynamics better than using influx (i.e., productivity). This study aims at using forest inventory data to empirically test the outflux approach and generate a spatially explicit understanding of woody τ in mature forests. We further compared woody τ estimates with dynamic global vegetation models (DGVMs) and with a data assimilation product of C stocks and fluxes—CARDAMOM.

Location

Continents.

Time Period

Historic from 1951 to 2018.

Major Taxa Studied

Trees and forests.

Methods

We compared the approaches of using outflux versus influx for estimating woody τ and predicting biomass accumulation rates. We investigated abiotic and biotic drivers of spatial woody τ and generated a spatially explicit map of woody τ at a 0.25-degree resolution across continents using machine learning. We further examined whether six DGVMs and CARDAMOM generally captured the observational pattern of woody τ.

Results

Woody τ quantified by the outflux approach predicted biomass accumulation rates better (R² of 0.4–0.5) than the influx approach (R² of 0.1–0.4) across continents. We found large spatial variations of woody τ for mature forests, with the highest values in temperate forests (98.8 ± 2.6 y), followed by boreal forests (73.9 ± 3.6 y) and tropical forests. The map of woody τ extrapolated from plot data showed higher values in the wetter eastern and Pacific coast USA, Africa and the eastern Amazon. Climate (temperature and aridity index) and vegetation structure (tree density and forest age) were the dominant drivers of woody τ across continents. The highest woody τ in temperate forests was not captured by either the DGVMs or CARDAMOM.

Main Conclusions

Our study empirically demonstrates that outflux is preferable to influx for estimating woody τ when predicting biomass accumulation rates. The spatially explicit map of woody τ and the underlying drivers provide valuable information to improve the representation of forest demography and carbon turnover processes in DGVMs.

12.
No tillage (NT) has been proposed as a practice to reduce the adverse effects of tillage on contaminant (e.g., sediment and nutrient) losses to waterways. Nonetheless, previous reports on the impacts of NT on nitrate (NO₃⁻) leaching are inconsistent. A global meta-analysis was conducted to test the hypothesis that the response of NO₃⁻ leaching under NT, relative to tillage, is associated with tillage type (inversion vs. non-inversion tillage), soil properties (e.g., soil organic carbon [SOC]), climate factors (i.e., water input), and management practices (e.g., NT duration and nitrogen fertilizer inputs). Overall, compared with all forms of tillage combined, NT had 4% and 14% greater area-scaled and yield-scaled NO₃⁻ leaching losses, respectively. NO₃⁻ leaching under NT tended to be 7% greater than that of inversion tillage but comparable to non-inversion tillage. Greater NO₃⁻ leaching under NT, compared with inversion tillage, was most evident under short-duration NT (<5 years), where water inputs were low (<2 mm day⁻¹), in medium-texture and low-SOC (<1%) soils, and at both higher (>200 kg ha⁻¹) and lower (0–100 kg ha⁻¹) rates of nitrogen addition. Of these, SOC was the most important factor affecting the risk of NO₃⁻ leaching under NT compared with inversion tillage. Globally, on average, the greater amount of NO₃⁻ leached under NT, compared with inversion tillage, was mainly attributed to corresponding increases in drainage. The percentage of global cropping land with a lower risk of NO₃⁻ leaching under NT, relative to inversion tillage, increased with NT duration from 3 years (31%) to 15 years (54%). This study highlights that the benefits of NT adoption for mitigating NO₃⁻ leaching are most likely in long-term NT cropping systems on high-SOC soils.

13.
In this work, we applied a multi-information-source modeling technique to solve a multi-objective Bayesian optimization problem involving the simultaneous minimization of cost and maximization of growth for serum-free C2C12 cells, using a hypervolume improvement acquisition function. In sequential batches of custom media experiments designed using our Bayesian criteria, collected using multiple assays targeting different cellular growth dynamics, the algorithm learned to identify the trade-off relationship between long-term growth and cost. We were able to identify several media with >100% more growth of C2C12 cells than the control, as well as a medium with 23% more growth at only 62.5% of the cost of the control. These algorithmically generated media also maintained growth far past the study period, indicating that the modeling approach approximates cell growth well from an extremely limited data set.

14.
Digestate, a by-product of biogas production, is widely recognized as a promising renewable nitrogen (N) source with high potential to replace synthetic fertilizers. Yet, inefficient digestate use can lead to pollutant N losses as ammonia (NH3) volatilization, nitrous oxide (N2O) emissions and nitrate (NO₃⁻) leaching. Cover crops (CCs) may reduce some of these losses and recycle the N back into the soil after incorporation, but the effect on the N balance depends on the CC species. In a one-year field study, we tested two application methods (surface broadcasting, BDC; and shallow injection, INJ) of the liquid fraction of separated co-digested cattle slurry (digestate liquid fraction [DLF]), combined with different winter cover crop options (rye, white mustard or bare fallow), as starter fertilizer for maize. Later, side-dressing with urea was required to fulfil maize N requirements. We tested treatment effects on yield, N uptake, N-use efficiency parameters, and N losses in the form of N2O emissions and NO₃⁻ leaching. CC development and biomass production were strongly affected by their contrasting frost tolerance, with spring regrowth for rye, while mustard was winter-killed. After the CCs, injection of DLF increased N2O emissions significantly compared with BDC (emission factor of 2.69% vs. 1.66%). Nitrous oxide emissions accounted for a small part (11%–13%) of the overall yield-scaled N losses (0.46–0.97 kg N Mg grain⁻¹). The adoption of CCs reduced fall NO₃⁻ leaching, which was 51% and 64% lower for mustard and rye, respectively, than under bare soil. In addition, rye reduced NO₃⁻ leaching during spring and summer after termination by promoting N immobilization, leading to 57% lower annual leaching losses compared with mustard. The DLF application method modified N-loss pathways, but not the cumulative yield-scaled N losses. Overall, these insights contribute to an evidence-based design of cropping systems in which nutrients are recycled more efficiently.

15.
The existence of a large-biomass carbon (C) sink in Northern Hemisphere extra-tropical ecosystems (NHee) is well established, but the relative contribution of different potential drivers remains highly uncertain. Here we isolated the historical role of carbon dioxide (CO2) fertilization by integrating estimates from 24 CO2-enrichment experiments, an ensemble of 10 dynamic global vegetation models (DGVMs), and two observation-based biomass datasets. Application of the emergent constraint technique revealed that DGVMs underestimated the historical response of plant biomass to increasing [CO2] in forests ($\beta_{\mathrm{Forest}}^{\mathrm{Mod}}$) but overestimated the response in grasslands ($\beta_{\mathrm{Grass}}^{\mathrm{Mod}}$) since the 1850s. Combining the constrained $\beta_{\mathrm{Forest}}^{\mathrm{Mod}}$ (0.86 ± 0.28 kg C m⁻² [100 ppm]⁻¹) with observed forest biomass changes derived from inventories and satellites, we identified that CO2 fertilization alone accounted for more than half (54 ± 18% and 64 ± 21%, respectively) of the increase in biomass C storage since the 1990s. Our results indicate that CO2 fertilization dominated the forest biomass C sink over the past decades, and provide an essential step toward better understanding the key role of forests in land-based policies for mitigating climate change.

16.
Use of lentiviral vectors (LVs) in clinical cell and gene therapy applications is growing. However, functional product loss during capture chromatography, typically anion exchange (AIEX), remains a significant unresolved challenge for the design of economic processes. Despite AIEX's extensive use, variable performance and generally low recovery are reported. This poor understanding of product loss mechanisms highlights a significant gap in our knowledge of the adsorption of LVs and other types of vector delivery systems. This work demonstrates that HIV-1 LV recovery over quaternary-amine membrane adsorbents is a function of time in the adsorbed state. Kinetic data for product loss in the column-bound state were generated. Fitting a second-order-like rate model, we observed a rapid drop in functional recovery due to increased irreversible binding for vectors encoding two separate transgenes ($t_{Y_{1/2}}$ = 12.7 and 18.7 min). Upon gradient elution, a two-peak elution profile implicating the presence of two distinct binding subpopulations is observed. Characterizing the loss kinetics of these two subpopulations showed a higher rate of vector loss in the weaker binding peak. This work highlights time spent in the adsorbed state as a critical factor impacting LV product loss, and the need for its consideration in LV AIEX process development workflows.

17.
The peak growth of plants in summer is an important indicator of the capacity of terrestrial ecosystem productivity, and ongoing studies have shown its responses to climate warming as represented by the mean temperature. However, the impacts of asymmetrical warming, that is, different rates of change in daytime (Tmax) and nighttime (Tmin) warming, have mostly been ignored. Using measurements from 60 flux sites (674 site-years in total) and satellite observations from two independent satellite platforms (Global Inventory Monitoring and Modeling Studies [1982–2015]; MODIS [2000–2020]) over the Northern Hemisphere (≥30°N), here we show that peak growth, as represented by both flux-based maximum primary productivity and the maximum greenness indices (maximum normalized difference vegetation index and enhanced vegetation index), responded oppositely to daytime and nighttime warming. $T_{\max}^{-}T_{\min}^{+}$ (peak growth showed negative responses to Tmax, but positive responses to Tmin) dominated in most ecosystems and climate types, especially in water-limited ecosystems, while $T_{\max}^{+}T_{\min}^{-}$ (peak growth showed positive responses to Tmax, but negative responses to Tmin) was primarily observed in high-latitude regions. These contrasting responses could be explained by the strong association between asymmetric warming and water conditions, including soil moisture, evapotranspiration/potential evapotranspiration, and the vapor pressure deficit. Our results are therefore important to the understanding of the responses of peak growth to climate change, and consequently to a better representation of asymmetrical warming in future ecosystem models by differentiating the contributions of daytime and nighttime warming.

18.
In an observational study, the treatment received and the outcome exhibited may be associated in the absence of an effect caused by the treatment, even after controlling for observed covariates. Two tactics are common: (i) a test for unmeasured bias may be obtained using a secondary outcome for which the effect is known, and (ii) a sensitivity analysis may explore the magnitude of unmeasured bias that would need to be present to explain the observed association as something other than an effect caused by the treatment. Can such a test for unmeasured bias inform the sensitivity analysis? If the test for bias does not discover evidence of unmeasured bias, then ask: Are conclusions therefore insensitive to larger unmeasured biases? Conversely, if the test for bias does find evidence of bias, then ask: What does that imply about sensitivity to biases? This problem is formulated in a new way as a convex quadratically constrained quadratic program and solved on a large scale using interior point methods by a modern solver. That is, a convex quadratic function of N variables is minimized subject to constraints on linear and convex quadratic functions of these variables. The quadratic function that is minimized is a statistic for the primary outcome that is a function of the unknown treatment assignment probabilities. The quadratic function that constrains this minimization is a statistic for the subsidiary outcome that is also a function of these same unknown treatment assignment probabilities. In effect, the first statistic is minimized over a confidence set for the unknown treatment assignment probabilities supplied by the unaffected outcome. This process avoids the mistake of interpreting the failure to reject a hypothesis as support for the truth of that hypothesis. The method is illustrated by a study of the effects of light daily alcohol consumption on high-density lipoprotein (HDL) cholesterol levels. In this study, the method quickly optimizes a nonlinear function of $N=800$ variables subject to linear and quadratic constraints. In the example, strong evidence of unmeasured bias is found using the subsidiary outcome, but, perhaps surprisingly, this finding makes the primary comparison insensitive to larger biases.
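To illustrate the structure of such an optimization (a generic convex QCQP, not the paper's specific test statistics), a minimal CVXPY sketch; the matrices, vectors, and bounds here are placeholders:

```python
import numpy as np
import cvxpy as cp

# toy dimensions and placeholder positive semidefinite matrices
rng = np.random.default_rng(0)
n = 50
A0, A1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
P = (A0.T @ A0 + A0 @ A0.T) / 2    # quadratic objective (stand-in for the primary-outcome statistic)
Q = (A1.T @ A1 + A1 @ A1.T) / 2    # quadratic constraint (stand-in for the subsidiary-outcome statistic)
c, b = rng.standard_normal(n), 1.0

x = cp.Variable(n)
objective = cp.Minimize(cp.quad_form(x, P) + c @ x)
constraints = [cp.quad_form(x, Q) <= b,        # convex quadratic constraint
               cp.sum(x) == 1, x >= 0]         # linear constraints
cp.Problem(objective, constraints).solve()     # handled by interior-point-style conic solvers
print(x.value[:5])
```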

19.
Coccolith dissolution and post-mortem morphological features are immensely important phenomena that can affect assemblage compositions, complicate taxonomic identification, and provide valuable palaeoenvironmental insights. This study summarizes the effects of pH oscillations on post-mortem coccolith morphologies and on the abundances and compositions of calcareous nannoplankton assemblages in three distinct types of material: (i) Cretaceous chalk, (ii) Miocene marls, and (iii) late Holocene calcareous ooze. Two independent experimental runs within a semi-enclosed system setting were carried out to observe assemblage alterations. One experiment was run in the presence of bacteria while, in contrast, the second inhibited their potential effect on the studied system. The pH was gradually decreased within the range of 8.3–6.4 using the reaction of CO2 with H2O to form weak carbonic acid (H2CO3), thereby affecting [CO₃²⁻]. Further, a subsequent overgrowth study was carried out during spontaneous degassing accompanied by a gradual pH rise. The experiments revealed that the process and intensity of coccolith corrosion and subsequent overgrowth build-ups are influenced by a plethora of factors, such as (i) pH and associated seawater chemistry, (ii) the mineral composition of the sediment, (iii) the presence of coccoliths within a protective substrate (faecal pellets, pores, pits), and (iv) the presence or absence of bacteria. In nannoplankton assemblages with corroded or overgrown coccoliths, the observed relative abundances of taxa were altered from the original compositions. Additionally, extreme pH oscillations may result in enhanced morphological changes that render coccoliths unidentifiable, and may even lead to the absence of coccoliths from the fossil record.

20.
Co-firing residual lignocellulosic biomass with fossil fuels is often used to reduce greenhouse gas (GHG) emissions, especially in processes like cement production where fuel costs are critical and residual biomass can be obtained at low cost. Since plants remove CO2 from the atmosphere, CO2 emissions from biomass combustion are often assumed to have zero global warming potential (GWP_bCO2 = 0) and not to contribute to climate forcing. However, diverting residual biomass to energy use has recently been shown to increase the atmospheric CO2 load when compared to business-as-usual (BAU) practices, resulting in GWP_bCO2 values between 0 and 1. A detailed process model for a natural gas-fired cement plant producing 4200 megagrams of clinker per day was used to calculate the material and energy flows, as well as the lifecycle emissions associated with cement production without and with diverted biomass (supplying 50% of precalciner energy demand) from forestry and landfill sources. Biomass co-firing reduced natural gas demand in the precalciner of the cement plant by 39% relative to the reference scenario (100% natural gas), but the total demands for thermal, electrical, and diesel (transportation) energy increased by at least 14%. Assuming GWP_bCO2 values of zero for biomass combustion, cement's lifecycle GHG intensity changed from the reference (natural gas only) plant by −40, −23, and −89 kg CO2/Mg clinker for diverted biomass from slash burning, forest floor and landfill sources, respectively. However, using the calculated GWP_bCO2 values for diverted biomass from these same fuel sources, the lifecycle GHG intensity changes were −37, +20, and +28 kg CO2/Mg clinker, respectively. The switch from decreasing to increasing cement plant GHG emissions (i.e., the forest floor and landfill feedstock scenarios) highlights the importance of calculating and using the GWP_bCO2 factor when quantifying lifecycle GHG impacts associated with diverting residual biomass to bioenergy use.
