Similar Literature
1.
For ordinal outcomes, the average treatment effect is often ill-defined and hard to interpret. Echoing Agresti and Kateri, we argue that the relative treatment effect can be a useful measure, especially for ordinal outcomes. It is defined as $\gamma = \mathrm{pr}\{Y_i(1) > Y_i(0)\} - \mathrm{pr}\{Y_i(1) < Y_i(0)\}$, with $Y_i(1)$ and $Y_i(0)$ being the potential outcomes of unit $i$ under treatment and control, respectively. Given the marginal distributions of the potential outcomes, we derive the sharp bounds on $\gamma$, which are identifiable parameters based on the observed data. Agresti and Kateri focused on modeling strategies under the assumption of independent potential outcomes, but we allow for arbitrary dependence.
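As a toy illustration of the estimand (not of the paper's sharp bounds), the sketch below computes γ from a fully specified joint distribution of the two potential outcomes. The marginals and the independence assumption are hypothetical.

```python
import numpy as np

def relative_effect(joint):
    """gamma = P(Y(1) > Y(0)) - P(Y(1) < Y(0)), where
    joint[i, j] = P(Y(1) = level i, Y(0) = level j)."""
    joint = np.asarray(joint, dtype=float)
    i, j = np.indices(joint.shape)
    return joint[i > j].sum() - joint[i < j].sum()

# Hypothetical marginals over 3 ordinal levels; independent potential
# outcomes, as in Agresti and Kateri's modeling setting.
p1 = np.array([0.2, 0.3, 0.5])  # distribution of Y(1)
p0 = np.array([0.5, 0.3, 0.2])  # distribution of Y(0)
gamma_indep = relative_effect(np.outer(p1, p0))  # joint = product of marginals
```

Under arbitrary dependence with these same marginals, γ can range over an interval; the paper's contribution is the closed form of that interval's endpoints.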

2.
K.O. Ekvall & M. Bottai, Biometrics 2023, 79(3):2286–2297
We propose a unified framework for likelihood-based regression modeling when the response variable has finite support. Our work is motivated by the fact that, in practice, observed data are discrete and bounded. The proposed methods assume a model that includes, as special cases, models previously considered for interval-censored variables with log-concave distributions. The resulting log-likelihood is concave, which we use to establish asymptotic normality of its maximizer as the number of observations $n$ tends to infinity with the number of parameters $d$ fixed, and rates of convergence of $L_1$-regularized estimators when the true parameter vector is sparse and $d$ and $n$ both tend to infinity with $\log(d)/n \rightarrow 0$. We consider an inexact proximal Newton algorithm for computing estimates and give theoretical guarantees for its convergence. The range of possible applications is wide, including but not limited to survival analysis in discrete time, the modeling of outcomes on scored surveys and questionnaires, and, more generally, interval-censored regression. The applicability and usefulness of the proposed methods are illustrated in simulations and data examples.

3.
Tropical and subtropical forest biomes are a main hotspot for the global nitrogen (N) cycle. Yet, our understanding of global soil N cycle patterns and drivers and their response to N deposition in these biomes remains elusive. By a meta-analysis of 2426 single and 161 paired observations from 89 published 15N pool dilution and tracing studies, we found that gross N mineralization (GNM), immobilization of ammonium (I_NH4) and nitrate (I_NO3), and dissimilatory nitrate reduction to ammonium (DNRA) were significantly higher in tropical forests than in subtropical forests. The soil N cycle was conservative in tropical forests, with ratios of gross nitrification (GN) to I_NH4 (GN/I_NH4) and of soil nitrate to ammonium (NO3−/NH4+) less than one, but was leaky in subtropical forests, with GN/I_NH4 and NO3−/NH4+ higher than one. Soil NH4+ dynamics were mainly controlled by soil substrate (e.g., total N), but climatic factors (e.g., precipitation and/or temperature) were more important in controlling soil NO3− dynamics. Soil texture played a role, as GNM and I_NH4 were positively correlated with silt and clay contents, while I_NO3 and DNRA were positively correlated with sand and clay contents, respectively. The soil N cycle was more sensitive to N deposition in tropical forests than in subtropical forests. Nitrogen deposition leads to a leaky N cycle in tropical forests, as evidenced by the increase in GN/I_NH4, NO3−/NH4+, and nitrous oxide emissions and the decrease in I_NO3 and DNRA, mainly due to the decrease in soil microbial biomass and pH. Dominant tree species can also influence the soil N cycle pattern, which changed from conservative in deciduous forests to leaky in coniferous forests. We provide global evidence that tropical, but not subtropical, forests are characterized by soil N dynamics sustaining N availability and that N deposition inhibits soil N retention and stimulates N losses in these biomes.

4.
Co-firing residual lignocellulosic biomass with fossil fuels is often used to reduce greenhouse gas (GHG) emissions, especially in processes like cement production where fuel costs are critical and residual biomass can be obtained at a low cost. Since plants remove CO2 from the atmosphere, CO2 emissions from biomass combustion are often assumed to have zero global warming potential (GWP_bCO2 = 0) and not to contribute to climate forcing. However, diverting residual biomass to energy use has recently been shown to increase the atmospheric CO2 load when compared to business-as-usual (BAU) practices, resulting in GWP_bCO2 values between 0 and 1. A detailed process model for a natural gas-fired cement plant producing 4200 megagrams of clinker per day was used to calculate the material and energy flows, as well as the lifecycle emissions associated with cement production without and with diverted biomass (supplying 50% of precalciner energy demand) from forestry and landfill sources. Biomass co-firing reduced natural gas demand in the precalciner of the cement plant by 39% relative to the reference scenario (100% natural gas), but the total demands for thermal, electrical, and diesel (transportation) energy increased by at least 14%. Assuming a GWP_bCO2 of zero for biomass combustion, cement's lifecycle GHG intensity changed from the reference (natural gas only) plant by −40, −23, and −89 kg CO2/Mg clinker for diverted biomass from slash burning, forest floor, and landfill sources, respectively. However, using the calculated GWP_bCO2 values for diverted biomass from these same fuel sources, the changes in lifecycle GHG intensity were −37, +20, and +28 kg CO2/Mg clinker, respectively. The switch from decreasing to increasing cement plant GHG emissions (i.e., the forest floor and landfill feedstock scenarios) highlights the importance of calculating and using the GWP_bCO2 factor when quantifying lifecycle GHG impacts associated with diverting residual biomass to bioenergy use.

5.
Genome-scale metabolic network models (GSMMs) with enzyme constraints greatly improve on general metabolic models. The turnover number ($k_{\mathrm{cat}}$) of each enzyme is used as a parameter to limit its reaction when extending a GSMM, so turnover numbers play a crucial role in the prediction accuracy of cell metabolism. In this work, we propose a parameter optimization method for enzyme-constrained GSMMs. First, a sensitivity analysis of the parameters was carried out to select those with the greatest influence on the predicted specific growth rate. Then, a differential evolution (DE) algorithm with an adaptive mutation strategy, which dynamically selects among five different mutation strategies, was adopted to optimize the parameters. Finally, the specific growth rate prediction, flux variability, and phase planes of the optimized model were analyzed to further evaluate it. The enzyme-constrained GSMM of Saccharomyces cerevisiae, ecYeast8.3.4, was optimized. The sensitivity analysis showed that the optimization variables can be divided into three groups: most sensitive (149 $k_{\mathrm{cat}}$), highly sensitive (1759 $k_{\mathrm{cat}}$), and nonsensitive (2502 $k_{\mathrm{cat}}$). Six optimization strategies were developed based on these results, which showed that DE with an adaptive mutation strategy can indeed improve the model by optimizing the highly sensitive parameters. Retaining all parameters while optimizing the highly sensitive ones is the recommended strategy.
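The paper's adaptive-mutation DE is not reproduced here; as a hedged sketch of the general idea, SciPy's `differential_evolution` (with dithered mutation as a simple stand-in for strategy adaptation) can fit a few hypothetical kcat-like parameters so that a toy growth-rate surrogate matches an observed value. The surrogate model and all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

observed_mu = 0.4  # hypothetical measured specific growth rate (1/h)

def predicted_mu(kcat):
    # toy stand-in for the enzyme-constrained GSMM prediction step
    return 0.1 * np.log1p(kcat).sum()

def objective(kcat):
    # squared error between predicted and observed growth rate
    return (predicted_mu(kcat) - observed_mu) ** 2

# three "highly sensitive" kcat parameters, each bounded away from zero
bounds = [(1e-3, 100.0)] * 3
result = differential_evolution(objective, bounds,
                                mutation=(0.5, 1.0),  # dithered mutation
                                seed=0, tol=1e-10)
```

In the actual method, the objective would compare the enzyme-constrained model's predicted specific growth rate against measurements, restricted to the sensitive parameter group identified by the sensitivity analysis.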

6.
Ye He, Ling Zhou, Yingcun Xia & Huazhen Lin, Biometrics 2023, 79(3):2157–2170
The existing methods for subgroup analysis can be roughly divided into two categories: finite mixture models (FMM) and regularization methods with an $\ell_1$-type penalty. In this paper, by introducing group centers and an $\ell_2$-type penalty into the loss function, we propose a novel center-augmented regularization (CAR) method; this method can be regarded as a unification of the regularization method and FMM and hence exhibits higher efficiency and robustness and simpler computation than the existing methods. In particular, its computational complexity is reduced from the $O(n^2)$ of the conventional pairwise-penalty method to only $O(nK)$, where $n$ is the sample size and $K$ is the number of subgroups. The asymptotic normality of CAR is established, and the convergence of the algorithm is proven. CAR is applied to a dataset from a multicenter clinical trial, Buprenorphine in the Treatment of Opiate Dependence; it produces a larger $R^2$ and identifies three additional significant variables compared to the existing methods.
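To make the $O(n^2)$ versus $O(nK)$ contrast concrete, the sketch below compares a conventional pairwise fusion penalty (one term per pair of subjects) with a center-augmented penalty that measures each subject-level effect only against $K$ candidate centers. This is an illustrative reading of the penalty structure, not the authors' exact loss function.

```python
import numpy as np

def pairwise_penalty(theta):
    # conventional fusion penalty: one term per pair, O(n^2) terms
    n = len(theta)
    return sum(abs(theta[i] - theta[j])
               for i in range(n) for j in range(i + 1, n))

def car_penalty(theta, centers):
    # center-augmented l2-type penalty: each theta_i is pulled toward
    # its nearest center, so only O(n*K) distances are evaluated
    d2 = (theta[:, None] - centers[None, :]) ** 2
    return d2.min(axis=1).sum()

theta = np.array([0.0, 0.1, 2.0])      # subject-specific effects
centers = np.array([0.0, 2.0])         # K = 2 group centers
pen_pair = pairwise_penalty(theta)     # n(n-1)/2 = 3 pairwise terms
pen_car = car_penalty(theta, centers)  # n*K = 6 distances
```

In the full method the centers themselves are optimized jointly with the effects, which is what lets the penalty recover the subgroup structure.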

7.
Inference of population structure from genetic data plays an important role in population and medical genetics studies. With the advancement and decreasing cost of sequencing technology, increasingly available whole genome sequencing data provide much richer information about the underlying population structure. Traditional methods for computing and selecting the top principal components (PCs) that capture population structure, originally developed for array-based genotype data, may not perform well on sequencing data for two reasons. First, the number of genetic variants $p$ is much larger than the sample size $n$ in sequencing data, so that the sample-to-marker ratio $n/p$ is nearly zero, violating the assumption of the Tracy-Widom test used in those methods. Second, such methods might not handle the linkage disequilibrium in sequencing data well. To resolve these two practical issues, we propose a new method called ERStruct to determine the number of top informative PCs based on sequencing data. More specifically, we propose the ratio of consecutive eigenvalues as a more robust test statistic, and then we approximate its null distribution using modern random matrix theory. Both simulation studies and applications to two public data sets from the HapMap 3 and the 1000 Genomes Projects demonstrate the empirical performance of our ERStruct method.
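A minimal sketch of the consecutive-eigenvalue-ratio idea (not the ERStruct null approximation): with one planted structure direction, the first ratio λ2/λ1 drops far below the near-one ratios inside the noise bulk. The data-generating numbers are invented.

```python
import numpy as np

def eigen_ratio_stats(G):
    """Ratios of consecutive eigenvalues (descending) of G G^T / p for
    an n x p standardized matrix; the n x n form is cheap when n << p,
    as with sequencing data."""
    G = (G - G.mean(axis=0)) / G.std(axis=0)
    vals = np.linalg.eigvalsh(G @ G.T / G.shape[1])[::-1]  # descending
    return vals[1:] / vals[:-1]

rng = np.random.default_rng(0)
n, p = 60, 2000
G = rng.normal(size=(n, p))  # structureless genotype surrogate
G[:30] += 1.5                # plant one population-structure direction
ratios = eigen_ratio_stats(G)
# ratios[0] (bulk eigenvalue over structure eigenvalue) is small,
# while ratios deep in the noise bulk stay close to one
```

ERStruct's contribution is a principled threshold for these ratios via random matrix theory; here the gap is only shown visually through the statistic itself.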

8.
Inverse-probability-weighted estimators are the oldest and potentially most commonly used class of procedures for the estimation of causal effects. By adjusting for selection biases via a weighting mechanism, these procedures estimate an effect of interest by constructing a pseudopopulation in which selection biases are eliminated. Despite their ease of use, these estimators require the correct specification of a model for the weighting mechanism, are known to be inefficient, and suffer from the curse of dimensionality. We propose a class of nonparametric inverse-probability-weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso, a nonparametric regression function proven to converge to the true weighting mechanism at a nearly $n^{-1/3}$ rate. We demonstrate that our estimators are asymptotically linear with variance converging to the nonparametric efficiency bound. Unlike doubly robust estimators, our procedures require neither derivation of the efficient influence function nor specification of the conditional outcome model. Our theoretical developments have broad implications for the construction of efficient inverse-probability-weighted estimators in large statistical models and a variety of problem settings. We assess the practical performance of our estimators in simulation studies and demonstrate the use of our proposed methodology with data from a large-scale epidemiologic study.
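The basic Horvitz-Thompson-style IPW contrast that this class of estimators builds on can be sketched as follows. For clarity the sketch plugs in the true propensity score rather than an undersmoothed highly-adaptive-lasso estimate, and the simulated data-generating process is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                  # measured confounder
prop = 1.0 / (1.0 + np.exp(-x))         # true propensity score P(A=1|X)
a = rng.binomial(1, prop)               # treatment assignment
y = 2.0 * a + x + rng.normal(size=n)    # outcome; true effect = 2

# IPW average treatment effect: weighting by 1/prop (treated) and
# 1/(1-prop) (controls) creates a pseudopopulation free of confounding
ate_ipw = np.mean(a * y / prop) - np.mean((1 - a) * y / (1 - prop))
```

In the paper's procedure the weights come from a nonparametric estimate of `prop`, and undersmoothing that estimate is what delivers efficiency without an outcome model.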

9.
A control theory perspective on the determination of optimal dynamic treatment regimes is considered. The aim is to adapt statistical methodology developed for medical and other biostatistical applications to incorporate powerful control techniques designed for engineering and other technological problems. Data tend to be sparse and noisy in the biostatistical setting, where interest has centered on statistical inference for treatment effects. In engineering fields, experimental data can be more easily obtained and reproduced, and interest is more often in the performance and stability of proposed controllers than in modeling and inference per se. We propose that modeling and estimation should be based on standard statistical techniques but that the subsequent treatment policy should be obtained from robust control. To bring focus, we concentrate on A-learning methodology as developed in the biostatistical literature and $H_\infty$-synthesis from control theory. Simulations and two applications demonstrate the robustness of the $H_\infty$ strategy compared to standard A-learning in the presence of model misspecification or measurement error.

10.
Mutant dynamics in fragmented populations have been studied extensively in evolutionary biology. Yet open questions remain, both experimentally and theoretically, and some of the fundamental properties predicted by models still need to be addressed experimentally. We contribute to this by using a combination of experiments and theory to investigate the role of migration in mutant distribution. For neutral mutants, while the mean frequency of mutants is not influenced by migration, the probability distribution is. To address this empirically, we performed in vitro experiments in which mixtures of GFP-labelled ("mutant") and non-labelled ("wild-type") murine cells were grown in wells (demes), and migration was mimicked via cell transfer from well to well. In the presence of migration, we observed a change in the skewness of the distribution of mutant frequencies across wells, consistent with previous model predictions and our own. In the presence of de novo mutant production, we used modelling to investigate the level at which disadvantageous mutants are predicted to persist, which has implications for the adaptive potential of the population in case of an environmental change. In panmictic populations, disadvantageous mutants can persist around a steady state determined by the rate of mutant production and the selective disadvantage (selection-mutation balance). In a fragmented system consisting of demes connected by migration, a steady-state persistence of disadvantageous mutants is also observed, which, however, is fundamentally different from the mutation-selection balance and is characterized by higher mutant levels. The increase in mutant frequencies above the selection-mutation balance can be maintained in small ($N < N_c$) demes as long as the migration rate is sufficiently small; the migration rate above which the mutants approach the selection-mutation balance decays exponentially with $N/N_c$. The observed increase in mutant numbers is not explained by the change in the effective population size. Implications for evolutionary processes in diseases are discussed, where the pre-existence of disadvantageous drug-resistant mutant cells or pathogens drives the response of the disease to treatment.

11.
Pragmatic trials evaluating health care interventions often adopt cluster randomization due to scientific or logistical considerations. Systematic reviews have shown that coprimary endpoints are not uncommon in pragmatic trials but are seldom recognized in sample size or power calculations. While methods for power analysis based on $K$ ($K \ge 2$) binary coprimary endpoints are available for cluster randomized trials (CRTs), to our knowledge methods for continuous coprimary endpoints are not yet available. Assuming a multivariate linear mixed model (MLMM) that accounts for multiple types of intraclass correlation coefficients among the observations in each cluster, we derive the closed-form joint distribution of the $K$ treatment effect estimators to facilitate sample size and power determination with different types of null hypotheses under equal cluster sizes. We characterize the relationship between the power of each test and the different types of correlation parameters. We further relax the equal cluster size assumption and approximate the joint distribution of the $K$ treatment effect estimators through the mean and coefficient of variation of the cluster sizes. Our simulation studies with a finite number of clusters indicate that the power predicted by our method agrees well with the empirical power when the parameters in the MLMM are estimated via the expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.
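For orientation, the single-endpoint building block that the paper generalizes can be sketched with the standard design-effect power formula for one continuous endpoint in a two-arm CRT. This is not the joint $K$-endpoint test of the abstract, and all numbers below are hypothetical.

```python
import math
from scipy.stats import norm

def crt_power(delta, sigma, icc, m, clusters_per_arm, alpha=0.05):
    """Approximate power for one continuous endpoint in a two-arm CRT
    with equal cluster size m, using the variance inflation (design
    effect) factor 1 + (m - 1) * icc."""
    deff = 1.0 + (m - 1) * icc
    se = sigma * math.sqrt(2.0 * deff / (m * clusters_per_arm))
    z = abs(delta) / se
    return norm.cdf(z - norm.ppf(1 - alpha / 2))

# hypothetical design: effect 0.3 SD, ICC 0.05, 20 per cluster,
# 15 clusters per arm
pw = crt_power(delta=0.3, sigma=1.0, icc=0.05, m=20, clusters_per_arm=15)
```

The paper's joint distribution of the $K$ estimators replaces the single normal quantile comparison here with a multivariate calculation over several correlated test statistics.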

12.

Aim

Theoretically, woody biomass turnover time (τ) quantified using outflux (i.e., tree mortality) predicts biomass dynamics better than τ quantified using influx (i.e., productivity). This study aims to use forest inventory data to empirically test the outflux approach and generate a spatially explicit understanding of woody τ in mature forests. We further compared woody τ estimates with dynamic global vegetation models (DGVMs) and with a data assimilation product of C stocks and fluxes, CARDAMOM.

Location

Continents.

Time Period

Historic from 1951 to 2018.

Major Taxa Studied

Trees and forests.

Methods

We compared the outflux and influx approaches for estimating woody τ and predicting biomass accumulation rates. We investigated abiotic and biotic drivers of spatial variation in woody τ and generated a spatially explicit map of woody τ at 0.25-degree resolution across continents using machine learning. We further examined whether six DGVMs and CARDAMOM captured the observational pattern of woody τ.

Results

Woody τ quantified by the outflux approach predicted biomass accumulation rates better (R2 = 0.4–0.5) than the influx approach (R2 = 0.1–0.4) across continents. We found large spatial variation in woody τ for mature forests, with the highest values in temperate forests (98.8 ± 2.6 y), followed by boreal forests (73.9 ± 3.6 y) and tropical forests. The map of woody τ extrapolated from plot data showed higher values in the wetter eastern and Pacific coast USA, Africa, and the eastern Amazon. Climate (temperature and aridity index) and vegetation structure (tree density and forest age) were the dominant drivers of woody τ across continents. The high woody τ in temperate forests was captured by neither the DGVMs nor CARDAMOM.

Main Conclusions

Our study empirically demonstrates that outflux is preferable to influx for estimating woody τ when predicting biomass accumulation rates. The spatially explicit map of woody τ and its underlying drivers provide valuable information for improving the representation of forest demography and carbon turnover processes in DGVMs.

13.
The existence of a large-biomass carbon (C) sink in Northern Hemisphere extra-tropical ecosystems (NHee) is well established, but the relative contribution of different potential drivers remains highly uncertain. Here we isolated the historical role of carbon dioxide (CO2) fertilization by integrating estimates from 24 CO2-enrichment experiments, an ensemble of 10 dynamic global vegetation models (DGVMs), and two observation-based biomass datasets. Application of the emergent constraint technique revealed that DGVMs underestimated the historical response of plant biomass to increasing [CO2] in forests ($\beta_{\mathrm{Forest}}^{\mathrm{Mod}}$) but overestimated the response in grasslands ($\beta_{\mathrm{Grass}}^{\mathrm{Mod}}$) since the 1850s. Combining the constrained $\beta_{\mathrm{Forest}}^{\mathrm{Mod}}$ (0.86 ± 0.28 kg C m−2 [100 ppm]−1) with observed forest biomass changes derived from inventories and satellites, we identified that CO2 fertilization alone accounted for more than half (54 ± 18% and 64 ± 21%, respectively) of the increase in biomass C storage since the 1990s. Our results indicate that CO2 fertilization dominated the forest biomass C sink over the past decades and provide an essential step toward better understanding the key role of forests in land-based policies for mitigating climate change.

14.
Researchers often use a two-step process to analyze multivariate data. First, dimensionality is reduced using a technique such as principal component analysis, followed by a group comparison using a t-test or analysis of variance. Although this practice is often discouraged, the statistical properties of this procedure are not well understood, starting with the hypothesis being tested. We suggest that this approach might be considering two distinct hypotheses, one of which is a global test of no differences in the mean vectors, and the other being a focused test of a specific linear combination where the coefficients have been estimated from the data. We study the asymptotic properties of the two-sample t-statistic for these two scenarios, assuming a nonsparse setting. We show that the size of the global test agrees with the presumed level but that the test has poor power. In contrast, the size of the focused test can be arbitrarily distorted with certain mean and covariance structures. A simple method is provided to correct the size of the focused test. Data analyses and simulations are used to illustrate the results. Recommendations on the use of this two-step method and the related use of principal components for prediction are provided.

15.
Climate change leads to increasing temperature and more extreme hot and drought events. An ecosystem's capability to cope with climate warming depends on the pace at which its vegetation adjusts to temperature change. How environmental stresses impair this pace has not been carefully investigated. Here we show that dryness substantially dampens the pace at which vegetation in warm regions adjusts the optimal temperature of gross primary production (GPP) ($T_{\mathrm{opt}}^{\mathrm{GPP}}$) in response to changes in temperature over space and time. Spatially, $T_{\mathrm{opt}}^{\mathrm{GPP}}$ increases by 1.01°C (95% CI: 0.97, 1.05) per 1°C increase in the yearly maximum temperature (Tmax) across humid or cold sites worldwide (37°S–79°N) but by only 0.59°C (95% CI: 0.46, 0.74) per 1°C increase in Tmax across dry and warm sites. Temporally, $T_{\mathrm{opt}}^{\mathrm{GPP}}$ changes by 0.81°C (95% CI: 0.75, 0.87) per 1°C interannual variation in Tmax at humid or cold sites and by 0.42°C (95% CI: 0.17, 0.66) at dry and warm sites. Regardless of water limitation, the maximum GPP (GPPmax) increases similarly, by 0.23 g C m−2 day−1 per 1°C increase in $T_{\mathrm{opt}}^{\mathrm{GPP}}$, in humid and dry areas alike. Our results indicate that future climate warming will likely stimulate vegetation productivity more substantially in humid than in water-limited regions.

16.
Use of lentiviral vectors (LVs) in clinical cell and gene therapy applications is growing. However, functional product loss during capture chromatography, typically anion exchange (AIEX), remains a significant unresolved challenge for the design of economical processes. Despite AIEX's extensive use, variable performance and generally low recovery are reported. This poor understanding of product loss mechanisms highlights a significant gap in our knowledge of the adsorption of LVs and other vector delivery systems. This work demonstrates that HIV-1 LV recovery over quaternary-amine membrane adsorbents is a function of time in the adsorbed state. Kinetic data for product loss in the column-bound state were generated. Fitting a second-order-like rate model, we observed a rapid drop in functional recovery due to increased irreversible binding for vectors encoding two separate transgenes ($t_{Y_{1/2}}$ = 12.7 and 18.7 min). Upon gradient elution, a two-peak elution profile implicating the presence of two distinct binding subpopulations is observed. Characterizing the loss kinetics of these two subpopulations showed a higher rate of vector loss in the weaker-binding peak. This work highlights time spent in the adsorbed state as a critical factor in LV product loss and the need to consider it in LV AIEX process development workflows.

17.
Linda M. Haines, Biometrics 2020, 76(2):540–548
Multinomial N-mixture models are commonly used to fit data from a removal sampling protocol. If the mixing distribution is negative binomial, the distribution of the counts does not appear to have been identified, and practitioners approximate the requisite likelihood by placing an upper bound on the embedded infinite sum. In this paper, the distribution which underpins the multinomial N-mixture model with a negative binomial mixing distribution is shown to belong to the broad class of multivariate negative binomial distributions. Specifically, the likelihood can be expressed in closed form as the product of conditional and marginal likelihoods, and the information matrix is shown to be block diagonal. As a consequence, the nature of the maximum likelihood estimates of the unknown parameters and their attendant standard errors can be examined, and tests of the hypothesis of the Poisson against the negative binomial mixing distribution can be formulated. In addition, appropriate multinomial N-mixture models for data sets which include zero site totals can also be constructed. Two illustrative examples are provided.
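For context, the removal-protocol cell structure underlying these models is standard (this sketch does not reproduce the paper's closed-form negative binomial result): with a constant per-occasion capture probability p, the multinomial cell probabilities over J removal occasions are π_j = p(1 − p)^(j−1), with the remaining mass corresponding to never-captured individuals.

```python
import numpy as np

def removal_cell_probs(p, J):
    """Multinomial cell probabilities for a J-occasion removal protocol:
    pi_j = p * (1 - p)**(j - 1); the remainder (1 - p)**J is the
    probability an individual is never captured."""
    j = np.arange(1, J + 1)
    pi = p * (1.0 - p) ** (j - 1)
    return pi, 1.0 - pi.sum()

# e.g. capture probability 0.4 over 3 removal occasions
pi, p_never = removal_cell_probs(0.4, 3)
```

The N-mixture layer then puts a distribution (Poisson or, in this paper, negative binomial) on the unknown site abundance that feeds these cells.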

18.
The power prior has been widely used to discount the amount of information borrowed from historical data in the design and analysis of clinical trials. It is realized by raising the likelihood function of the historical data to a power parameter $\delta \in [0, 1]$, which quantifies the heterogeneity between the historical and the new study. In a fully Bayesian approach, a natural extension is to assign a hyperprior to $\delta$ such that the posterior of $\delta$ can reflect the degree of similarity between the historical and current data. To comply with the likelihood principle, an extra normalizing factor needs to be calculated, and such a prior is known as the normalized power prior. However, the normalizing factor involves an integral of a prior multiplied by a fractional likelihood and needs to be computed repeatedly over different $\delta$ during the posterior sampling, which makes its use prohibitive in practice for most elaborate models. This work provides an efficient framework to implement the normalized power prior in clinical studies. It bypasses the aforementioned efforts by sampling from the power prior with $\delta = 0$ and $\delta = 1$ only. Such a posterior sampling procedure can facilitate the use of a random $\delta$ with adaptive borrowing capability in general models. The numerical efficiency of the proposed method is illustrated via extensive simulation studies, a toxicological study, and an oncology study.
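In the conjugate beta-binomial special case the normalizing factor is available in closed form, which makes the mechanics easy to see (the paper's contribution targets general models where this integral is intractable). All trial numbers below are hypothetical.

```python
from scipy.special import betaln

def npp_log_norm(delta, y0, n0, a=1.0, b=1.0):
    """log of the normalizing factor C(delta) = integral of
    L(theta; y0, n0)**delta * Beta(a, b) prior, for a binomial
    historical likelihood (constant binomial coefficient**delta
    omitted, as it cancels in the posterior of delta)."""
    return betaln(a + delta * y0, b + delta * (n0 - y0)) - betaln(a, b)

def npp_posterior_params(delta, y0, n0, y, n, a=1.0, b=1.0):
    """Given delta, the posterior of theta under the normalized power
    prior is Beta with these parameters: historical data enter with
    weight delta, current data with weight one."""
    return a + delta * y0 + y, b + delta * (n0 - y0) + (n - y)

# historical: 30/100 responders; current: 12/40; borrow at delta = 0.5
a1, b1 = npp_posterior_params(delta=0.5, y0=30, n0=100, y=12, n=40)
```

With `delta = 0` the historical data are ignored entirely; with `delta = 1` they are pooled at full weight, which is exactly the pair of endpoints the paper's sampler exploits.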

19.
No tillage (NT) has been proposed as a practice to reduce the adverse effects of tillage on contaminant (e.g., sediment and nutrient) losses to waterways. Nonetheless, previous reports on the impacts of NT on nitrate (NO3−) leaching are inconsistent. A global meta-analysis was conducted to test the hypothesis that the response of NO3− leaching under NT, relative to tillage, is associated with tillage type (inversion vs. non-inversion tillage), soil properties (e.g., soil organic carbon [SOC]), climate factors (i.e., water input), and management practices (e.g., NT duration and nitrogen fertilizer inputs). Overall, compared with all forms of tillage combined, NT had 4% and 14% greater area-scaled and yield-scaled NO3− leaching losses, respectively. NO3− leaching under NT tended to be 7% greater than that under inversion tillage but comparable to that under non-inversion tillage. Greater NO3− leaching under NT, compared with inversion tillage, was most evident under short-duration NT (<5 years), where water inputs were low (<2 mm day−1), in medium-texture and low-SOC (<1%) soils, and at both higher (>200 kg ha−1) and lower (0–100 kg ha−1) rates of nitrogen addition. Of these, SOC was the most important factor affecting the risk of NO3− leaching under NT compared with inversion tillage. Globally, on average, the greater amount of NO3− leached under NT, compared with inversion tillage, was mainly attributed to corresponding increases in drainage. The percentage of global cropping land with a lower risk of NO3− leaching under NT, relative to inversion tillage, increased with NT duration, from 31% at 3 years to 54% at 15 years. This study highlights that the benefits of NT adoption for mitigating NO3− leaching are most likely in long-term NT cropping systems on high-SOC soils.

20.
In an observational study, the treatment received and the outcome exhibited may be associated in the absence of an effect caused by the treatment, even after controlling for observed covariates. Two tactics are common: (i) a test for unmeasured bias may be obtained using a secondary outcome for which the effect is known, and (ii) a sensitivity analysis may explore the magnitude of unmeasured bias that would need to be present to explain the observed association as something other than an effect caused by the treatment. Can such a test for unmeasured bias inform the sensitivity analysis? If the test does not discover evidence of unmeasured bias, are conclusions therefore insensitive to larger unmeasured biases? Conversely, if the test does find evidence of bias, what does that imply about sensitivity to biases? This problem is formulated in a new way as a convex quadratically constrained quadratic program and solved on a large scale using interior point methods in a modern solver. That is, a convex quadratic function of $N$ variables is minimized subject to constraints on linear and convex quadratic functions of these variables. The quadratic function that is minimized is a statistic for the primary outcome that is a function of the unknown treatment assignment probabilities. The quadratic function that constrains this minimization is a statistic for the subsidiary outcome that is also a function of these same unknown treatment assignment probabilities. In effect, the first statistic is minimized over a confidence set for the unknown treatment assignment probabilities supplied by the unaffected outcome. This process avoids the mistake of interpreting the failure to reject a hypothesis as support for the truth of that hypothesis. The method is illustrated by a study of the effects of light daily alcohol consumption on high-density lipoprotein (HDL) cholesterol levels, in which the method quickly optimizes a nonlinear function of $N = 800$ variables subject to linear and quadratic constraints. In the example, strong evidence of unmeasured bias is found using the subsidiary outcome, but, perhaps surprisingly, this finding makes the primary comparison insensitive to larger biases.
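The shape of the optimization (a convex quadratic objective minimized over a convex quadratic constraint set) can be sketched with a two-variable toy problem; the paper's actual problem has $N = 800$ variables, a structured statistic as objective, and an interior point solver. All matrices and numbers here are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex QCQP: minimize x'Qx + c'x  subject to  x'x <= r
Q = np.array([[2.0, 0.0], [0.0, 1.0]])  # positive definite objective
c = np.array([-2.0, 0.0])
r = 0.09                                 # convex quadratic constraint level

obj = lambda x: x @ Q @ x + c @ x
con = {"type": "ineq", "fun": lambda x: r - x @ x}  # x'x <= r
res = minimize(obj, x0=np.zeros(2), constraints=[con], method="SLSQP")
# the unconstrained minimizer (0.5, 0) is infeasible, so the KKT
# solution sits on the constraint boundary at (0.3, 0)
```

In the paper's formulation, the role of the constraint set is played by a confidence set for the unknown treatment assignment probabilities derived from the unaffected (subsidiary) outcome.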
