Similar Documents
20 similar documents found.
1.
Ye He, Ling Zhou, Yingcun Xia, Huazhen Lin. Biometrics, 2023, 79(3): 2157-2170
The existing methods for subgroup analysis can be roughly divided into two categories: finite mixture models (FMM) and regularization methods with an $\ell_1$-type penalty. In this paper, by introducing group centers and an $\ell_2$-type penalty into the loss function, we propose a novel center-augmented regularization (CAR) method; this method can be regarded as a unification of the regularization method and FMM, and hence achieves higher efficiency and robustness with simpler computations than the existing methods. In particular, its computational complexity is reduced from the $O(n^2)$ of the conventional pairwise-penalty method to only $O(nK)$, where $n$ is the sample size and $K$ is the number of subgroups. The asymptotic normality of CAR is established, and the convergence of the algorithm is proven. CAR is applied to a dataset from a multicenter clinical trial, Buprenorphine in the Treatment of Opiate Dependence; it produces a larger $R^2$ and identifies three additional significant variables compared with the existing methods.
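The $O(nK)$ claim is easy to see in code: each iteration compares the $n$ subject-level effects with only $K$ centers rather than all $O(n^2)$ pairs. Below is a minimal Python sketch of that idea, assuming a simplified CAR-style objective in which subject-specific intercepts are shrunk toward their nearest center; the variable names and update rules are illustrative, not the authors' implementation.

```python
import numpy as np

def car_fit(y, X, K, lam=1.0, n_iter=50):
    """Sketch of a center-augmented-regularization-style fit: alternate
    between (a) assigning each subject-level effect mu_i to its nearest
    of K centers (an O(nK) step), (b) shrinking mu toward the assigned
    centers, and (c) refitting the regression coefficients."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    mu = y - X @ beta                           # crude subject-level effects
    centers = np.quantile(mu, np.linspace(0.1, 0.9, K))
    for _ in range(n_iter):
        d = (mu[:, None] - centers[None, :]) ** 2   # n x K distance table
        z = d.argmin(axis=1)                        # subgroup assignments
        mu = (y - X @ beta + lam * centers[z]) / (1.0 + lam)
        for k in range(K):                          # update group centers
            if np.any(z == k):
                centers[k] = mu[z == k].mean()
        beta = np.linalg.lstsq(X, y - mu, rcond=None)[0]
    return beta, mu, centers, z
```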

2.
For ordinal outcomes, the average treatment effect is often ill-defined and hard to interpret. Echoing Agresti and Kateri, we argue that the relative treatment effect can be a useful measure, especially for ordinal outcomes; it is defined as $\gamma = \mathrm{pr}\{Y_i(1) > Y_i(0)\} - \mathrm{pr}\{Y_i(1) < Y_i(0)\}$, with $Y_i(1)$ and $Y_i(0)$ being the potential outcomes of unit $i$ under treatment and control, respectively. Given the marginal distributions of the potential outcomes, we derive the sharp bounds on $\gamma$, which are identifiable from the observed data. Agresti and Kateri focused on modeling strategies under the assumption of independent potential outcomes, but we allow for arbitrary dependence.
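Since only the marginals of the potential outcomes are identified, sharp bounds on $\gamma$ can be computed by optimizing over every joint distribution compatible with those marginals. The sketch below does this with two linear programs over the transportation polytope; this is a generic optimal-transport formulation under the abstract's setup, not necessarily the paper's own derivation.

```python
import numpy as np
from scipy.optimize import linprog

def gamma_bounds(p1, p0):
    """Sharp bounds on gamma = P(Y(1) > Y(0)) - P(Y(1) < Y(0)) given the
    marginal pmfs p1 (treated) and p0 (control) over J ordinal levels."""
    J = len(p1)
    # objective c[j,k] = sign(j - k), acting on the joint pmf pi[j,k]
    c = np.sign(np.subtract.outer(np.arange(J), np.arange(J))).ravel().astype(float)
    rows = []
    for j in range(J):                     # row sums must equal p1
        m = np.zeros((J, J)); m[j, :] = 1.0; rows.append(m.ravel())
    for k in range(J):                     # column sums must equal p0
        m = np.zeros((J, J)); m[:, k] = 1.0; rows.append(m.ravel())
    A, b = np.array(rows), np.concatenate([p1, p0])
    lo = linprog(c, A_eq=A, b_eq=b, bounds=(0, 1)).fun
    hi = -linprog(-c, A_eq=A, b_eq=b, bounds=(0, 1)).fun
    return lo, hi

print(gamma_bounds([0.2, 0.5, 0.3], [0.3, 0.4, 0.3]))  # bounds straddle 0
```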

3.
Pragmatic trials evaluating health care interventions often adopt cluster randomization due to scientific or logistical considerations. Systematic reviews have shown that coprimary endpoints are not uncommon in pragmatic trials but are seldom recognized in sample size or power calculations. While methods for power analysis based on $K$ ($K \ge 2$) binary coprimary endpoints are available for cluster randomized trials (CRTs), to our knowledge, methods for continuous coprimary endpoints are not yet available. Assuming a multivariate linear mixed model (MLMM) that accounts for multiple types of intraclass correlation coefficients among the observations in each cluster, we derive the closed-form joint distribution of the $K$ treatment effect estimators to facilitate sample size and power determination with different types of null hypotheses under equal cluster sizes. We characterize the relationship between the power of each test and different types of correlation parameters. We further relax the equal cluster size assumption and approximate the joint distribution of the $K$ treatment effect estimators through the mean and coefficient of variation of cluster sizes. Our simulation studies with a finite number of clusters indicate that the power predicted by our method agrees well with the empirical power when the parameters in the MLMM are estimated via the expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.
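When the $K$ effect estimators are jointly (approximately) multivariate normal, the power to declare success on all coprimary endpoints is a single multivariate normal probability. The sketch below illustrates that generic calculation with hypothetical inputs; it is not the paper's closed-form MLMM derivation, which additionally maps intraclass correlations and cluster sizes into the standard errors and correlations used here.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def power_all_endpoints(deltas, ses, corr, alpha=0.05):
    """P(all K Wald tests significant), treating the K estimators as
    jointly normal: means deltas, standard errors ses, correlation corr.
    Ignores the negligible opposite-tail rejection region."""
    zcrit = norm.ppf(1 - alpha / 2)
    m = np.asarray(deltas, float) / np.asarray(ses, float)
    # P(Z_k > zcrit for all k) with Z ~ N(m, corr): flip signs, use the CDF
    mvn = multivariate_normal(mean=-m, cov=np.asarray(corr, float))
    return float(mvn.cdf(np.full(len(m), -zcrit)))

# hypothetical: two endpoints, effects of 3.0 and 2.8 SEs, correlation 0.3
print(power_all_endpoints([0.30, 0.28], [0.10, 0.10],
                          [[1.0, 0.3], [0.3, 1.0]]))
```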

4.

Aim

Understanding connections between environment and biodiversity is crucial for conservation, identifying causes of ecosystem stress, and predicting population responses to changing environments. Explaining biodiversity requires an understanding of how species richness and environment covary across scales. Here, we identify scales and locations at which biodiversity is generated and correlates with environment.

Location

Full latitudinal range per continent.

Time Period

Present day.

Major Taxa Studied

Terrestrial vertebrates: all mammals, carnivorans, bats, songbirds, hummingbirds, amphibians.

Methods

We describe the use of wavelet power spectra, cross-power and coherence for identifying scale-dependent trends across Earth's surface (a minimal computational sketch follows this abstract). Spectra reveal scale- and location-dependent coherence between species richness and topography (E), mean annual precipitation (Pn), temperature (Tm) and annual temperature range (ΔT).

Results

>97% of species richness of the taxa studied is generated at large scales, that is, wavelengths ≥10³ km, with 30%–69% generated at scales ≥10⁴ km. At these scales, richness tends to be highly coherent and anti-correlated with E and ΔT, and positively correlated with Pn and Tm. Coherence between carnivoran richness and ΔT is low across scales, implying insensitivity to seasonal temperature variations. Conversely, amphibian richness is strongly anti-correlated with ΔT at large scales. At scales ≥10³ km, examined taxa, except carnivorans, show highest richness within the tropics. Terrestrial plateaux exhibit high coherence between carnivorans and E at scales ≥10³ km, consistent with a contribution of large-scale tectonic processes to biodiversity. Results are similar across different continents and for global latitudinal averages. Spectral admittance permits derivation of rules-of-thumb relating long-wavelength environmental and species richness trends.

Main Conclusions

Sensitivities of mammal, bird and amphibian populations to environment are highly scale dependent. At large scales, carnivoran richness is largely independent of temperature and precipitation, whereas amphibian richness correlates strongly with precipitation and temperature, and anti-correlates with temperature range. These results pave the way for spectral-based calibration of models that predict biodiversity response to climate change scenarios.
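For readers unfamiliar with the Methods above, the workhorse is a continuous wavelet transform along spatial transects, from which power ($|W|^2$), cross-power ($W_1\overline{W_2}$) and coherence are assembled. A minimal Morlet-wavelet sketch in Python (a Torrence & Compo-style Fourier implementation; the constants and normalization are illustrative, and the authors' actual pipeline may differ):

```python
import numpy as np

def morlet_cwt(signal, scales, dx=1.0, w0=6.0):
    """Continuous wavelet transform of a 1-D spatial transect using an
    analytic Morlet wavelet evaluated in Fourier space. Returns the
    complex coefficients W[scale, position]."""
    n = len(signal)
    freqs = np.fft.fftfreq(n, d=dx)
    sig_hat = np.fft.fft(signal - np.mean(signal))
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (2*np.pi*s*freqs - w0) ** 2)
        psi_hat *= np.sqrt(2 * np.pi * s / dx) * (freqs > 0)  # analytic part
        W[i] = np.fft.ifft(sig_hat * psi_hat)
    return W

# power = |W|^2; cross-power of richness vs. elevation = W_r * conj(W_e);
# coherence additionally requires smoothing the cross-power in scale/space.
```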

5.
K.O. Ekvall, M. Bottai. Biometrics, 2023, 79(3): 2286-2297
We propose a unified framework for likelihood-based regression modeling when the response variable has finite support. Our work is motivated by the fact that, in practice, observed data are discrete and bounded. The assumed model includes, as special cases, models previously considered for interval-censored variables with log-concave distributions. The resulting log-likelihood is concave, which we use to establish asymptotic normality of its maximizer as the number of observations $n$ tends to infinity with the number of parameters $d$ fixed, and rates of convergence of $\ell_1$-regularized estimators when the true parameter vector is sparse and $d$ and $n$ both tend to infinity with $\log(d)/n \rightarrow 0$. We consider an inexact proximal Newton algorithm for computing estimates and give theoretical guarantees for its convergence. The range of possible applications is wide, including but not limited to survival analysis in discrete time, the modeling of outcomes on scored surveys and questionnaires, and, more generally, interval-censored regression. The applicability and usefulness of the proposed methods are illustrated in simulations and data examples.

6.
The power prior has been widely used to discount the amount of information borrowed from historical data in the design and analysis of clinical trials. It is realized by raising the likelihood function of the historical data to a power parameter $\delta \in [0, 1]$, which quantifies the heterogeneity between the historical and the new study. In a fully Bayesian approach, a natural extension is to assign a hyperprior to $\delta$ such that the posterior of $\delta$ can reflect the degree of similarity between the historical and current data. To comply with the likelihood principle, an extra normalizing factor needs to be calculated, and such a prior is known as the normalized power prior. However, the normalizing factor involves an integral of a prior multiplied by a fractional likelihood and needs to be computed repeatedly over different $\delta$ during posterior sampling, which makes its use prohibitive in practice for most elaborate models. This work provides an efficient framework to implement the normalized power prior in clinical studies. It bypasses the aforementioned effort by sampling from the power prior with $\delta = 0$ and $\delta = 1$ only. Such a posterior sampling procedure can facilitate the use of a random $\delta$ with adaptive borrowing capability in general models. The numerical efficiency of the proposed method is illustrated via extensive simulation studies, a toxicological study, and an oncology study.
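In conjugate toy models the normalizing factor is available in closed form, which makes the mechanics of the normalized power prior easy to see before resorting to the sampling scheme proposed in the abstract. A beta-binomial sketch (uniform hyperprior on $\delta$ assumed; all numbers hypothetical):

```python
import numpy as np
from scipy.special import betaln

def delta_posterior(y0, n0, y, n, a=1.0, b=1.0, n_grid=200):
    """Posterior of the power parameter delta under the *normalized*
    power prior in a beta-binomial model. The normalizing factor
    C(delta) is analytic here: a ratio of Beta functions."""
    grid = np.linspace(1e-3, 1.0, n_grid)
    logpost = np.array([
        betaln(a + d*y0 + y, b + d*(n0 - y0) + n - y)   # marginal likelihood
        - betaln(a + d*y0, b + d*(n0 - y0))             # normalizing factor
        for d in grid])
    w = np.exp(logpost - logpost.max())
    return grid, w / (w.sum() * (grid[1] - grid[0]))    # normalized density

# similar historical and current data -> posterior mass pushed toward 1
d, post = delta_posterior(y0=30, n0=100, y=33, n=110)
print(d[np.argmax(post)])
```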

7.
Fluorescence (FL) quenching of 3-aminoquinoline (3AQ) by the halide ions Cl⁻, Br⁻ and I⁻ has been explored in an aqueous acidic medium using steady-state and time-domain FL measurement techniques. The halide ions produced no significant change in the absorption spectra of 3AQ in an aqueous acidic medium. The FL intensity was strongly quenched by I⁻ ions, and the order of FL quenching by halide ions was I⁻ > Br⁻ > Cl⁻. The decrease in FL lifetime along with the reduction in FL intensity of 3AQ indicated the dynamic nature of the quenching. The obtained $K_{\mathrm{SV}}$ values were 328 M⁻¹ for I⁻ and 119 M⁻¹ for Br⁻, and the corresponding $k_q$ values were ~1.66 × 10¹⁰ M⁻¹ s⁻¹ and 6.02 × 10⁹ M⁻¹ s⁻¹. The observations suggest that FL quenching is likely governed by an electron-transfer process together with heavy-atom effects.
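The reported constants follow the usual Stern-Volmer analysis, $F_0/F = 1 + K_{\mathrm{SV}}[Q]$ with $k_q = K_{\mathrm{SV}}/\tau_0$ (the reported pair for I⁻ implies $\tau_0 \approx 20$ ns). A short sketch of that fit; the titration data below are hypothetical, chosen only to be consistent with $K_{\mathrm{SV}} \approx 330$ M⁻¹:

```python
import numpy as np

def stern_volmer(F0, F, Q, tau0_ns):
    """Fit F0/F = 1 + Ksv*[Q] by linear least squares and derive the
    bimolecular quenching constant kq = Ksv / tau0."""
    Q = np.asarray(Q, float)
    ratio = np.asarray(F0, float) / np.asarray(F, float)
    Ksv, _intercept = np.polyfit(Q, ratio, 1)   # slope in M^-1
    kq = Ksv / (tau0_ns * 1e-9)                 # M^-1 s^-1
    return Ksv, kq

# hypothetical iodide titration of 3AQ, tau0 taken as ~20 ns
Ksv, kq = stern_volmer(100.0, [100.0, 86.0, 75.0, 60.0],
                       [0.0, 0.0005, 0.001, 0.002], 20.0)
print(Ksv, kq)   # ~3.3e2 M^-1, ~1.7e10 M^-1 s^-1
```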

8.
No tillage (NT) has been proposed as a practice to reduce the adverse effects of tillage on contaminant (e.g., sediment and nutrient) losses to waterways. Nonetheless, previous reports on the impacts of NT on nitrate (NO₃⁻) leaching are inconsistent. A global meta-analysis was conducted to test the hypothesis that the response of NO₃⁻ leaching under NT, relative to tillage, is associated with tillage type (inversion vs non-inversion tillage), soil properties (e.g., soil organic carbon [SOC]), climate factors (i.e., water input), and management practices (e.g., NT duration and nitrogen fertilizer inputs). Overall, compared with all forms of tillage combined, NT had 4% and 14% greater area-scaled and yield-scaled NO₃⁻ leaching losses, respectively. NO₃⁻ leaching under NT tended to be 7% greater than that of inversion tillage but comparable to non-inversion tillage. Greater NO₃⁻ leaching under NT, compared with inversion tillage, was most evident under short-duration NT (<5 years), where water inputs were low (<2 mm day⁻¹), in medium-texture and low-SOC (<1%) soils, and at both higher (>200 kg ha⁻¹) and lower (0–100 kg ha⁻¹) rates of nitrogen addition. Of these, SOC was the most important factor affecting the risk of NO₃⁻ leaching under NT compared with inversion tillage. Globally, on average, the greater amount of NO₃⁻ leached under NT, compared with inversion tillage, was mainly attributed to corresponding increases in drainage. The percentage of global cropping land with lower risk of NO₃⁻ leaching under NT, relative to inversion tillage, increased with NT duration from 3 years (31%) to 15 years (54%). This study highlighted that the benefits of NT adoption for mitigating NO₃⁻ leaching are most likely in long-term NT cropping systems on high-SOC soils.
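Percentage effects such as "4% greater area-scaled leaching" are typically obtained by pooling log response ratios, $\ln RR = \ln(\bar{x}_{NT}/\bar{x}_{till})$, across studies and back-transforming. A minimal DerSimonian-Laird random-effects sketch with hypothetical inputs (the published analysis likely uses more elaborate weighting and moderator models):

```python
import numpy as np

def pooled_lnrr(mean_nt, mean_till, var_lnrr):
    """Random-effects pooling of log response ratios (DerSimonian-Laird).
    Returns the back-transformed relative change, e.g. 0.04 = '4% greater'."""
    y = np.log(np.asarray(mean_nt, float) / np.asarray(mean_till, float))
    v = np.asarray(var_lnrr, float)
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)                        # heterogeneity Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))   # between-study var
    ws = 1.0 / (v + tau2)
    return np.expm1(np.sum(ws * y) / np.sum(ws))

print(pooled_lnrr([21.0, 18.5, 30.2], [20.0, 18.0, 28.0],
                  [0.01, 0.02, 0.015]))
```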

9.
In an observational study, the treatment received and the outcome exhibited may be associated in the absence of an effect caused by the treatment, even after controlling for observed covariates. Two tactics are common: (i) a test for unmeasured bias may be obtained using a secondary outcome for which the effect is known, and (ii) a sensitivity analysis may explore the magnitude of unmeasured bias that would need to be present to explain the observed association as something other than an effect caused by the treatment. Can such a test for unmeasured bias inform the sensitivity analysis? If the test for bias does not discover evidence of unmeasured bias, are conclusions therefore insensitive to larger unmeasured biases? Conversely, if the test for bias does find evidence of bias, what does that imply about sensitivity to biases? This problem is formulated in a new way as a convex quadratically constrained quadratic program and solved on a large scale by interior point methods in a modern solver. That is, a convex quadratic function of $N$ variables is minimized subject to constraints on linear and convex quadratic functions of these variables. The quadratic function that is minimized is a statistic for the primary outcome that is a function of the unknown treatment assignment probabilities. The quadratic function that constrains this minimization is a statistic for the subsidiary outcome that is also a function of these same unknown treatment assignment probabilities. In effect, the first statistic is minimized over a confidence set for the unknown treatment assignment probabilities supplied by the unaffected outcome. This process avoids the mistake of interpreting the failure to reject a hypothesis as support for the truth of that hypothesis. The method is illustrated by a study of the effects of light daily alcohol consumption on high-density lipoprotein (HDL) cholesterol levels. In this study, the method quickly optimizes a nonlinear function of $N = 800$ variables subject to linear and quadratic constraints. In the example, strong evidence of unmeasured bias is found using the subsidiary outcome, but, perhaps surprisingly, this finding makes the primary comparison insensitive to larger biases.
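The shape of that optimization is easy to express with an off-the-shelf convex solver. The sketch below mirrors the structure described above (a convex quadratic in the assignment probabilities, minimized subject to a quadratic cut from the subsidiary outcome and box constraints from the sensitivity parameter $\Gamma$), but every statistic and constant is a placeholder, not Rosenbaum's actual test statistics.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 800
a = rng.normal(size=N)        # placeholder scores, primary outcome
b = rng.normal(size=N)        # placeholder scores, subsidiary outcome
gamma = 1.5                   # sensitivity parameter Gamma

p = cp.Variable(N)            # unknown treatment assignment probabilities
primary = cp.sum_squares(cp.multiply(a, p))      # convex quadratic statistic
subsidiary = cp.sum_squares(cp.multiply(b, p))   # convex quadratic statistic

prob = cp.Problem(
    cp.Minimize(primary),
    [subsidiary <= 50.0,                         # confidence-set constraint
     p >= 1.0 / (1.0 + gamma),                   # Rosenbaum-style bounds
     p <= gamma / (1.0 + gamma)])
prob.solve()                  # handed to an interior-point solver
print(prob.value)
```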

10.
Digestate, a by-product of biogas production, is widely recognized as a promising renewable nitrogen (N) source with high potential to replace synthetic fertilizers. Yet, inefficient digestate use can lead to pollutant N losses as ammonia (NH₃) volatilization, nitrous oxide (N₂O) emissions and nitrate (NO₃⁻) leaching. Cover crops (CCs) may reduce some of these losses and recycle the N back into the soil after incorporation, but the effect on the N balance depends on the CC species. In a one-year field study, we tested two application methods (i.e., surface broadcasting, BDC; and shallow injection, INJ) of the liquid fraction of separated co-digested cattle slurry (digestate liquid fraction [DLF]), combined with different winter CC options (i.e., rye, white mustard or bare fallow), as starter fertilizer for maize. Later, side-dressing with urea was required to fulfil maize N requirements. We tested treatment effects on yield, N uptake, N-use efficiency parameters, and N losses in the form of N₂O emissions and NO₃⁻ leaching. CC development and biomass production were strongly affected by their contrasting frost tolerance, with spring regrowth for rye, while mustard was winter-killed. After the CCs, injection of DLF increased N₂O emissions significantly compared with BDC (emission factor of 2.69% vs. 1.66%). Nitrous oxide emissions accounted for a small part (11%–13%) of the overall yield-scaled N losses (0.46–0.97 kg N Mg grain⁻¹). The adoption of CCs reduced fall NO₃⁻ leaching, which was 51% and 64% lower for mustard and rye, respectively, than under bare soil. In addition, rye reduced NO₃⁻ leaching during spring and summer after termination by promoting N immobilization, leading to 57% lower annual leaching losses compared with mustard. The DLF application method modified N-loss pathways, but not the cumulative yield-scaled N losses. Overall, these insights contribute to an evidence-based design of cropping systems in which nutrients are recycled more efficiently.

11.
Studies of anthropological genetics and bioarcheology often examine the degree of among-group variation in quantitative traits such as craniometrics and anthropometrics. One comparative index of among-group differentiation is the minimum value of Wright's $F_{ST}$ as estimated from quantitative traits. This measure has been used in certain population-genetic applications such as comparison with $F_{ST}$ estimated from genetic data, although some inferences are limited by how well the data and study design fit the underlying population-genetic model. In many cases, all that is needed is a simple measure of among-group variation. One such measure is $R^2$, the proportion of total phenotypic variation accounted for by among-group phenotypic variation, a measure easily obtained from analysis of variance and regression methods. This paper shows that $R^2$ and minimum $F_{ST}$ are closely related as $\mathrm{Min}\,F_{ST} = R^2/(2 - R^2)$. $R^2$ is computationally easy and may be useful in cases where all we need is a simple measure of relative among-group differentiation.
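The stated relation is a one-liner to apply; it follows from the variance-component reading: if $R^2 = \sigma^2_A/(\sigma^2_A + \sigma^2_W)$, then $F_{ST} = \sigma^2_A/(\sigma^2_A + 2\sigma^2_W) = R^2/(2 - R^2)$ at its minimum (full heritability assumed). A trivial sketch:

```python
def min_fst(r2):
    """Minimum Wright's FST implied by the among-group proportion of
    phenotypic variance R^2: Min FST = R^2 / (2 - R^2)."""
    return r2 / (2.0 - r2)

print(min_fst(0.10))  # R^2 = 0.10 -> minimum FST ~= 0.053
```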

12.
The question of how individual patient data from cohort studies or historical clinical trials can be leveraged for designing more powerful, or smaller yet equally powerful, clinical trials becomes increasingly important in the era of digitalization. Today, traditional statistical analysis approaches may seem questionable to practitioners in light of ubiquitous historical prognostic information. Several methodological developments aim at incorporating historical information in the design and analysis of future clinical trials, most importantly Bayesian information borrowing, propensity score methods, stratification, and covariate adjustment. Adjusting the analysis with respect to a prognostic score, obtained from some model applied to historical data, has received renewed interest from a machine learning perspective, and we study the potential of this approach for randomized clinical trials. In an idealized situation of a normal outcome in a two-arm trial with 1:1 allocation, we derive a simple sample size reduction formula as a function of two criteria characterizing the prognostic score: (1) the coefficient of determination $R^2$ on historical data and (2) the correlation $\rho$ between the estimated and the true unknown prognostic scores. While maintaining the same power, the original total sample size $n$ planned for the unadjusted analysis reduces to $(1 - R^2\rho^2) \times n$ in an adjusted analysis. Robustness in less ideal situations was assessed empirically. We conclude that there is potential for substantially more powerful or smaller trials, but only when prognostic scores can be accurately estimated.
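The formula translates directly into a planning utility. A sketch with hypothetical numbers, showing that a prognostic score with $R^2 = 0.5$ on historical data and estimation correlation $\rho = 0.9$ would cut the required sample size by roughly 40%:

```python
def adjusted_sample_size(n, r2, rho):
    """Total sample size for the covariate-adjusted analysis at equal
    power, per the abstract's formula: n_adj = (1 - R^2 * rho^2) * n."""
    return (1.0 - r2 * rho ** 2) * n

print(adjusted_sample_size(n=1000, r2=0.5, rho=0.9))  # 595.0 patients
```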

13.
Inference of population structure from genetic data plays an important role in population and medical genetics studies. With the advancement and decreasing cost of sequencing technology, increasingly available whole genome sequencing data provide much richer information about the underlying population structure. The traditional method for computing and selecting the top principal components (PCs) that capture population structure, originally developed for array-based genotype data, may not perform well on sequencing data for two reasons. First, the number of genetic variants $p$ is much larger than the sample size $n$ in sequencing data, so the sample-to-marker ratio $n/p$ is nearly zero, violating the assumption of the Tracy-Widom test used in that method. Second, the method might not handle linkage disequilibrium well in sequencing data. To resolve these two practical issues, we propose a new method called ERStruct to determine the number of top informative PCs based on sequencing data. More specifically, we propose to use the ratio of consecutive eigenvalues as a more robust test statistic, and we approximate its null distribution using modern random matrix theory. Both simulation studies and applications to two public data sets from the HapMap 3 and the 1000 Genomes Projects demonstrate the empirical performance of our ERStruct method.
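The core statistic is cheap to compute even when $p \gg n$, because the nonzero eigenvalues of the $p \times p$ sample covariance equal those of the small $n \times n$ Gram matrix. A sketch of the eigenvalue-ratio computation (the random-matrix-theory calibration of the null distribution, which is the paper's main contribution, is not reproduced here):

```python
import numpy as np

def eigen_ratios(G):
    """Ratios of consecutive eigenvalues of the sample covariance of an
    (n x p) genotype matrix G, computed via the n x n Gram matrix after
    standardizing columns. A sharp drop in the ratio after the k-th
    eigenvalue suggests keeping k top PCs."""
    G = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)
    lam = np.linalg.eigvalsh(G @ G.T / G.shape[1])[::-1]  # descending
    return lam[1:] / lam[:-1]
```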

14.
Tropical and subtropical forest biomes are a main hotspot for the global nitrogen (N) cycle. Yet, our understanding of global soil N cycle patterns and drivers and their response to N deposition in these biomes remains elusive. Through a meta-analysis of 2426 single and 161 paired observations from 89 published ¹⁵N pool dilution and tracing studies, we found that gross N mineralization (GNM), immobilization of ammonium ($I_{\mathrm{NH_4}}$) and nitrate ($I_{\mathrm{NO_3}}$), and dissimilatory nitrate reduction to ammonium (DNRA) were significantly higher in tropical forests than in subtropical forests. The soil N cycle was conservative in tropical forests, with ratios of gross nitrification (GN) to $I_{\mathrm{NH_4}}$ (GN/$I_{\mathrm{NH_4}}$) and of soil nitrate to ammonium (NO₃⁻/NH₄⁺) less than one, but was leaky in subtropical forests, with GN/$I_{\mathrm{NH_4}}$ and NO₃⁻/NH₄⁺ higher than one. Soil NH₄⁺ dynamics were mainly controlled by soil substrate (e.g., total N), but climatic factors (e.g., precipitation and/or temperature) were more important in controlling soil NO₃⁻ dynamics. Soil texture played a role, as GNM and $I_{\mathrm{NH_4}}$ were positively correlated with silt and clay contents, while $I_{\mathrm{NO_3}}$ and DNRA were positively correlated with sand and clay contents, respectively. The soil N cycle was more sensitive to N deposition in tropical forests than in subtropical forests. N deposition leads to a leaky N cycle in tropical forests, as evidenced by the increase in GN/$I_{\mathrm{NH_4}}$, NO₃⁻/NH₄⁺, and nitrous oxide emissions and the decrease in $I_{\mathrm{NO_3}}$ and DNRA, mainly due to the decrease in soil microbial biomass and pH. Dominant tree species can also influence the soil N cycle pattern, which changed from conservative in deciduous forests to leaky in coniferous forests. We provide global evidence that tropical, but not subtropical, forests are characterized by soil N dynamics sustaining N availability, and that N deposition inhibits soil N retention and stimulates N losses in these biomes.

15.
In this work, we applied a multi-information source modeling technique to solve a multi-objective Bayesian optimization problem involving the simultaneous minimization of cost and maximization of growth for serum-free C2C12 cells, using a hypervolume improvement acquisition function. In sequential batches of custom media experiments designed using our Bayesian criteria, collected using multiple assays targeting different cellular growth dynamics, the algorithm learned to identify the trade-off relationship between long-term growth and cost. We were able to identify several media with >100% more growth of C2C12 cells than the control, as well as a medium with 23% more growth at only 62.5% of the cost of the control. These algorithmically generated media also maintained growth far past the study period, indicating that the modeling approach approximates the cell growth well from an extremely limited data set.
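The acquisition target is the hypervolume dominated by the current cost/growth Pareto front. For two objectives the dominated hypervolume has a simple sweep computation, sketched below with hypothetical points (cost minimized, growth negated so both objectives are minimized); the paper's multi-information-source surrogate and acquisition machinery are not reproduced.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points (both objectives
    minimized) relative to a reference point: sweep by increasing first
    coordinate, accumulating rectangles as the best second coordinate
    improves."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, best_y = 0.0, ref[1]
    for x, y in pts:
        if y < best_y:
            hv += (ref[0] - x) * (best_y - y)
            best_y = y
    return hv

# hypothetical media as (relative cost, -relative growth) vs. control ref
front = [(0.625, -1.23), (1.0, -2.0)]
print(hypervolume_2d(front, ref=(2.0, 0.0)))  # 2.46125
```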

16.
Co-firing residual lignocellulosic biomass with fossil fuels is often used to reduce greenhouse gas (GHG) emissions, especially in processes like cement production where fuel costs are critical and residual biomass can be obtained at a low cost. Since plants remove CO₂ from the atmosphere, CO₂ emissions from biomass combustion are often assumed to have zero global warming potential (GWP_bCO₂ = 0) and not to contribute to climate forcing. However, diverting residual biomass to energy use has recently been shown to increase the atmospheric CO₂ load when compared to business-as-usual (BAU) practices, resulting in GWP_bCO₂ values between 0 and 1. A detailed process model for a natural gas-fired cement plant producing 4200 megagrams of clinker per day was used to calculate the material and energy flows, as well as the lifecycle emissions associated with cement production without and with diverted biomass (supplying 50% of precalciner energy demand) from forestry and landfill sources. Biomass co-firing reduced natural gas demand in the precalciner of the cement plant by 39% relative to the reference scenario (100% natural gas), but the total demands for thermal, electrical, and diesel (transportation) energy increased by at least 14%. Assuming GWP_bCO₂ values of zero for biomass combustion, cement's lifecycle GHG intensity changed from the reference (natural gas only) plant by −40, −23, and −89 kg CO₂/Mg clinker for diverted biomass from slash burning, forest floor, and landfill sources, respectively. However, using the calculated GWP_bCO₂ values for diverted biomass from these same fuel sources, the changes in lifecycle GHG intensity were −37, +20, and +28 kg CO₂/Mg clinker, respectively. The switch from decreasing to increasing cement plant GHG emissions (i.e., for the forest floor and landfill feedstock scenarios) highlights the importance of calculating and using the GWP_bCO₂ factor when quantifying lifecycle GHG impacts associated with diverting residual biomass to bioenergy use.

17.
Use of lentiviral vectors (LVs) in clinical cell and gene therapy applications is growing. However, functional product loss during capture chromatography, typically anion-exchange (AIEX), remains a significant unresolved challenge for the design of economic processes. Despite AIEX's extensive use, variable performance and generally low recovery are reported. This poor understanding of product-loss mechanisms highlights a significant gap in our knowledge of LV adsorption and of other types of vector delivery systems. This work demonstrates that HIV-1 LV recovery over quaternary-amine membrane adsorbents is a function of time in the adsorbed state. Kinetic data for product loss in the column-bound state were generated. Fitting a second-order-like rate model, we observed a rapid drop in functional recovery due to increased irreversible binding for vectors encoding two separate transgenes ($t_{Y_{1/2}}$ = 12.7 and 18.7 min). Upon gradient elution, a two-peak elution profile indicating the presence of two distinct binding subpopulations is observed. Characterizing the loss kinetics of these two subpopulations showed a higher rate of vector loss in the weaker-binding peak. This work highlights time spent in the adsorbed state as a critical factor impacting LV product loss and the need for its consideration in LV AIEX process development workflows.
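A second-order-like decay $Y(t) = Y_0/(1 + k Y_0 t)$ gives a half-life $t_{Y_{1/2}} = 1/(k Y_0)$, which is how values like 12.7 min can be read off a fitted curve. A sketch with hypothetical recovery-versus-residence-time data (constructed to give $t_{Y_{1/2}} \approx 12.7$ min; not the paper's raw data):

```python
import numpy as np
from scipy.optimize import curve_fit

def second_order_decay(t, y0, k):
    """Second-order-like loss of functional recovery in the bound state."""
    return y0 / (1.0 + k * y0 * t)

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])          # minutes adsorbed
y = np.array([1.00, 0.72, 0.56, 0.39, 0.24])        # fractional recovery
(y0, k), _ = curve_fit(second_order_decay, t, y, p0=(1.0, 0.05))
print("t_half =", 1.0 / (k * y0), "min")            # ~12.7 min
```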

18.

Aim

Theoretically, woody biomass turnover time ($\tau$) quantified using outflux (i.e. tree mortality) predicts biomass dynamics better than using influx (i.e. productivity). This study aims at using forest inventory data to empirically test the outflux approach and generate a spatially explicit understanding of woody $\tau$ in mature forests. We further compared woody $\tau$ estimates with dynamic global vegetation models (DGVMs) and with a data assimilation product of C stocks and fluxes (CARDAMOM).

Location

Continents.

Time Period

Historical period from 1951 to 2018.

Major Taxa Studied

Trees and forests.

Methods

We compared the approaches of using outflux versus influx for estimating woody $\tau$ and predicting biomass accumulation rates. We investigated abiotic and biotic drivers of spatial woody $\tau$ and generated a spatially explicit map of woody $\tau$ at a 0.25-degree resolution across continents using machine learning. We further examined whether six DGVMs and CARDAMOM generally captured the observational pattern of woody $\tau$.

Results

Woody $\tau$ quantified by the outflux approach predicted biomass accumulation rates better ($R^2$ = 0.4–0.5) than the influx approach ($R^2$ = 0.1–0.4) across continents. We found large spatial variations of woody $\tau$ for mature forests, with the highest values in temperate forests (98.8 ± 2.6 y), followed by boreal forests (73.9 ± 3.6 y) and tropical forests. The map of woody $\tau$ extrapolated from plot data showed higher values in the wetter eastern and Pacific coast USA, Africa and the eastern Amazon. Climate (temperature and aridity index) and vegetation structure (tree density and forest age) were the dominant drivers of woody $\tau$ across continents. The highest woody $\tau$ in temperate forests was not captured by either the DGVMs or CARDAMOM.

Main Conclusions

Our study empirically demonstrates the preference for using outflux over influx to estimate woody $\tau$ for predicting biomass accumulation rates. The spatially explicit map of woody $\tau$ and the underlying drivers provide valuable information to improve the representation of forest demography and carbon turnover processes in DGVMs.
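The quantity being mapped is just a stock divided by a flux; the outflux-versus-influx comparison is about which flux goes in the denominator. A trivial sketch with hypothetical numbers:

```python
def turnover_time(biomass_stock, flux):
    """Woody biomass turnover time tau = stock / flux (years when the
    stock is in kg C m-2 and the flux in kg C m-2 y-1). Pass the
    mortality outflux rather than productivity for the outflux approach
    tested above."""
    return biomass_stock / flux

# hypothetical mature stand: 12 kg C m-2, mortality 0.16 kg C m-2 y-1
print(turnover_time(12.0, 0.16))  # tau = 75 y (cf. boreal 73.9 +/- 3.6 y)
```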

19.
Climate change will alter both the amount and pattern of precipitation and soil water availability, which will directly affect plant growth and nutrient acquisition and, potentially, ecosystem functions like nutrient cycling and losses as well. Given their role in facilitating plant nutrient acquisition and water stress resistance, arbuscular mycorrhizal (AM) fungi may modulate the effects of changing water availability on plants and ecosystem functions. The well-characterized mycorrhizal tomato (Solanum lycopersicum L.) genotype 76R (referred to as MYC+) and the mutant mycorrhiza-defective tomato genotype rmc were grown in microcosms in a glasshouse experiment manipulating both the pattern and amount of water supply in unsterilized field soil. Following 4 weeks of differing water regimes, we tested how AM fungi affected plant productivity and nutrient acquisition, short-term interception of a ¹⁵NH₄⁺ pulse, and inorganic nitrogen (N) leaching from microcosms. AM fungi enhanced plant nutrient acquisition under both lower and more variable water availability, for instance increasing plant P uptake more with a pulsed water supply compared to a regular supply and increasing shoot N concentration more when lower water amounts were applied. Although uptake of the short-term ¹⁵NH₄⁺ pulse was higher in rmc plants, possibly due to higher N demand, AM fungi subtly modulated NO₃⁻ leaching, decreasing losses by 54% at low and high water levels in the regular water regime, with small absolute amounts of NO₃⁻ leached (<1 kg N/ha). Since this study shows that AM fungi will likely be an important moderator of plant and ecosystem responses to adverse effects of more variable precipitation, management strategies that bolster AM fungal communities may in turn create systems that are more resilient to these changes.

20.
Photoinduced electron transfer (PET) is the most common mechanism proposed to account for quenching of fluorophores. Herein, the intrinsic fluorescence of dapoxetine (DPX) hydrochloride is in the "OFF" state owing to the deactivating effect of PET. Protonation of the nitrogen atom of the tertiary amine moiety in DPX hinders the deactivating effect of PET and switches the fluorescence to the "ON" state. This permits specific and sensitive determination of DPX in human plasma (lower limit of quantification [LLOQ] = 30.0 ng mL⁻¹). The suggested method protonates DPX using 0.25 M hydrochloric acid in anionic micelles [6.94 mM sodium dodecyl sulfate (SDS)], which leads to a marked enhancement of DPX fluorescence after excitation at 290 nm.
