Similar Articles
20 similar articles found.
1.
There have been considerable advances in the methodology for estimating dynamic treatment regimens, and for the design of sequential trials that can be used to collect unconfounded data to inform such regimens. However, relatively little attention has been paid to how such methodology could be used to advance understanding of optimal treatment strategies in a continuous dose setting, even though it is often the case that considerable patient heterogeneity in drug response along with a narrow therapeutic window may necessitate the tailoring of dosing over time. Such is the case with warfarin, a common oral anticoagulant. We propose novel, realistic simulation models based on pharmacokinetic-pharmacodynamic properties of the drug that can be used to evaluate potentially optimal dosing strategies. Our results suggest that this methodology can lead to a dosing strategy that performs well both within and across populations with different pharmacokinetic characteristics, and may assist in the design of randomized trials by narrowing the list of potential dosing strategies to those which are most promising.
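
The paper's actual simulation models are not reproduced in the abstract; as a rough illustration of the kind of pharmacokinetic-pharmacodynamic machinery involved, the sketch below simulates a hypothetical one-compartment PK model whose concentration drives an Emax-type effect on INR, with a toy protocol-style dose-adjustment rule. All parameter values and the adjustment rule are invented for illustration only.

```python
import numpy as np

# Hypothetical one-compartment PK with first-order elimination and an
# Emax-type PD link from concentration to INR; all values are illustrative.
def simulate_patient(days=30, ke=0.03, vd=10.0, emax=3.5, ec50=1.5,
                     baseline_inr=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dose = 5.0                                           # starting daily dose (mg)
    conc, inr_history, dose_history = 0.0, [], []
    for day in range(days):
        conc = conc * np.exp(-ke * 24) + dose / vd       # crude daily PK update
        inr = baseline_inr + emax * conc / (ec50 + conc) # Emax PD model
        inr += rng.normal(0, 0.1)                        # measurement noise
        inr_history.append(inr)
        dose_history.append(dose)
        # toy protocol-style rule: adjust dose toward the 2-3 INR window
        if inr > 3.0:
            dose *= 0.85
        elif inr < 2.0:
            dose *= 1.15
    return np.array(dose_history), np.array(inr_history)

doses, inrs = simulate_patient()
print("fraction of days with INR in [2, 3]:", np.mean((inrs >= 2) & (inrs <= 3)))
```

A candidate dosing strategy would then be scored by how long simulated patients, drawn with heterogeneous PK parameters, stay within the therapeutic window.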

2.
For many diseases with several treatment options, there is often no consensus on the best treatment to give individual patients. In such cases, it may be necessary to define a strategy for treatment assignment; that is, an algorithm that dictates the treatment an individual should receive based on their measured characteristics. Such a strategy or algorithm is also referred to as a treatment regime. The optimal treatment regime is the strategy that would provide the most public health benefit by preventing as many poor outcomes as possible. Using a measure that is a generalization of attributable risk (AR) and notions of potential outcomes, we derive an estimator for the proportion of events that could have been prevented had the optimal treatment regime been implemented. Traditional AR studies look at the added risk that can be attributed to exposure to some contaminant; here we will instead study the benefit that can be attributed to using the optimal treatment strategy. We will show how regression models can be used to estimate the optimal treatment strategy and the attributable benefit of that strategy. We also derive the large-sample properties of this estimator. As a motivating example, we will apply our methods to an observational study of 3856 patients treated at the Duke University Medical Center with prior coronary artery bypass graft surgery and further heart-related problems requiring a catheterization. The patients may be treated with either medical therapy alone or a combination of medical therapy and percutaneous coronary intervention without a general consensus on which is the best treatment for individual patients.
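
In the spirit of the attributable-risk analogy, the attributable benefit of the optimal regime can be written as the proportion of poor outcomes that would have been avoided under that regime. The notation below is assumed for illustration, not taken from the paper.

```latex
% Attributable benefit of the optimal regime (notation assumed, not from the paper):
% Y is the observed binary poor-outcome indicator and Y^{*}(d^{opt}) the potential
% outcome had everyone been treated according to the optimal regime d^{opt}.
\[
  AB \;=\; \frac{P(Y = 1) \;-\; P\{Y^{*}(d^{\mathrm{opt}}) = 1\}}{P(Y = 1)},
\]
% the proportion of poor outcomes that could have been prevented, paralleling the
% attributable risk AR = \{P(Y=1) - P(Y=1 \mid \text{unexposed})\}/P(Y=1), with
% "exposure" replaced by failure to follow the optimal treatment strategy.
```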

3.
Data-driven methods for personalizing treatment assignment have garnered much attention from clinicians and researchers. Dynamic treatment regimes formalize this through a sequence of decision rules that map individual patient characteristics to a recommended treatment. Observational studies are commonly used for estimating dynamic treatment regimes due to the potentially prohibitive costs of conducting sequential multiple assignment randomized trials. However, estimating a dynamic treatment regime from observational data can lead to bias in the estimated regime due to unmeasured confounding. Sensitivity analyses are useful for assessing how robust the conclusions of the study are to a potential unmeasured confounder. A Monte Carlo sensitivity analysis is a probabilistic approach that involves positing and sampling from distributions for the parameters governing the bias. We propose a method for performing a Monte Carlo sensitivity analysis of the bias due to unmeasured confounding in the estimation of dynamic treatment regimes. We demonstrate the performance of the proposed procedure with a simulation study and apply it to an observational study examining tailoring the use of antidepressant medication for reducing symptoms of depression using data from Kaiser Permanente Washington.
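
The abstract does not give the bias model used; the sketch below shows only the generic shape of a Monte Carlo sensitivity analysis, in which bias parameters for a hypothetical unmeasured binary confounder are repeatedly drawn from priors and a simple bias correction is applied to a point estimate. The correction formula, the priors, and the illustrative effect value are assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed (possibly confounded) treatment-effect estimate on a risk-difference scale.
observed_effect = 0.12          # illustrative value
n_draws = 10_000

adjusted = np.empty(n_draws)
for b in range(n_draws):
    # Priors on bias parameters for a hypothetical binary unmeasured confounder U:
    # its prevalence in each treatment group and its effect on the outcome.
    p_u_treated = rng.beta(2, 4)
    p_u_control = rng.beta(2, 4)
    effect_of_u = rng.normal(0.10, 0.05)
    # Simple additive bias correction (an assumption, for illustration only).
    bias = effect_of_u * (p_u_treated - p_u_control)
    adjusted[b] = observed_effect - bias

print("median bias-adjusted effect:", np.median(adjusted))
print("95% simulation interval:", np.percentile(adjusted, [2.5, 97.5]))
```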

4.
SUMMARY: SScore is an R package that facilitates the comparison of gene expression between Affymetrix GeneChips using the S-score algorithm. The S-score algorithm uses probe level data directly to assess differences in gene expression, without requiring a preliminary separate step of probe set expression summary estimation. Therefore, the algorithm avoids introduction of error associated with the expression summary estimation process and has been demonstrated to improve the accuracy of identifying differentially expressed genes. The S-score produces accurate results even when few or no replicates are available. AVAILABILITY: The R package SScore is available from Bioconductor at http://www.bioconductor.org

5.
The field of precision medicine aims to tailor treatment based on patient-specific factors in a reproducible way. To this end, estimating an optimal individualized treatment regime (ITR) that recommends treatment decisions based on patient characteristics to maximize the mean of a prespecified outcome is of particular interest. Several methods have been proposed for estimating an optimal ITR from clinical trial data in the parallel group setting where each subject is randomized to a single intervention. However, little work has been done in the area of estimating the optimal ITR from crossover study designs. Such designs naturally lend themselves to precision medicine since they allow for observing the response to multiple treatments for each patient. In this paper, we introduce a method for estimating the optimal ITR using data from a 2 × 2 crossover study with or without carryover effects. The proposed method is similar to policy search methods such as outcome weighted learning; however, we take advantage of the crossover design by using the difference in responses under each treatment as the observed reward. We establish Fisher and global consistency, present numerical experiments, and analyze data from a feeding trial to demonstrate the improved performance of the proposed method compared to standard methods for a parallel study design.
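
The estimator itself is not reproduced in the abstract; the sketch below only illustrates the core idea it describes, treating the within-subject difference in responses as the reward in an outcome-weighted-learning-style weighted classification. The simulated data and the use of a weighted linear SVM are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))                       # baseline covariates
# Simulated responses under treatments A and B for each subject; a crossover
# design observes both. Larger outcomes are assumed better.
y_a = X[:, 0] + rng.normal(scale=0.5, size=n)
y_b = -X[:, 0] + rng.normal(scale=0.5, size=n)

diff = y_a - y_b                                  # within-subject reward difference
labels = np.where(diff > 0, 1, -1)                # which treatment looked better
weights = np.abs(diff)                            # weight by how much better

# Weighted classification: the sign of the fitted decision rule is the estimated ITR.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, labels, sample_weight=weights)
recommend_a = clf.predict(X) == 1
print("proportion recommended treatment A:", recommend_a.mean())
```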

6.
In the past several years, oligonucleotide microarrays have emerged as a widely used tool for the simultaneous, unbiased measurement of expression levels for thousands of genes. Several challenges exist in successfully utilizing this biotechnology; principal among these is analysis of microarray data. An experiment to measure differential gene expression can comprise a dozen microarrays, each consisting of over a hundred thousand data points. Previously, we have described the use of a novel algorithm for analyzing oligonucleotide microarrays and assessing changes in gene expression. This algorithm describes changes in expression in terms of the statistical significance (S-score) of change, which combines signals detected by multiple probe pairs according to an error model characteristic of oligonucleotide arrays. Software is available that simplifies the application of this algorithm to the analysis of oligonucleotide microarray data. The application of this method to problems of the central nervous system is discussed.

7.
Viral capsids are composed of multiple copies of one or a few chemically distinct capsid proteins and are mostly stabilized by inter-subunit protein-protein interactions. There have been efforts to identify and analyze these protein-protein interactions, in terms of their extent and similarity, between the subunit interfaces related by quasi- and icosahedral symmetry. Here, we describe a new method to map quaternary interactions in spherical virus capsids onto polar angle space with respect to the icosahedral symmetry axes using azimuthal orthographic diagrams. This approach enables one to map the nonredundant interactions in a spherical virus capsid, irrespective of its size or triangulation number (T), onto the reference icosahedral asymmetric unit space. The resultant diagrams represent characteristic fingerprints of quaternary interactions of the respective capsids. Hence, they can be used as road maps of the protein-protein interactions to visualize the distribution and the density of the interactions. In addition, unlike the previous studies, the fingerprints of different capsids, when represented in a matrix form, can be compared with one another to quantitatively evaluate the similarity (S-score) in the subunit environments and the associated protein-protein interactions. The S-score selectively distinguishes the similarity, or lack of it, in the locations of the quaternary interactions as opposed to other well-known structural similarity metrics (e.g., RMSD, TM-score). Application of this method to a subset of T = 1 and T = 3 capsids suggests that S-score values range between 1 and 0.6 for capsids that belong to the same virus family/genus; 0.6-0.3 for capsids from different families with the same T-number and similar subunit fold; and <0.3 for comparisons of the dissimilar capsids that display different quaternary architectures (T-numbers). Finally, the sequence-conserved interface residues within a virus family whose spatial locations are also conserved have been hypothesized to be the essential residues for self-assembly of the member virus capsids.
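
The exact form of the published S-score is not given in the abstract; as a purely illustrative stand-in, the snippet below compares two binary quaternary-interaction fingerprint matrices (interactions binned by polar-angle cells) with a Tanimoto-style overlap, which captures the idea of scoring whether interactions fall at similar locations. The binning scheme and the overlap measure are assumptions.

```python
import numpy as np

def fingerprint_overlap(f1, f2):
    """Tanimoto-style overlap of two binary fingerprint matrices of equal shape.
    An illustrative similarity measure, not the published S-score."""
    f1, f2 = f1.astype(bool), f2.astype(bool)
    both = np.logical_and(f1, f2).sum()
    either = np.logical_or(f1, f2).sum()
    return both / either if either else 1.0

# Toy fingerprints: rows/columns index polar-angle bins in the icosahedral
# asymmetric unit; 1 marks a bin containing a subunit-subunit contact.
a = np.zeros((10, 10), dtype=int); a[2, 3] = a[5, 5] = a[7, 1] = 1
b = np.zeros((10, 10), dtype=int); b[2, 3] = b[5, 5] = b[8, 2] = 1
print("overlap score:", fingerprint_overlap(a, b))
```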

8.
Recent statistical methodology for precision medicine has focused on either identification of subgroups with enhanced treatment effects or estimating optimal treatment decision rules so that treatment is allocated in a way that maximizes, on average, predefined patient outcomes. Less attention has been given to subgroup testing, which involves evaluation of whether at least a subgroup of the population benefits from an investigative treatment, compared to some control or standard of care. In this work, we propose a general framework for testing for the existence of a subgroup with enhanced treatment effects based on the difference of the estimated value functions under an estimated optimal treatment regime and a fixed regime that assigns everyone to the same treatment. Our proposed test does not require specification of the parametric form of the subgroup and allows heterogeneous treatment effects within the subgroup. The test applies to cases when the outcome of interest is either a time-to-event or an (uncensored) scalar, and is valid at the exceptional law. To demonstrate the empirical performance of the proposed test, we study the type I error and power of the test statistics in simulations and also apply our test to data from a Phase III trial in patients with hematological malignancies.
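
Schematically, the test contrasts the estimated value of the estimated optimal regime with that of a fixed one-size-fits-all regime; the notation below is assumed for illustration and is not taken from the paper.

```latex
% Schematic form of the test statistic (notation assumed, not from the paper).
% \hat{V}(\cdot) is an estimated value function (mean outcome, or a survival
% functional for time-to-event endpoints), \hat{d}^{opt} the estimated optimal
% regime, and d_{fix} a fixed regime assigning everyone the same treatment.
\[
  T_n \;=\; \sqrt{n}\,\bigl\{\hat{V}(\hat{d}^{\mathrm{opt}}) - \hat{V}(d_{\mathrm{fix}})\bigr\},
\]
% with large values of T_n indicating that some subgroup has an enhanced
% treatment effect; the null distribution is nonstandard at the exceptional law
% and must be handled accordingly.
```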

9.
Okosun KO, Ouifki R, Marcus N. BioSystems 2011;106(2-3):136-145
We derive and analyse a deterministic model for the transmission of malaria disease with mass action form of infection. Firstly, we calculate the basic reproduction number, R0, and investigate the existence and stability of equilibria. The system is found to exhibit backward bifurcation. The implication of this occurrence is that the classical epidemiological requirement for effective eradication of malaria, R0 < 1, is no longer sufficient, although still necessary. Secondly, by using optimal control theory we derive the conditions under which it is optimal to eradicate the disease and examine the impact of a possible combined vaccination and treatment strategy on the disease transmission. When eradication is impossible, we derive the necessary conditions for optimal control of the disease using Pontryagin's Maximum Principle. The results obtained from the numerical simulations of the model show that vaccination combined with an effective treatment regime would reduce the spread of the disease appreciably.
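
The model equations are not reproduced in the abstract; the block below writes down a generic mass-action host-vector malaria model of the kind being analysed, purely to fix ideas. The compartments, parameters, and notation are illustrative assumptions, not the authors' exact system.

```latex
% A generic mass-action host-vector malaria model (illustrative notation only).
% S_h, I_h: susceptible and infectious humans; S_v, I_v: susceptible and
% infectious mosquitoes.
\[
\begin{aligned}
  \dot{S}_h &= \Lambda_h - \beta_h S_h I_v - \mu_h S_h + \gamma I_h, \\
  \dot{I}_h &= \beta_h S_h I_v - (\mu_h + \delta + \gamma) I_h, \\
  \dot{S}_v &= \Lambda_v - \beta_v S_v I_h - \mu_v S_v, \\
  \dot{I}_v &= \beta_v S_v I_h - \mu_v I_v,
\end{aligned}
\]
% R_0 follows from the next-generation matrix; backward bifurcation means a
% stable endemic equilibrium can coexist with the disease-free equilibrium
% even when R_0 < 1, so R_0 < 1 alone does not guarantee eradication.
```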

10.
11.
Spores of the strain NCIB 8122 of Bacillus cereus have been depleted of coats by treatment with 0.1% sodium dodecyl sulfate--200 mM 2-mercaptoethanol--0.5 M NaCl (pH 9.6). The coat-depleted spores did not show any decrease in viability, heat resistance, refractility, dipicolinic acid content, or specific activities of several protoplastic enzymes. The germinative response of the coat-depleted spores to adenosine and several analogues thereof was found qualitatively similar to that obtained with intact spores. However, germination kinetics appeared to be affected by coat removal, since germination rate measured as loss of refractility was eight times slower even at inducer concentrations 10-fold higher than those required to promote optimal germination response of intact spores. Loss of heat resistance, on the other hand, was hardly affected by coat removal. These results suggest that, even though spore coats are not essential for the triggering reaction, they are required for a rapid evolution of the later events in the germination process.

12.
The Mascot score (M-score) is one of the conventional validity measures in database identification of peptides and proteins by MS/MS data. Although tremendously useful, the M-score has a number of limitations. For the same MS/MS data, the M-score may change if the protein database is expanded. A low M-score may not necessarily indicate a poor match but rather poor MS/MS quality. In addition, the M-score does not fully utilize the advantage of combined use of the complementary fragmentation techniques collisionally activated dissociation (CAD) and electron capture dissociation (ECD). To address these issues, a new database-independent scoring method (S-score) was designed, based on the maximum length of the peptide sequence tag provided by the combined CAD and ECD data. The quality of MS/MS spectra assessed by the S-score allows poor data (39% of all MS/MS spectra) to be filtered out before the database search, speeding up the data analysis and eliminating a major source of false positive identifications. Spectra with below-threshold M-scores (poor matches) but high S-scores are validated. Spectra with zero M-score (no database match) but high S-score are classified as belonging to modified sequences. As an extension of the S-score, an extremely reliable sequence tag was developed based on complementary fragments simultaneously appearing in CAD and ECD spectra. Comparison of this tag with the database-derived sequence gives the most reliable peptide identification validation to date. The combined use of M- and S-scoring provides positive sequence identification from >25% of all MS/MS data, a 40% improvement over traditional M-scoring performed on the same Fourier transform MS instrumentation. The number of proteins reliably identified from Escherichia coli cell lysate hereby increased by 29% compared with the traditional M-score approach. Finally, S-scoring provides a quantitative measure of the quality of fragmentation techniques such as the minimum abundance of the precursor ion, the MS/MS of which gives the threshold S-score value of 2.
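
Details of the tag construction are not in the abstract; the sketch below illustrates the basic ingredient, the length of the longest consecutive sequence tag readable from a combined set of fragment-ion masses, using hypothetical fragment masses, a small residue-mass table, and a simple mass-difference lookup. It is a toy version, not the published scoring procedure.

```python
# Illustrative computation of the longest consecutive sequence tag readable
# from a sorted list of fragment-ion masses (e.g., pooled CAD + ECD ion series).
# Masses and tolerance are hypothetical.
MONO = {  # monoisotopic residue masses (Da), a representative subset
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259,
}

def longest_tag(fragment_masses, tol=0.02):
    """Length of the longest run of consecutive mass differences that each
    match a residue mass within `tol` Da."""
    ms = sorted(fragment_masses)
    best = run = 0
    for lo, hi in zip(ms, ms[1:]):
        gap = hi - lo
        if any(abs(gap - m) <= tol for m in MONO.values()):
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

# Toy ladder spelling ...A-G-S... plus one unmatched gap.
masses = [300.10, 371.14, 428.16, 515.19, 700.00]
print("longest sequence tag:", longest_tag(masses))  # -> 3
```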

13.
Many estimators of the average effect of a treatment on an outcome require estimation of the propensity score, the outcome regression, or both. It is often beneficial to utilize flexible techniques, such as semiparametric regression or machine learning, to estimate these quantities. However, optimal estimation of these regressions does not necessarily lead to optimal estimation of the average treatment effect, particularly in settings with strong instrumental variables. A recent proposal addressed these issues via the outcome-adaptive lasso, a penalized regression technique for estimating the propensity score that seeks to minimize the impact of instrumental variables on treatment effect estimators. However, a notable limitation of this approach is that its application is restricted to parametric models. We propose a more flexible alternative that we call the outcome highly adaptive lasso. We discuss the large sample theory for this estimator and propose closed-form confidence intervals based on the proposed estimator. We show via simulation that our method offers benefits over several popular approaches.
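
The proposed outcome highly adaptive lasso is not sketched here; for orientation, the snippet below illustrates the earlier, parametric outcome-adaptive lasso idea that it generalizes: covariates are penalized in the propensity model in inverse proportion to their estimated association with the outcome, so that pure instruments are shrunk away. The simulated data, the rescaling trick, and all tuning settings are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n, p = 1000, 6
X = rng.normal(size=(n, p))
# X[:, 0] is a pure instrument: it affects treatment but not the outcome.
A = rng.binomial(1, 1 / (1 + np.exp(-(1.5 * X[:, 0] + 0.5 * X[:, 1]))))
Y = 1.0 * A + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=n)

# Step 1: outcome regression gives per-covariate relevance weights.
beta = LinearRegression().fit(np.column_stack([A, X]), Y).coef_[1:]
weights = np.abs(beta) + 1e-6            # small floor to avoid division by zero

# Step 2: adaptive L1 penalty on the propensity model, via the usual rescaling
# trick: scaling column j by w_j makes a uniform L1 penalty act like a penalty
# proportional to 1 / w_j on that coefficient.
ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ps_model.fit(X * weights, A)
print("selected propensity covariates:", np.nonzero(ps_model.coef_.ravel())[0])
# The instrument X[:, 0] (outcome coefficient ~0) tends to be dropped.
```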

14.
1. Animals foraging for resources are under a variety of selective pressures, and separate optimality models have been developed predicting the optimal reproductive strategies they should adopt. 2. In most cases, the proximate behavioural mechanisms adopted to achieve such optimality goals have been identified. This is the case, for example, for optimal patch time and sex allocation in insect parasitoids. However, behaviours modelled within this framework have mainly been studied separately, even though real animals have to optimize some behaviours simultaneously. 3. For this reason, proximate behavioural rules should ideally be designed to attain several goals simultaneously. Despite their importance, such multi-objective proximate rules remain to be discovered. 4. Based on experiments on insect parasitoids that simultaneously examine their optimal patch time and sex allocation strategies, it is shown here that animals can adopt multi-objective behavioural mechanisms that appear consistent with the two optimal goals simultaneously. 5. Results of computer simulations demonstrate that these behavioural mechanisms are indeed consistent with optimal reproductive strategies and have thus most likely been selected over the course of evolutionary time.

15.
16.
Intracellular delivery of nucleic acids to mammalian cells using polyplex nanoparticles (NPs) remains a challenge both in vitro and in vivo, with transfections often suffering from variable efficacy. To improve the reproducibility and efficacy of in vitro transfections using the next-generation polyplex transfection materials poly(beta-amino ester)s (PBAEs), the influence of multiple variables in the preparation of these NPs on their transfection efficacy was explored. The results indicate that even though PBAE/pDNA polyplex NPs are formed by the self-assembly of polyelectrolytes, their transfection is not affected by the manner in which the components are mixed, facilitating self-assembly in a single step, although a self-assembly time of 5–20 min is optimal. In addition, even though the biomaterials are biodegradable in water, their efficacy is not affected by up to eight freeze-thaw cycles of the polymer. Nucleic acid-complexed polymer in polyplex nanoparticle form was also found to be more stable than free polymer. Finally, by exploring multiple buffer systems, divalent-cation magnesium or calcium acetate buffers at pH 5.0 were identified as optimal for transfection using these polymeric materials, boosting transfection severalfold compared with monovalent cations. Together, these results can improve the reproducibility and efficacy of PBAE and similar polyplex nanoparticle transfections and improve the robustness of using these biomaterials for bioengineering and biotechnology applications.

17.
Process Biochemistry 2007;42(8):1200-1210
A novel nonlinear biological batch process monitoring and fault identification approach based on kernel Fisher discriminant analysis (kernel FDA) is proposed. This method has a powerful ability to deal with nonlinear data and does not need to predict future observations of the variables, making it more sensitive for fault detection. In order to improve the monitoring performance, the variable trajectories of the batch processes are separated into several blocks. Data in the original space are then mapped into a high-dimensional feature space via a nonlinear kernel function, and the optimal kernel Fisher feature vector and discriminant vector are extracted to perform process monitoring and fault identification. The key to the proposed approach is to calculate, for each block, the distance between the new batch and the reference batch after projection onto the optimal kernel Fisher discriminant vector; comparing this distance with a predefined threshold determines whether the batch is normal or abnormal. The degree of similarity between the current discriminant vector and the optimal discriminant vectors of faults in the historical data set is used for fault diagnosis. The proposed method is applied to a fed-batch penicillin fermentation simulator benchmark and shown to capture nonlinear relationships among process variables effectively, outperforming the MPCA approach.
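
The authors' block-wise formulation is not reproduced here; the snippet below is a minimal two-class kernel Fisher discriminant (RBF kernel) of the general kind described, used to project new batch observations onto the discriminant direction and flag batches whose projection falls far from the normal-operation reference. The kernel width, regularization, toy data, and threshold rule are all assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf_kernel(A, B, gamma=0.1):
    return np.exp(-gamma * cdist(A, B, "sqeuclidean"))

def kernel_fda_direction(X, y, gamma=0.1, reg=1e-3):
    """Two-class kernel Fisher discriminant: returns alpha so that a new
    point x projects to rbf_kernel([x], X) @ alpha."""
    K = rbf_kernel(X, X, gamma)
    idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    M0, M1 = K[:, idx0].mean(axis=1), K[:, idx1].mean(axis=1)
    N = np.zeros_like(K)
    for idx in (idx0, idx1):
        Kj = K[:, idx]
        H = np.eye(len(idx)) - np.ones((len(idx), len(idx))) / len(idx)
        N += Kj @ H @ Kj.T                       # within-class scatter in feature space
    return np.linalg.solve(N + reg * np.eye(len(X)), M1 - M0)

# Toy "normal" (y = 0) and "fault" (y = 1) batch observations.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.repeat([0, 1], 50)
alpha = kernel_fda_direction(X, y)

proj_normal = rbf_kernel(X[y == 0], X) @ alpha
x_new = rng.normal(2, 1, (1, 4))                            # a new, fault-like batch
proj_new = rbf_kernel(x_new, X) @ alpha
threshold = np.abs(proj_normal - proj_normal.mean()).max()  # crude control limit
print("fault flagged:", abs(proj_new[0] - proj_normal.mean()) > threshold)
```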

18.
In randomized clinical trials involving survival time, a challenge that arises frequently, for example, in cancer studies (Manegold, Symanowski, Gatzemeier, Reck, von Pawel, Kortsik, Nackaerts, Lianes and Vogelzang, 2005. Second-line (post-study) chemotherapy received by patients treated in the phase III trial of pemetrexed plus cisplatin versus cisplatin alone in malignant pleural mesothelioma. Annals of Oncology 16, 923-927), is that subjects may initiate secondary treatments during the follow-up. The marginal structural Cox model and the method of inverse probability of treatment weighting (IPTW) have been proposed, originally for observational studies, to make causal inference on time-dependent treatments. In this paper, we adopt the marginal structural Cox model and propose an inferential method that improves the efficiency of the usual IPTW method by tailoring it to the setting of randomized clinical trials. The improvement in efficiency does not depend on any additional assumptions other than those required by the IPTW method, which is achieved by exploiting the knowledge that the study treatment is independent of baseline covariates due to randomization. The finite-sample performance of the proposed method is demonstrated via simulations and by application to data from a cancer clinical trial.
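
The authors' efficiency-improved estimator is not shown here; the sketch below only illustrates the baseline IPTW approach it builds on: stabilized inverse-probability weights for a secondary treatment, fed into a weighted Cox fit. For simplicity it treats the secondary treatment as a point exposure rather than a time-dependent one, and the simulated data, column names, and weight model are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 800
df = pd.DataFrame({
    "x1": rng.normal(size=n),                       # covariate driving secondary treatment
    "arm": rng.binomial(1, 0.5, size=n),            # randomized study treatment
})
# Hypothetical secondary treatment initiated during follow-up, influenced by x1.
df["secondary"] = rng.binomial(1, 1 / (1 + np.exp(-df["x1"])))
df["time"] = rng.exponential(scale=np.exp(-0.5 * df["arm"] - 0.3 * df["secondary"]))
df["event"] = rng.binomial(1, 0.8, size=n)

# Stabilized inverse-probability weights for the secondary treatment
# (a simplified point-treatment version of the time-dependent weighting).
ps = LogisticRegression().fit(df[["x1"]], df["secondary"]).predict_proba(df[["x1"]])[:, 1]
marginal = df["secondary"].mean()
df["w"] = np.where(df["secondary"] == 1, marginal / ps, (1 - marginal) / (1 - ps))

# Weighted Cox fit; robust (sandwich) errors because weighting induces correlation.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "arm", "secondary", "w"]],
        duration_col="time", event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```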

19.
Identifying drug-drug interactions (DDIs) is a major challenge in drug development. Previous attempts have established formal approaches for pharmacokinetic (PK) DDIs, but there is not a feasible solution for pharmacodynamic (PD) DDIs because the endpoint is often a serious adverse event rather than a measurable change in drug concentration. Here, we developed a metric, the "S-score," that measures the strength of network connection between drug targets to predict PD DDIs. Utilizing known PD DDIs as gold standard positives (GSPs), we observed a significant correlation between the S-score and the likelihood that a PD DDI occurs. Our prediction was robust and surpassed existing methods, as validated by two independent sets of GSPs. Analysis of clinical side-effect data suggested that drugs with predicted DDIs have similar side effects. We further incorporated this clinical side-effect evidence with the S-score to increase the prediction specificity and sensitivity through a Bayesian probabilistic model. We have predicted 9,626 potential PD DDIs at an accuracy of 82% and a recall of 62%. Importantly, our algorithm provided opportunities for better understanding the potential molecular mechanisms or physiological effects underlying DDIs, as illustrated by the case studies.
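
The exact S-score definition is not given in the abstract; as an illustrative stand-in for a "strength of network connection between drug targets," the snippet below scores a drug pair by the mean inverse shortest-path length between their target sets on a protein-protein interaction graph. The toy graph and the scoring function are assumptions, not the published metric.

```python
import itertools
import networkx as nx

# Toy protein-protein interaction network.
G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P5"), ("P2", "P6")])

def connection_score(graph, targets_a, targets_b):
    """Mean inverse shortest-path length between two drugs' target sets.
    An illustrative proxy for target-network connectedness, not the published S-score."""
    scores = []
    for u, v in itertools.product(targets_a, targets_b):
        if u == v:
            scores.append(1.0)
        elif nx.has_path(graph, u, v):
            scores.append(1.0 / nx.shortest_path_length(graph, u, v))
        else:
            scores.append(0.0)
    return sum(scores) / len(scores)

drug_x_targets = {"P1", "P2"}
drug_y_targets = {"P3", "P6"}
print("connection score:", connection_score(G, drug_x_targets, drug_y_targets))
```

Known PD DDI pairs could then be used to check whether higher scores are enriched among interacting pairs, in the spirit of the validation described.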

20.
Psychotherapies and cultures cannot be made experimentally independent because important input, process, and outcome variables essentially involve participants' meanings, emics, which are not naturally invariant across cultures. Factor analysis can be helpful in describing the analogues of a specific psychotherapy in several cultures once relevant emic variables have been developed for each culture. The special problems of cross-cultural research complicate the usual problems of psychotherapy research, those of defining outcome and specific therapies and of measuring the response function of outcome to the various amounts of the therapy. However, the special problems of cross-cultural research, of meaning variation, also exist in intracultural psychotherapy research, though they have received little notice.
