Similar articles
 Found 20 similar articles (search time: 31 ms)
1.
Rigorous statistical evaluation of the predictive value of a novel biomarker is critical before it is adopted into routine care. It is also important to identify factors that influence a biomarker's performance in order to determine the conditions under which it performs best. We propose a covariate-specific time-dependent positive predictive value curve to quantify the predictive accuracy of a prognostic marker measured on a continuous scale with a censored failure-time outcome. The covariate effect is accommodated within a semiparametric regression framework. In particular, we adopt a smoothed survival-time regression technique (Dabrowska, 1997, The Annals of Statistics 25, 1510–1540) to account for situations where the risk of disease occurrence and progression is likely to change over time. In addition, we provide asymptotic distribution theory and resampling-based procedures for making statistical inference on the covariate-specific positive predictive values. We illustrate our approach with numerical studies and a dataset from a prostate cancer study.
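A hypothetical sketch of the basic quantity involved: an empirical positive predictive value curve PPV(c) = P(outcome | marker > c) for a continuous marker. The covariate-specific, time-dependent machinery and the handling of censoring described in the abstract are not reproduced; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
marker = rng.normal(size=n)
# Outcome generated so that risk increases with the marker (toy model).
outcome = rng.random(n) < 1.0 / (1.0 + np.exp(-(marker - 0.5)))

def ppv_curve(y, d, thresholds):
    """Empirical PPV(c): proportion with the outcome among subjects with y > c."""
    return np.array([d[y > c].mean() for c in thresholds])

thresholds = np.quantile(marker, [0.1, 0.5, 0.9])
ppv = ppv_curve(marker, outcome, thresholds)
```

Because risk rises with the marker in this toy model, the PPV curve increases in the threshold c.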

2.
For medical decision making and patient information, predictions of future status variables play an important role. Risk prediction models can be derived with many different statistical approaches. To compare them, measures of predictive performance are derived from ROC methodology and from probability forecasting theory. These tools can be applied to assess single markers, multivariable regression models, and complex model selection algorithms. This article provides a systematic review of the modern way of assessing risk prediction models. Particular attention is paid to proper benchmarks and resampling techniques, which are important for the interpretation of measured performance. All methods are illustrated with data from a clinical study in head and neck cancer patients.
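A toy sketch of one such performance measure from probability forecasting theory, the Brier score, compared against the natural benchmark of always predicting the overall event rate. All data are simulated, not from the head and neck cancer study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
risk = rng.uniform(0.05, 0.95, n)                # hypothetical predicted risks
outcome = (rng.random(n) < risk).astype(float)   # outcomes drawn from those risks

def brier(pred, obs):
    """Mean squared difference between predicted probability and outcome."""
    return float(np.mean((pred - obs) ** 2))

bs_model = brier(risk, outcome)
bs_null = brier(np.full(n, outcome.mean()), outcome)  # benchmark: constant prevalence
```

An informative model should score below the prevalence-only benchmark; the gap between the two is one way to express the model's added predictive value.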

3.
An interpretation for the ROC curve and inference using GLM procedures   Cited: 7 (self-citations: 0, by others: 7)
Pepe MS. Biometrics 2000;56(2):352–359
The accuracy of a medical diagnostic test is often summarized in a receiver operating characteristic (ROC) curve. This paper puts forth an interpretation for each point on the ROC curve as being a conditional probability of a test result from a random diseased subject exceeding that from a random nondiseased subject. This interpretation gives rise to new methods for making inference about ROC curves. It is shown that inference can be achieved with binary regression techniques applied to indicator variables constructed from pairs of test results, one component of the pair being from a diseased subject and the other from a nondiseased subject. Within the generalized linear model (GLM) binary regression framework, ROC curves can be estimated, and we highlight a new semiparametric estimator. Covariate effects can also be evaluated with the GLM models. The methodology is applied to a pancreatic cancer dataset where we use the regression framework to compare two different serum biomarkers. Asymptotic distribution theory is developed to facilitate inference and to provide insight into factors influencing variability of estimated model parameters.
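The pairing construction can be sketched directly: for every (diseased, nondiseased) pair of test results, form the indicator that the diseased subject's result is larger; the mean of these indicators is the empirical AUC. Marker values below are simulated, not from the pancreatic cancer data, and the GLM regression step is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
y_diseased = rng.normal(1.0, 1.0, 150)     # hypothetical marker, diseased subjects
y_nondiseased = rng.normal(0.0, 1.0, 200)  # hypothetical marker, nondiseased

# Indicator matrix over all (diseased, nondiseased) pairs; ties count one half.
ind = (y_diseased[:, None] > y_nondiseased[None, :]).astype(float)
ind += 0.5 * (y_diseased[:, None] == y_nondiseased[None, :])

auc = float(ind.mean())   # mean pair indicator = empirical AUC
```

In the paper's framework these pair indicators become the binary responses of a regression model, which is what allows covariate effects on the ROC curve to be estimated.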

4.
Semi-competing risks data include the time to a nonterminating event and the time to a terminating event, while competing risks data include the time to more than one terminating event. Our work is motivated by a prostate cancer study, which has one nonterminating event and two terminating events, with both semi-competing risks and competing risks present as well as two censoring times. In this paper, we propose a new multi-risks survival (MRS) model for this type of data. In addition, the proposed MRS model can accommodate noninformative right-censoring times for nonterminating and terminating events. Properties of the proposed MRS model are examined in detail. Theoretical and empirical results show that the estimates of the cumulative incidence function for a nonterminating event may be biased if the information on a terminating event is ignored. A Markov chain Monte Carlo sampling algorithm is also developed. Our methodology is further assessed using simulations and an analysis of real data from a prostate cancer study. As a result, a prostate-specific antigen velocity greater than 2.0 ng/mL per year and higher biopsy Gleason scores are positively associated with a shorter time to death due to prostate cancer.

5.
Clinical prediction models play a key role in risk stratification, therapy assignment and many other fields of medical decision making. Before they can enter clinical practice, their usefulness has to be demonstrated using systematic validation. Methods to assess their predictive performance have been proposed for continuous, binary, and time-to-event outcomes, but the literature on validation methods for discrete time-to-event models with competing risks is sparse. The present paper tries to fill this gap and proposes new methodology to quantify discrimination, calibration, and prediction error (PE) for discrete time-to-event outcomes in the presence of competing risks. In our case study, the goal was to predict the risk of ventilator-associated pneumonia (VAP) attributed to Pseudomonas aeruginosa in intensive care units (ICUs). Competing events are extubation, death, and VAP due to other bacteria. The aim of this application is to validate complex prediction models developed in previous work on more recently available validation data.

6.
Regular magnetic resonance imaging has been recommended for screening for silicone implant rupture. However, when its use as a screening test is critically examined, the evidence to support it appears to be lacking. For example, there is no conclusive evidence at this time that magnetic resonance imaging screening of asymptomatic women reduces patient morbidity. Furthermore, based on existing data, it is unclear whether the potential benefits of screening magnetic resonance imaging outweigh the risks and potential costs for the patient. In the face of this uncertainty, shared medical decision making can be recommended. For different women, underlying beliefs and values will sway the decision in different directions. By engaging a woman in the process of shared medical decision making, however, the plastic surgeon and patient can make a mutually agreeable choice that reflects the patient's individual values and health preferences.

7.
Semi-competing risks refer to the time-to-event analysis setting where the occurrence of a non-terminal event is subject to whether a terminal event has occurred, but not vice versa. Semi-competing risks arise in a broad range of clinical contexts, including studies of preeclampsia, a condition that may arise during pregnancy and for which delivery is a terminal event. Models that acknowledge semi-competing risks enable investigation of relationships between covariates and the joint timing of the outcomes, but methods for model selection and prediction of semi-competing risks in high dimensions are lacking. Moreover, in such settings researchers commonly analyze only a single or composite outcome, losing valuable information and limiting clinical utility; in the obstetric setting, this means ignoring valuable insight into the timing of delivery after preeclampsia onset. To address this gap, we propose a novel penalized estimation framework for frailty-based illness–death multi-state modeling of semi-competing risks. Our approach combines non-convex and structured fusion penalization, inducing global sparsity as well as parsimony across submodels. We perform estimation and model selection via a pathwise routine for non-convex optimization, and prove statistical error rate results in this setting. We present a simulation study investigating estimation error and model selection performance, and a comprehensive application of the method to joint risk modeling of preeclampsia and timing of delivery using pregnancy data from an electronic health record.

8.
The evolution of “informatics” technologies has the potential to generate massive databases, but the extent to which personalized medicine may be effectuated depends on the extent to which these rich databases may be utilized to advance understanding of the disease molecular profiles and ultimately integrated for treatment selection, necessitating robust methodology for dimension reduction. Yet, statistical methods proposed to address challenges arising with the high‐dimensionality of omics‐type data predominately rely on linear models and emphasize associations deriving from prognostic biomarkers. Existing methods are often limited for discovering predictive biomarkers that interact with treatment and fail to elucidate the predictive power of their resultant selection rules. In this article, we present a Bayesian predictive method for personalized treatment selection that is devised to integrate both the treatment predictive and disease prognostic characteristics of a particular patient's disease. The method appropriately characterizes the structural constraints inherent to prognostic and predictive biomarkers, and hence properly utilizes these complementary sources of information for treatment selection. The methodology is illustrated through a case study of lower grade glioma. Theoretical considerations are explored to demonstrate the manner in which treatment selection is impacted by prognostic features. Additionally, simulations based on an actual leukemia study are provided to ascertain the method's performance with respect to selection rules derived from competing methods.

9.
Anticipating how biodiversity will respond to climate change is challenged by the fact that climate variables affect individuals in competition with others, but interest lies at the scale of species and landscapes. By omitting the individual scale, models cannot accommodate the processes that determine future biodiversity. We demonstrate how individual-scale inference can be applied to the problem of anticipating vulnerability of species to climate. The approach places climate vulnerability in the context of competition for light and soil moisture. Sensitivities to climate and competition interactions aggregated from the individual tree scale provide estimates of which species are vulnerable to which variables in different habitats. Vulnerability is explored in terms of specific demographic responses (growth, fecundity and survival) and in terms of the synthetic response (the combination of demographic rates), termed climate tracking. These indices quantify risks for individuals in the context of their competitive environments. However, by aggregating in specific ways (over individuals, years, and other input variables), we provide ways to summarize and rank species in terms of their risks from climate change.

10.
MOTIVATION: Modern mass spectrometry allows the determination of proteomic fingerprints of body fluids like serum, saliva or urine. These measurements can be used in many medical applications in order to diagnose the current state or predict the evolution of a disease. Recent developments in machine learning allow one to exploit such datasets, characterized by small numbers of very high-dimensional samples. RESULTS: We propose a systematic approach based on decision tree ensemble methods, which is used to automatically determine proteomic biomarkers and predictive models. The approach is validated on two datasets of surface-enhanced laser desorption/ionization time of flight measurements, for the diagnosis of rheumatoid arthritis and inflammatory bowel diseases. The results suggest that the methodology can handle a broad class of similar problems.
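A toy sketch of the tree-ensemble idea on simulated "proteomic" data: bagged one-split decision stumps, with the frequency at which each feature is selected serving as a crude biomarker-importance score. The real SELDI-TOF datasets and full decision trees of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 20
X = rng.normal(size=(n, p))        # 20 hypothetical peak intensities
y = (X[:, 3] > 0).astype(int)      # only feature 3 carries the signal

def best_stump(X, y):
    """Index of the feature whose median split best separates the classes."""
    best_f, best_acc = 0, -1.0
    for f in range(X.shape[1]):
        pred = (X[:, f] > np.median(X[:, f])).astype(int)
        acc = max(float((pred == y).mean()), float((1 - pred == y).mean()))
        if acc > best_acc:
            best_f, best_acc = f, acc
    return best_f

# Bag 50 bootstrap replicates and count how often each feature is chosen.
importance = np.zeros(p)
for _ in range(50):
    idx = rng.integers(0, n, n)
    importance[best_stump(X[idx], y[idx])] += 1

top_feature = int(np.argmax(importance))
```

The selection count concentrates on the informative feature, which is the intuition behind using ensemble variable importances to nominate candidate biomarkers.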

11.
Schulte PA. Mutation Research 2005;592(1-2):155–163
Building on mechanistic information, much of molecular epidemiologic research has focused on validating biomarkers, that is, assessing their ability to accurately indicate exposure, effect, disease, or susceptibility. To be of use in surveillance, medical screening, or interventions, biomarkers must already be validated so that they can be used as outcomes or indicators that can serve a particular function. In surveillance, biomarkers can be used as indicators of hazard, exposure, disease, and population risk. However, to obtain rates for these measures, the population at risk will need to be assessed. In medical screening, biomarkers can serve as early indicators of disease in asymptomatic people. This allows for the identification of those who should receive diagnostic confirmation and early treatment. In intervention (which includes risk assessment and communication, risk management, and various prevention efforts), biomarkers can be used to assess the effectiveness of a prevention or control strategy as well as help determine whether the appropriate individuals are assigned to the correct intervention category. Biomarkers can be used to provide group and individual risk assessments that can be the basis for marshalling resources. Critical for using biomarkers in surveillance, medical screening, and intervention is the justification that the biomarkers can provide information not otherwise accessible by a less expensive and easier-to-obtain source of information, such as medical records, surveys, or vital statistics. The ability to use validated biomarkers in surveillance, medical screening, and intervention will depend on the extent to which a strategy for evidence-based procedures for biomarker knowledge transfer can be developed and implemented. This will require the interaction of researchers and decision-makers to collaborate on public health and medical issues.

12.
This article develops omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. Confidence bands for the difference and the ratio of two conditional cumulative incidence functions are also constructed. The omnibus test is formulated in terms of a test process given by a weighted difference of estimates of cumulative cause-specific hazard rates under Cox proportional hazards models. A simulation procedure is devised for sampling from the null distribution of the test process, leading to graphical and numerical techniques for detecting significant differences in the risks. The approach is applied to a cohort study of type-specific HIV infection rates.
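A minimal sketch of the Nelson-Aalen-type estimator of a cumulative cause-specific hazard that such test processes are built from. No Cox covariate adjustment is shown, and the data are simulated, not the HIV cohort data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
time = rng.exponential(1.0, n)                         # observed times (toy data)
status = rng.choice([0, 1, 2], n, p=[0.2, 0.5, 0.3])   # 0 censored, 1/2 = causes

def nelson_aalen(times, statuses, cause):
    """Cumulative cause-specific hazard: sum of jumps d / (number at risk)."""
    order = np.argsort(times)
    times, statuses = times[order], statuses[order]
    at_risk = np.arange(len(times), 0, -1)  # risk-set size just before each time
    jumps = (statuses == cause) / at_risk
    return times, np.cumsum(jumps)

grid, H1 = nelson_aalen(time, status, 1)
_, H2 = nelson_aalen(time, status, 2)
```

The omnibus test of the paper compares two such cumulative hazard estimates through a weighted difference evaluated along the time grid.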

13.
The Flexner Report highlighted the importance of teaching medical students to reason about uncertainty. The science of medical decision making seeks to explain how medical judgments and decisions ought ideally to be made, how they are actually made in practice, and how they can be improved, given the constraints of medical practice. The field considers both clinical decisions by or for individual patients and societal decisions designed to benefit the public. Despite the relevance of decision making to medical practice, it currently receives little formal attention in the U.S. medical school curriculum. This article suggests three roles for medical decision making in medical education. First, basic decision science would be a valuable prerequisite to medical training. Second, several decision-related competencies would be important outcomes of medical education; these include the physician's own decision skills, the ability to guide patients in shared decisions, and knowledge of health policy decisions at the societal level. Finally, decision making could serve as a unifying principle in the design of the medical curriculum, integrating other curricular content around the need to create physicians who are competent and caring decision makers.

14.
Cohort studies provide information on relative hazards and pure risks of disease. For rare outcomes, large cohorts are needed to have sufficient numbers of events, making it costly to obtain covariate information on all cohort members. We focus on nested case-control designs that are used to estimate relative hazard in the Cox regression model. In 1997, Langholz and Borgan showed that pure risk can also be estimated from nested case-control data. However, these approaches do not take advantage of some covariates that may be available on all cohort members. Researchers have used weight calibration to increase the efficiency of relative hazard estimates from case-cohort studies and nested case-control studies. Our objective is to extend weight calibration approaches to nested case-control designs to improve precision of estimates of relative hazards and pure risks. We show that calibrating sample weights additionally against follow-up times multiplied by relative hazards during the risk projection period improves estimates of pure risk. Efficiency improvements for relative hazards for variables that are available on the entire cohort also contribute to improved efficiency for pure risks. We develop explicit variance formulas for the weight-calibrated estimates. Simulations show how much precision is improved by calibration and confirm the validity of inference based on asymptotic normality. Examples are provided using data from the American Association of Retired Persons Diet and Health Cohort Study.
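The core weight-calibration step can be sketched as follows: adjust the design weights of the subsample so that weighted covariate totals match totals known for the full cohort (a linear/GREG calibration). The paper's further calibration against follow-up times multiplied by relative hazards is not reproduced; all data and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 5000
x_cohort = np.column_stack([np.ones(N), rng.normal(size=N)])  # known on everyone
totals = x_cohort.sum(axis=0)                                 # full-cohort totals

idx = rng.choice(N, 400, replace=False)   # hypothetical subsample of 400
x = x_cohort[idx]
d = np.full(400, N / 400.0)               # initial design weights

# Linear calibration: w_i = d_i * (1 + x_i' lam), with lam chosen so that
# the weighted subsample totals reproduce the cohort totals exactly.
A = (d[:, None] * x).T @ x
lam = np.linalg.solve(A, totals - (d[:, None] * x).sum(axis=0))
w = d * (1.0 + x @ lam)
```

By construction the calibrated weights reproduce the cohort totals exactly; the efficiency gain comes from the resulting reduction in weight variability for estimators that use these covariates.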

15.
Radiation is used in medicine to diagnose and treat disease, but it can also harm the body through burns or mutation, depending on whether the radiation is ionizing or nonionizing. Despite its vast applications in surgery, dermatology, and cosmetics, little is taught, and thus known, about non-ionizing radiation. This review article discusses the fundamentals of non-ionizing electromagnetic radiation. The main aim is to explain the different types of non-ionizing radiation in depth, equipping students and medical personnel with knowledge of its medical applications and exposing them to the variety of medical specializations that utilize it. The article discusses the physics, hazards, means of protection, and medical applications of each type of radiation separately: ultraviolet radiation, light (both visible light and laser), infrared radiation, microwaves, and extremely low frequency radiation. It presents these topics in a simple manner that avoids rigorous mathematics and physics, making them comprehensible for medical students. The development of new diagnostic and therapeutic approaches could also increase hazards to the body unless they are handled with precaution; if not adequately monitored, a significant health risk may be posed to potentially exposed employees. Hence, proper dosage should be used for non-ionizing radiation, which is only possible through an understanding of the risks and benefits gained by studying the physics and radiobiological effects of each individual type of radiation.

16.
We review a recent shift in conceptions of interoception and its relationship to hierarchical inference in the brain. The notion of interoceptive inference means that bodily states are regulated by autonomic reflexes that are enslaved by descending predictions from deep generative models of our internal and external milieu. This re-conceptualization illuminates several issues in cognitive and clinical neuroscience with implications for experiences of selfhood and emotion. We first contextualize interoception in terms of active (Bayesian) inference in the brain, highlighting its enactivist (embodied) aspects. We then consider the key role of uncertainty or precision and how this might translate into neuromodulation. We next examine the implications for understanding the functional anatomy of the emotional brain, surveying recent observations on agranular cortex. Finally, we turn to theoretical issues, namely, the role of interoception in shaping a sense of embodied self and feelings. We will draw links between physiological homoeostasis and allostasis, early cybernetic ideas of predictive control and hierarchical generative models in predictive processing. The explanatory scope of interoceptive inference ranges from explanations for autism and depression, through to consciousness. We offer a brief survey of these exciting developments. This article is part of the themed issue ‘Interoception beyond homeostasis: affect, cognition and mental health’.

17.
Tian L, Cai T, Wei LJ. Biometrics 2009;65(3):894–902
Suppose that we are interested in using new biological or clinical markers, in addition to the conventional markers, to improve prediction or diagnosis of a patient's clinical outcome. The incremental value of the new markers is typically assessed by averaging across patients in the entire study population. However, when measuring the new markers is costly or invasive, an overall improvement does not justify measuring them in all patients. A more practical strategy is to use a patient's conventional markers to decide whether the new markers are needed to improve prediction of his or her health outcomes. In this article, we propose inference procedures for the incremental values of new markers across various subgroups of patients classified by the conventional markers. The resulting point and interval estimates can be quite useful for medical decision makers seeking to balance the predictive or diagnostic value of new markers against their associated cost and risk. Our proposals are theoretically justified and illustrated empirically with two real examples.

18.
Temporal aspects have traditionally not been recognized adequately in life cycle assessment (LCA). The dynamic LCA model recently proposed offers a significant step forward in the dynamic assessment of global warming impacts. The results obtained with dynamic LCA are highly sensitive to the choice of a time horizon. Decision making between alternative systems can therefore be critical, because conclusions depend on the specific time horizon. In this article, we develop a decision‐making methodology based on the concept of time dominance. We introduce instantaneous and cumulative time dominance criteria to the dynamic LCA context and argue why the dominance of an alternative should also imply preference. Our approach allows for the rejection of certain alternatives without the determination of a specific time horizon. The number of decision‐relevant alternatives can thereby be reduced and the decision problem facilitated. We demonstrate our methodology by means of a case study of end‐of‐life alternatives for a wooden chair, taken from the original authors of dynamic LCA, and discuss the implications and limitations of the approach. The methodology based on time dominance criteria is supplementary to the dynamic LCA model, but does not substitute for it. The overall value of this article stretches beyond LCA onto more general assessments of global warming, for example, in policy, where the choice of a time horizon is equally significant.
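The cumulative time-dominance criterion can be sketched as checking whether one alternative's cumulative impact curve lies at or below another's at every time point; if so, the dominated alternative can be rejected without fixing a time horizon. The impact profiles below are invented, not taken from the wooden-chair case study.

```python
import numpy as np

years = np.arange(100)
# Hypothetical annual warming-impact profiles for three end-of-life options.
impact_a = np.exp(-years / 30.0)          # larger at first, decays quickly
impact_b = 0.8 * np.exp(-years / 80.0)    # smaller at first, lingers longer
impact_c = 0.5 * impact_a                 # strictly smaller than A at all times

def cumulatively_dominates(a, b):
    """True if a's cumulative impact is <= b's at every time point."""
    return bool(np.all(np.cumsum(a) <= np.cumsum(b)))

# A and B cross, so neither dominates: choosing between them would require
# fixing a time horizon.  C dominates A, so A can be rejected outright.
a_vs_b = cumulatively_dominates(impact_a, impact_b)
b_vs_a = cumulatively_dominates(impact_b, impact_a)
c_vs_a = cumulatively_dominates(impact_c, impact_a)
```

The crossing pair illustrates exactly the situation the article targets: dominance prunes the alternative set where it holds, and flags horizon-dependence where it does not.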

19.
National experts in the field of developmental toxicology were interviewed in order to elicit the principles, or rules-of-thumb, they use in determining if a compound or agent is likely to be a developmental hazard during pregnancy. Several levels of individual and cumulative consensus activity were carried out, resulting in consensus on 71 rules and partial consensus on an additional 24 of the 145 rules initially elicited. Rules could be divided generically into those affecting the expert's confidence in a piece of scientific evidence and those determining the weight of importance of that evidence in deciding about hazard identification. Topically, the rules also divided into those about the general nature or characteristics of a compound, animal studies testing for an effect of the compound, and human reports about the presence or absence of developmental effects associated with the compound. Several conclusions about the methodology include the following: 1) expert systems must be based on the knowledge of more than one expert; 2) considerable human effort is expended in evaluating the certainty of scientific evidence before combining the evidence for problem solving; 3) how experts use evidence of different degrees of uncertainty in their decisions is a major area that is yet to be determined and that may greatly affect subsequent efforts in artificial intelligence; and 4) knowledge elicitation by interview has limitations but is a workable methodology for medical decision making.

20.
We propose inference procedures for general factorial designs with time-to-event endpoints. As in additive Aalen models, null hypotheses are formulated in terms of cumulative hazards, and deviations are measured by quadratic forms in Nelson–Aalen-type integrals. Unlike existing approaches, this allows one to work without restrictive model assumptions such as proportional hazards. In particular, crossing survival or hazard curves can be detected without a significant loss of power. For a distribution-free application of the method, a permutation strategy is suggested. The resulting procedures' asymptotic validity is proven, and small-sample performance is analyzed in extensive simulations. The analysis of a dataset on asthma illustrates the applicability.
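The permutation strategy can be sketched on a simple two-sample statistic (absolute difference in mean observed times): under the null, group labels are exchangeable, so reshuffling them generates the reference distribution. The paper's actual statistic is a quadratic form in Nelson–Aalen-type integrals and handles censoring, which this toy, with simulated uncensored data, does not.

```python
import numpy as np

rng = np.random.default_rng(6)
t1 = rng.exponential(1.0, 60)   # hypothetical event times, group 1
t2 = rng.exponential(2.0, 60)   # group 2 survives longer on average

def stat(a, b):
    return abs(float(a.mean()) - float(b.mean()))

observed = stat(t1, t2)
pooled = np.concatenate([t1, t2])

# Permutation null distribution: shuffle group labels, recompute statistic.
perm = np.empty(2000)
for i in range(2000):
    s = rng.permutation(pooled)
    perm[i] = stat(s[:60], s[60:])

p_value = (1 + int(np.sum(perm >= observed))) / (1 + len(perm))
```

The "+1" in numerator and denominator gives the standard conservative permutation p-value, guaranteeing it never equals zero.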
