Similar Documents
20 similar documents found.
1.
2.
Ecological data often show temporal, spatial, hierarchical (random effects), or phylogenetic structure. Modern statistical approaches are increasingly accounting for such dependencies. However, when performing cross-validation, these structures are regularly ignored, resulting in serious underestimation of predictive error. One cause of the poor performance of uncorrected (random) cross-validation, noted often by modellers, is that dependence structures in the data persist as dependence structures in model residuals, violating the assumption of independence. Even more concerning, because often overlooked, is that structured data also provide ample opportunity for overfitting with non-causal predictors. This problem can persist even if remedies such as autoregressive models, generalized least squares, or mixed models are used. Block cross-validation, where data are split strategically rather than randomly, can address these issues. However, the blocking strategy must be carefully considered. Blocking in space, time, random effects or phylogenetic distance, while accounting for dependencies in the data, may also unwittingly induce extrapolations by restricting the ranges or combinations of predictor variables available for model training, thus overestimating interpolation errors. On the other hand, deliberate blocking in predictor space may also improve error estimates when extrapolation is the modelling goal. Here, we review the ecological literature on non-random and blocked cross-validation approaches. We also provide a series of simulations and case studies, in which we show that, for all instances tested, block cross-validation is nearly universally more appropriate than random cross-validation if the goal is predicting to new data or predictor space, or for selecting causal predictors. We recommend that block cross-validation be used wherever dependence structures exist in a dataset, even if no correlation structure is visible in the fitted model residuals, or if the fitted models account for such correlations.
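To make the blocked-versus-random contrast concrete, here is a minimal scikit-learn sketch (not the authors' code; the block labels, model, and data are illustrative placeholders):

```python
# Minimal sketch: random K-fold vs. block (grouped) cross-validation.
# Assumes each sample carries a 'block' label (e.g. a spatial cluster ID);
# all variable names and data are illustrative, not from the paper.
import numpy as np
from sklearn.model_selection import KFold, GroupKFold, cross_val_score
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, n_blocks = 500, 10
blocks = rng.integers(0, n_blocks, size=n)           # spatial block membership
X = rng.normal(size=(n, 5)) + blocks[:, None] * 0.5  # predictors share block structure
y = X[:, 0] + blocks * 0.5 + rng.normal(size=n)      # response inherits the structure

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Random CV: folds mix samples from the same block -> optimistic error estimate.
random_cv = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0),
                            scoring="neg_root_mean_squared_error")
# Block CV: whole blocks are held out together -> more honest predictive error.
block_cv = cross_val_score(model, X, y, cv=GroupKFold(5), groups=blocks,
                           scoring="neg_root_mean_squared_error")
print(f"random CV RMSE: {-random_cv.mean():.2f}  block CV RMSE: {-block_cv.mean():.2f}")
```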

3.
Background: There is increasing interest in applying image texture quantifiers to assess the intra-tumor heterogeneity observed in FDG-PET images of various cancers. Use of these quantifiers as prognostic indicators of disease outcome and/or treatment response has yielded inconsistent results. We study the general applicability of some well-established texture quantifiers to the image data unique to FDG-PET. Methods: We first created computer-simulated test images with statistical properties consistent with clinical image data for cancers of the uterine cervix. We specifically isolated second-order statistical effects from low-order effects and analyzed the resulting variation in common texture quantifiers in response to contrived image variations. We then analyzed the quantifiers computed for FIGO IIb cervical cancers via receiver operating characteristic (ROC) curves and via contingency table analysis of detrended quantifier values. Results: We found that image texture quantifiers depend strongly on low-order effects such as tumor volume and SUV distribution. When low-order effects are controlled, the image texture quantifiers tested were not able to discern the second-order effects alone. Furthermore, the results of clinical tumor heterogeneity studies might be tunable via the choice of patient population analyzed. Conclusion: Some image texture quantifiers are strongly affected by factors distinct from the second-order effects researchers ostensibly seek to assess via those quantifiers.
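For readers unfamiliar with second-order texture quantifiers, the following scikit-image sketch computes common GLCM features; the grey-level quantization and image are placeholders, and the paper's exact quantifiers may differ:

```python
# Sketch of common second-order (GLCM) texture quantifiers with scikit-image.
# The 8-level quantization and random image are illustrative stand-ins for a
# quantized SUV map, not the paper's actual settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
img = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)  # stand-in SUV map

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=8, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```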

4.
Purpose: This study aims to investigate the use of machine learning models for delivery error prediction in proton pencil beam scanning (PBS) delivery. Methods: A dataset of planned and delivered PBS spot parameters was generated from a set of 20 prostate patient treatments. Planned spot parameters (spot position, MU and energy) were extracted from the treatment planning system (TPS) for each beam. Delivered spot parameters were extracted from irradiation log-files for each beam delivery following treatment. The dataset was used as a training dataset for three machine learning models which were trained to predict delivered spot parameters based on planned parameters. K-fold cross-validation was employed for hyper-parameter tuning and model selection, with the mean absolute error (MAE) used as the model evaluation metric. The model with the lowest MAE was then selected to generate a predicted dose distribution for a test prostate patient within a commercial TPS. Results: Analysis of the spot position delivery error between planned and delivered values yielded standard deviations of 0.39 mm and 0.44 mm for x and y spot positions, respectively. Prediction error standard deviations of spot positions using the selected model were 0.22 mm and 0.11 mm for x and y spot positions, respectively. Finally, a three-way comparison of dose distributions and DVH values for select OARs indicates that the random-forest-predicted dose distribution within the test prostate patient was in closer agreement with the delivered dose distribution than the planned distribution. Conclusions: PBS delivery error can be accurately predicted using machine learning techniques.
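A hedged sketch of the model-selection step described above, using K-fold cross-validated MAE to choose among candidate regressors (the abstract names only random forest; the other candidates and all data here are stand-ins):

```python
# Sketch: select among candidate regressors by K-fold cross-validated MAE.
# Spot-parameter arrays are synthetic placeholders; the paper's actual
# models and hyperparameters are not specified here.
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))             # planned spot x, y, MU (placeholder)
y = X[:, 0] + 0.1 * rng.normal(size=2000)  # delivered spot x (placeholder)

candidates = {"ridge": Ridge(),
              "rf": RandomForestRegressor(n_estimators=100, random_state=0),
              "gbr": GradientBoostingRegressor(random_state=0)}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in candidates.items():
    mae = -cross_val_score(model, X, y, cv=cv,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.4f} mm")
```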

5.
The study of soil mean weight diameter (MWD), essential for sustainable soil management, has recently received much attention. As the estimation of MWD is challenging, labor-intensive, and time-consuming, there is a crucial need to develop a predictive estimation method that generates the information required for soil health assessment while saving the time and cost involved in soil analysis. Pedotransfer functions (PTFs) estimate 'difficult to measure', time-consuming parameters with the help of 'easy to measure' parameters. In the current study, an empirical PTF, multi-linear regression (MLR), and four machine learning-based PTFs, i.e., artificial neural network (ANN), support vector machine (SVM), classification and regression trees (CART), and random forest (RF), were used for mean weight diameter prediction in Karnal district of Haryana, India. A total of 121 soil samples from 0-15 and 15-30 cm soil depths were collected from seventeen villages of Nilokheri, Nissing, and Assandh blocks of Karnal district. Soil parameters such as bulk density (BD), fractal dimension (D), soil texture (i.e., sand, silt, and clay), organic carbon (OC), and glomalin content were used as the input variables. Two input combinations, i.e., one with texture data (dataset 1) and the other with fractal dimension data replacing texture (dataset 2), were used, and the complete dataset (121 samples) was divided into training and testing datasets in a 4:1 ratio. Model performance was evaluated by statistical parameters such as mean absolute error (MAE), mean absolute percentage error (MAPE), root mean square error (RMSE), normalized root mean square error (NRMSE), and determination coefficient (R2). The comparison results showed that including the fractal dimension in the input dataset improved the prediction capability of ANN, SVM, and RF. MLR and CART showed lower predictive ability than the other three approaches (i.e., ANN, SVM, and RF). In the training dataset, RMSE (mm) for the SVM model was 8.33% lower with D than with texture as the input, whereas in the testing dataset it was 16.67% lower. Because SVM is more flexible and effectively captures non-linear relationships, it performed better than the other models in predicting MWD. As seen in this study, the SVM model with input data D is the best in its class and has high potential for MWD prediction in the Karnal district of Haryana, India.
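A minimal sketch of an SVM-based PTF with the 4:1 split and several of the reported metrics; the soil features and MWD values are synthetic placeholders:

```python
# Sketch: SVM-based pedotransfer function for MWD with an 80/20 split and
# the study's style of evaluation metrics. Data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(3)
X = rng.normal(size=(121, 4))                          # e.g. BD, D, OC, glomalin
y = 1.0 + 0.3 * X[:, 1] + 0.1 * rng.normal(size=121)   # MWD (mm), synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ptf = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)
pred = ptf.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"MAE={mean_absolute_error(y_te, pred):.3f}  RMSE={rmse:.3f}  "
      f"NRMSE={rmse / y_te.mean():.3f}  R2={r2_score(y_te, pred):.3f}")
```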

6.
《植物生态学报》2016,40(2):102
Aims: Forest canopy closure is one of the essential factors in forest surveys and plays an important role in forest ecosystem management. It is therefore of great significance to study how to apply LiDAR (light detection and ranging) data efficiently in remote sensing estimation of forest canopy closure. LiDAR can acquire data quickly and accurately and can therefore provide training and validation data for estimating forest canopy closure at large spatial scales. It can compensate for the shortcomings (e.g. labor intensity, time consumption) of conventional ground surveys and provide a foundation for forest inventory. Methods: In this study, we estimated canopy closure of a temperate forest in the Genhe forest of the Da Hinggan Ling area, Nei Mongol, China, using LiDAR and LANDSAT ETM+ data. First, we calculated canopy closure from ALS (airborne laser scanning) high-density point cloud data. The ALS-estimated canopy closure was then used as training and validation data for modelling and inversion based on eight vegetation indices computed from LANDSAT ETM+ data. Three approaches, multi-variable stepwise regression (MSR), random forest (RF), and Cubist, were developed and tested to estimate canopy closure from these vegetation indices. Important findings: The validation results showed that the Cubist model yielded the highest accuracy of the three models (determination coefficient (R2) = 0.722, root mean square error (RMSE) = 0.126, relative root mean square error (rRMSE) = 0.209, estimation accuracy (EA) = 79.883%). The combination of LiDAR and LANDSAT ETM+ data showed great potential for accurately estimating the canopy closure of the temperate forest. However, the model's prediction capability needs to be further improved for application at larger spatial scales. More independent variables from other remotely sensed datasets, e.g. topographic data and texture information from high-resolution imagery, should be added to the model; such variables can help reduce the influence of the optical imagery, vegetation indices, terrain, shadow, and so on. Moreover, the accuracy of the LiDAR-derived canopy closure needs to be further validated in future studies.
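For reference, the validation metrics above can be computed as follows, assuming rRMSE = RMSE / mean(observed) and EA = (1 - rRMSE) * 100; the paper may define them differently:

```python
# Sketch of the validation metrics reported above. The rRMSE and EA
# definitions used here are assumptions; the paper's exact formulas may differ.
import numpy as np

def canopy_metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    r2 = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rrmse = rmse / obs.mean()
    return {"R2": r2, "RMSE": rmse, "rRMSE": rrmse, "EA%": (1 - rrmse) * 100}

print(canopy_metrics([0.6, 0.7, 0.5, 0.8], [0.58, 0.72, 0.55, 0.74]))
```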

7.
Background: The purpose of this study was to characterize pre-treatment non-contrast computed tomography (CT) and 18F-fluorodeoxyglucose positron emission tomography (PET) based radiomics signatures predictive of pathological response and clinical outcomes in rectal cancer patients treated with neoadjuvant chemoradiotherapy (NACRT). Materials and methods: An exploratory analysis was performed using a pre-treatment non-contrast CT and PET imaging dataset. The association of tumor regression grade (TRG) and neoadjuvant rectal (NAR) score with pre-treatment CT and PET features was assessed using machine learning algorithms. Three separate predictive models were built for composite features from CT + PET and analyzed separately for the clinical endpoints. Results: The patterns of pathological response were TRG 0 (n = 13; 19.7%), 1 (n = 34; 51.5%), 2 (n = 16; 24.2%), and 3 (n = 3; 4.5%). There were 20 (30.3%) patients with low, 22 (33.3%) with intermediate and 24 (36.4%) with high NAR scores. Composite features with α = 0.2 resulted in the best predictive power using logistic regression. For pathological response prediction, the signature resulted in 88.1% accuracy in predicting TRG 0 vs. TRG 1–3 and 91% accuracy in predicting TRG 0–1 vs. TRG 2–3. For the NAR score, a surrogate of DFS and OS, it resulted in 67.7% accuracy in predicting low vs. intermediate vs. high scores. Conclusion: The pre-treatment composite radiomics signatures were highly predictive of pathological response in rectal cancer treated with NACRT. A larger cohort is warranted for further validation.
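A sketch of the classification step, reading the abstract's α = 0.2 as an elastic-net mixing parameter (an assumption on our part); the feature matrix and labels are synthetic:

```python
# Sketch: logistic regression on composite CT+PET radiomic features.
# Interpreting alpha = 0.2 as the elastic-net mixing parameter l1_ratio is an
# assumption; the 66-patient feature matrix below is a synthetic placeholder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(66, 40))              # composite CT+PET features
y = (X[:, 0] + rng.normal(size=66)) > 0    # TRG 0 vs TRG 1-3 (placeholder)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.2, C=1.0, max_iter=5000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {acc.mean():.3f}")
```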

8.

Background

Highly parallel analysis of gene expression has recently been used to identify gene sets or 'signatures' to improve patient diagnosis and risk stratification. Once a signature is generated, traditional statistical testing is used to evaluate its prognostic performance. However, due to the high dimensionality of microarray data, this can lead to false interpretation of these signatures.

Principal Findings

A method was developed to test batches of a user-specified number of randomly chosen signatures in patient microarray datasets. The percentage of randomly generated signatures yielding prognostic value was assessed using ROC analysis by calculating the area under the curve (AUC) in six publicly available cancer patient microarray datasets. We found that a signature consisting of randomly selected genes has an average 10% chance of reaching significance when assessed in a single dataset, but this can range from 1% to ~40% depending on the dataset in question. Increasing the number of validation datasets markedly reduces this number.

Conclusions

We have shown that the use of an arbitrary cut-off value for evaluation of signature significance is not suitable for this type of research; instead, the threshold should be defined for each dataset separately. Our method can be used to establish and evaluate the performance of any derived gene signature in a dataset by comparing its performance to thousands of randomly generated signatures. It will be of most interest for cases where few data are available and testing in multiple datasets is limited.
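The core of the method can be sketched as follows: draw many random signatures, score each patient, and count how often a signature reaches nominal significance (simulated data; a Mann-Whitney test stands in for the paper's ROC/AUC assessment):

```python
# Sketch: estimate how often a *random* gene signature reaches "significance"
# in one dataset. Expression matrix and outcomes are simulated placeholders
# with no true signal, so the hit rate should converge near the nominal 5%.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
n_patients, n_genes, sig_size, n_trials = 100, 5000, 50, 1000
expr = rng.normal(size=(n_patients, n_genes))
outcome = rng.integers(0, 2, size=n_patients)

hits = 0
for _ in range(n_trials):
    genes = rng.choice(n_genes, size=sig_size, replace=False)
    score = expr[:, genes].mean(axis=1)      # naive signature score per patient
    p = mannwhitneyu(score[outcome == 1], score[outcome == 0]).pvalue
    hits += p < 0.05
print(f"{100 * hits / n_trials:.1f}% of random signatures 'significant' at p<0.05")
```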

9.
Purpose: To analyze the uncertainties of the rectum due to anisotropic shape variations by using a statistical point distribution model (PDM). Materials and methods: The PDM was applied to rectum contours delineated on planning computed tomography (CT) and cone-beam CT (CBCT) at 80 fractions of 11 patients. The standard deviations (SDs) of systematic and random errors of the shape variations of the whole rectum and of the region in which the rectum overlapped with the PTV (ROP region) were derived from the PDMs at all fractions of each patient. The systematic error was derived by using the PDMs of the planning surface and the average rectum surface determined from the rectum surfaces at all fractions, while the random error was derived by using a PDM-based covariance matrix over all fractions of each patient. Results: For the whole rectum, the population SDs were larger than 1.0 mm along all directions for random error, and along the anterior, superior, and inferior directions for systematic error. The deviation was largest along the superior and inferior directions for systematic and random errors, respectively. For the ROP regions, the population SDs of systematic error were larger than 1.0 mm along the superior and inferior directions. The population SDs of random error for the ROP regions were larger than 1.0 mm except along the right and posterior directions. Conclusions: The anisotropic shape variations of the rectum, especially in the ROP regions, should be considered when determining planning risk volume (PRV) margins for the rectum associated with acute toxicities.
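A numpy sketch of separating systematic from random shape error, assuming each fraction's surface is sampled at corresponding landmark points (shapes and scales are illustrative):

```python
# Sketch: systematic vs. random shape error from corresponded surface points
# for one patient. Assumes each fraction's rectum surface is sampled at the
# same P landmarks; array sizes and noise scales are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_fx, P = 80, 300
planning = rng.normal(size=(P, 3))                               # planning-CT surface
fractions = planning + rng.normal(scale=2.0, size=(n_fx, P, 3))  # CBCT surfaces

mean_surface = fractions.mean(axis=0)
# Systematic error: deviation of the per-patient mean surface from planning.
systematic_sd = (mean_surface - planning).std(axis=0)     # per-axis SD over points
# Random error: fraction-to-fraction scatter about the mean surface.
random_sd = (fractions - mean_surface).std(axis=(0, 1))   # per-axis SD
print("systematic SD (mm):", systematic_sd, " random SD (mm):", random_sd)
```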

10.
Purpose: To compare radiomic features extracted from diagnostic computed tomography (CT) images with and without contrast enhancement in the delayed phase for non-small cell lung cancer (NSCLC) patients. Methods: Diagnostic CT images from 269 tumors [non-contrast CT, 188 (dataset NE); contrast-enhanced CT, 81 (dataset CE)] were enrolled in this study. Eighteen first-order and seventy-five texture features were extracted at five bin width levels for CT values. Reproducible features were selected by the intraclass correlation coefficient (ICC). Radiomic features were compared between datasets NE and CE. Subgroup analyses were performed based on the CT acquisition period, exposure value, and patient characteristics. Results: Eighty features were considered reproducible (ICC ≥ 0.5). Twelve of the sixteen first-order features independent of the bin width levels were statistically different between datasets NE and CE (p < 0.05), and the p-values of the two first-order features depending on the bin width levels decreased with narrower bin widths. Sixteen out of sixty-two texture features showed a significant difference regardless of the bin width (p < 0.05). There were significant differences between datasets NE and CE with older age, lighter body weight, better performance status, being a smoker, larger gross tumor volume, and tumor location in the central region. Conclusions: Contrast enhancement in the delayed phase of CT images for NSCLC patients affected some of the radiomic features, and the variability of radiomic features due to contrast uptake may depend largely on patient characteristics.
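A sketch of the reproducibility filter, using the two-way random-effects ICC(2,1) form (which specific ICC form the paper used is not stated here); data are placeholders:

```python
# Sketch: two-way random-effects ICC(2,1) for feature reproducibility, keeping
# features with ICC >= 0.5. The specific ICC form is an assumption; the data
# (n tumors x k repeated extraction settings) are placeholders.
import numpy as np

def icc_2_1(Y):
    n, k = Y.shape
    row = Y.mean(axis=1, keepdims=True)   # per-tumor means
    col = Y.mean(axis=0, keepdims=True)   # per-setting means
    grand = Y.mean()
    msr = k * ((row - grand) ** 2).sum() / (n - 1)
    msc = n * ((col - grand) ** 2).sum() / (k - 1)
    mse = ((Y - row - col + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(7)
true = rng.normal(size=(50, 1))
feature = true + rng.normal(scale=0.3, size=(50, 3))  # 3 repeated extractions
icc = icc_2_1(feature)
print(f"ICC(2,1) = {icc:.2f}  reproducible: {icc >= 0.5}")
```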

11.
Split-test Bonferroni correction for QEEG statistical maps
With statistical testing, corrections for multiple comparisons, such as Bonferroni adjustments, have given rise to controversies in the scientific community because of their negative impact on statistical power. This impact is especially problematic for high-dimensional data, such as multi-electrode brain recordings. With brain imaging data, a reliable method is needed to assess the statistical significance of the data without losing statistical power. Conjunction analysis allows the combination of significance and consistency of an effect. Through a balanced combination of information from retest experiments (multiple-trials split testing), we present an intuitively appealing, novel approach to brain imaging conjunction. The method is then tested and validated on synthetic data, followed by a real-world test on QEEG data from patients with Alzheimer's disease. This latter application requires both reliable type-I and type-II error rates because of the poor signal-to-noise ratio inherent in EEG signals.
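A toy version of the split-testing idea, requiring an effect to be significant in both independent halves of the trials; this illustrates the principle only, not the authors' exact conjunction statistic:

```python
# Toy illustration of split testing: an effect counts only if it is
# significant in BOTH independent halves of the trials. This is a simplified
# stand-in for the paper's conjunction procedure, not the exact statistic.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(8)
patients = rng.normal(0.5, 1.0, size=200)   # one electrode, e.g. band power
controls = rng.normal(0.0, 1.0, size=200)

halves = np.array_split(rng.permutation(len(patients)), 2)
p_values = [ttest_ind(patients[idx], controls[idx]).pvalue for idx in halves]
# Conjunction: both split tests must pass; the overall p is their maximum.
print(f"split p-values: {p_values}, conjunction significant: {max(p_values) < 0.05}")
```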

12.
Purpose: Dosomics is a novel texture analysis method that parameterizes regions of interest and produces dose features encoding the spatial and statistical distribution of radiotherapy dose at higher resolution than organ-level dose-volume histograms. This study investigates the stability of dosomics feature extraction, i.e., the variation of features due to changes in grid resolution and dose calculation algorithm. Material and Methods: A dataset was generated considering all possible combinations of four grid resolutions and two dose calculation algorithms applied to 18 clinically delivered dose distributions, leading to a dataset of 144 3D dose distributions. Dosomics feature extraction was performed with in-house developed software. A total of 214 dosomics features were extracted from four different regions of interest: the PTV, the two closest OARs, and a RING structure. The reproducibility and stability of each extracted dosomic feature (Rfe, Sfe) were analyzed in terms of the intraclass correlation coefficient (ICC) and the coefficient of variation. Results: Dosomics feature extraction was found to be reproducible (ICC > 0.99). Across the combinations of grid resolutions and dose calculation algorithms, dosomic features were most stable in the RING for all the considered feature families. Sfe was higher in the OARs, in particular for the GLSZM feature family. The highest Sfe was found in the PTV, in particular in the GLCM feature family. Conclusion: The stability and reproducibility of dosomics features were evaluated for a representative clinical dose distribution case mix. These results suggest that, in terms of stability, dosomics studies should always report the grid resolution and the dose calculation algorithm used.
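A minimal sketch of the stability measure via the coefficient of variation across the eight grid/algorithm combinations (values are illustrative):

```python
# Sketch: stability of one dosomic feature across the 8 grid-resolution /
# dose-algorithm combinations, using the coefficient of variation (CoV).
# Feature values are illustrative placeholders.
import numpy as np

# one feature, one ROI, extracted under 4 grid resolutions x 2 algorithms
values = np.array([12.1, 12.3, 11.9, 12.0, 12.6, 12.4, 12.2, 12.5])
cov = values.std(ddof=1) / values.mean()
print(f"CoV = {100 * cov:.1f}%  (lower = more stable across settings)")
```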

13.
Personalized medicine aims to identify those patients who have good or poor prognosis for overall disease outcomes or therapeutic efficacy for a specific treatment. A well-established approach is to identify a set of biomarkers using statistical methods with a classification algorithm to identify patient subgroups for treatment selection. However, there are potential false positives and false negatives in classification, resulting in incorrect patient treatment assignment. In this paper, we propose a hybrid mixture model taking uncertainty in class labels into consideration, where the class labels are modeled by a Bernoulli random variable. An EM algorithm was developed to estimate the model parameters, and a parametric bootstrap method was used to test the significance of the predictive variables associated with subgroup membership. Simulation experiments showed that the proposed method on average had higher accuracy in identifying the subpopulations than the Naïve Bayes classifier and logistic regression. A breast cancer dataset was analyzed to illustrate the proposed hybrid mixture model.
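A toy EM alternation for a two-component Gaussian mixture, far simpler than the paper's hybrid model with Bernoulli-modeled labels, but showing the E-step/M-step structure:

```python
# Toy EM for a two-component Gaussian mixture over a single biomarker.
# Much simpler than the paper's hybrid model; meant only to show the
# E-step / M-step alternation used to estimate subgroup parameters.
import numpy as np

rng = np.random.default_rng(9)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(3, 1, 50)])  # biomarker

pi, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior membership probabilities for each observation
    like = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
    w = like * np.array([1 - pi, pi])
    r = w / w.sum(axis=1, keepdims=True)
    # M-step: update mixing weight, component means, and SDs
    pi = r[:, 1].mean()
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))
print(f"pi={pi:.2f}, mu={mu.round(2)}, sd={sd.round(2)}")
```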

14.
Multipoint (MP) linkage analysis represents a valuable tool for whole-genome studies but suffers from the disadvantage that its probability distribution is unknown and varies as a function of marker information and density, genetic model, number and structure of pedigrees, and the affection status distribution [Xing and Elston: Genet Epidemiol 2006;30:447-458; Hodge et al.: Genet Epidemiol 2008;32:800-815]. This implies that the MP significance criterion can differ for each marker and each dataset, and this fact makes planning and evaluation of MP linkage studies difficult. One way to circumvent this difficulty is to use simulations or permutation testing. Another approach is to use an alternative statistical paradigm to assess the statistical evidence for linkage, one that does not require computation of a p value. Here we show how to use the evidential statistical paradigm for planning, conducting, and interpreting MP linkage studies when the disease model is known (lod analysis) or unknown (mod analysis). As a key feature, the evidential paradigm decouples uncertainty (i.e. error probabilities) from statistical evidence. In the planning stage, the user calculates error probabilities, as functions of one's design choices (sample size, choice of alternative hypothesis, choice of likelihood ratio (LR) criterion k) in order to ensure a reliable study design. In the data analysis stage one no longer pays attention to those error probabilities. In this stage, one calculates the LR for two simple hypotheses (i.e. trait locus is unlinked vs. trait locus is located at a particular position) as a function of the parameter of interest (position). The LR directly measures the strength of evidence for linkage in a given data set and remains completely divorced from the error probabilities calculated in the planning stage. An important consequence of this procedure is that one can use the same criterion k for all analyses. This contrasts with the situation described above, in which the value one uses to conclude significance may differ for each marker and each dataset in order to accommodate a fixed test size, α. In this study we accomplish two goals that lead to a general algorithm for conducting evidential MP linkage studies. (1) We provide two theoretical results that translate into guidelines for investigators conducting evidential MP linkage: (a) Comparing mods to lods, error rates (including probabilities of weak evidence) are generally higher for mods when the null hypothesis is true, but lower for mods in the presence of true linkage. Royall [J Am Stat Assoc 2000;95:760-780] has shown that errors based on lods are bounded and generally small. Therefore when the true disease model is unknown and one chooses to use mods, one needs to control misleading evidence rates only under the null hypothesis; (b) for any given pair of contiguous marker loci, error rates under the null are greatest at the midpoint between the markers spaced furthest apart, which provides an obvious simple alternative hypothesis to specify for planning MP linkage studies. (2) We demonstrate through extensive simulation that this evidential approach can yield low error rates under the null and alternative hypotheses for both lods and mods, despite the fact that mod scores are not true LRs. Using these results we provide a coherent approach to implement a MP linkage study using the evidential paradigm.
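To illustrate the evidential LR at a single marker, a small sketch computing a lod score (log10 LR) for recombination fraction θ against the unlinked value 0.5, with evidence declared at LR ≥ k (counts are illustrative):

```python
# Sketch of the evidential LR for linkage at a single marker: two simple
# hypotheses, theta (linked) vs. theta = 0.5 (unlinked), with evidence
# declared when LR >= k. The counts and k are illustrative choices.
import numpy as np

def lod(recombinants, n, theta):
    """log10 likelihood ratio of recombination fraction theta vs. 0.5."""
    r = recombinants
    ll_theta = r * np.log10(theta) + (n - r) * np.log10(1 - theta)
    ll_null = n * np.log10(0.5)
    return ll_theta - ll_null

r, n, k = 2, 20, 32              # 2 recombinants in 20 informative meioses
score = lod(r, n, theta=r / n)   # lod at the maximum-likelihood theta
print(f"lod = {score:.2f}, LR = {10 ** score:.0f}, "
      f"evidence at LR >= {k}: {10 ** score >= k}")
```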

15.
Purpose: It is vital to appropriately power clinical trials towards the discovery of novel disease-modifying therapies for Parkinson's disease (PD). Thus, it is critical to improve prediction of outcome in PD patients. Methods: We systematically probed a range of robust predictor algorithms, aiming to find the best combinations of features for significantly improved prediction of motor outcome (MDS-UPDRS-III) in PD. We analyzed 204 PD patients with 18 features (clinical measures; dopamine-transporter (DAT) SPECT imaging measures), performing different randomized arrangements and utilizing data from 64%/6%/30% of patients in each arrangement for training/training validation/final testing. We pursued 3 approaches: i) 10 predictor algorithms (accompanied by automated machine learning hyperparameter tuning) were first applied to 32 experimentally created combinations of the 18 features; ii) we utilized feature subset selector algorithms (FSSAs) for more systematic initial feature selection; iii) we considered all possible combinations of the 18 features (262,143 states) to assess the contributions of individual features. Results: A specific set (set 18) applied to the LOLIMOT (local linear model trees) predictor machine resulted in the lowest absolute error (4.32 ± 0.19) among the 32 experimentally created combinations of the 18 features. Subsequently, 2 FSSAs (genetic algorithm (GA) and ant colony optimization (ACO)) selecting 5 features, combined with LOLIMOT, reached an error of 4.15 ± 0.46. Our final analysis indicated that longitudinal motor measures (MDS-UPDRS-III at years 0 and 1) were highly significant predictors of motor outcome. Conclusions: We demonstrate excellent prediction of motor outcome in PD patients by employing automated hyperparameter tuning and optimal utilization of FSSAs and predictor algorithms.
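A brute-force analogue of the feature-subset search, scoring each candidate subset by cross-validated MAE; LOLIMOT has no standard Python implementation, so a random forest stands in for it here:

```python
# Sketch: exhaustive feature-subset search scored by cross-validated MAE,
# the brute-force analogue of the GA/ACO selectors described above. A random
# forest stands in for LOLIMOT; features and outcome are synthetic.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
X = rng.normal(size=(204, 6))   # 6 candidate features (toy, not the 18 used)
y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=204)

best = (np.inf, None)
for size in (1, 2, 3):
    for subset in itertools.combinations(range(X.shape[1]), size):
        mae = -cross_val_score(RandomForestRegressor(n_estimators=50,
                                                     random_state=0),
                               X[:, subset], y, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        best = min(best, (mae, subset))
print(f"best subset {best[1]} with MAE {best[0]:.3f}")
```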

16.
Purpose: To demonstrate the unique information potential of a powerful multivariate data processing method, principal component analysis (PCA), in detecting complex interrelationships between diverse patient, disease and treatment variables and in prognostication of therapy outcome and patient response after mastectomy. Patients and Methods: One hundred forty-two patients with breast cancer were retrospectively evaluated. The patients were selected from a group of 201 patients who had been treated and observed in the same oncology ward; selection was based on the availability of a complete set of information describing each patient, consisting of 60 specific data points. The resulting matrix of 142 × 60 data points was subjected to PCA using commercially available statistical software on a personal computer. Results: Two principal components, PC1 and PC2, were extracted. They accounted for 26% of the total data variance. Projections of the 60 variables and the 142 patients were made on the plane determined by PC1 and PC2. A clear clustering of the variables and of the patients was observed, and was discussed in terms of the similarity (dissimilarity) of the variables and the patients, respectively. A strikingly clear separation was demonstrated between the group of patients living over 7 years after mastectomy and the group of deceased patients. Conclusion: PCA offers a promising new alternative for statistical analysis of multivariable data on cancer patients. Using PCA, potentially useful information on both the factors affecting treatment outcome and general prognosis may be extracted from large data sets.
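A minimal PCA sketch mirroring the analysis above: standardize, extract PC1/PC2, report explained variance, and obtain each patient's projection (data are placeholders):

```python
# Sketch: project a patients x variables matrix onto PC1/PC2 and report the
# variance they explain, mirroring the analysis above. Data are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
X = rng.normal(size=(142, 60))   # 142 patients x 60 variables (placeholder)

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)        # each patient's position on the PC1/PC2 plane
print("variance explained by PC1+PC2:", pca.explained_variance_ratio_.sum())
# Clustering in 'scores' can then be inspected, e.g. for survivor/deceased
# separation, as described in the abstract.
```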

17.
《Endocrine practice》2009,15(6):521-527
Objective: To determine whether positron emission tomography/computed tomography (PET/CT) and indium In 111 pentetreotide imaging, individually or collectively, predict the outcome of patients with neuroendocrine tumors (NETs). Methods: Between July 31, 2002, and May 4, 2007, 29 patients with previously diagnosed NETs underwent both PET/CT and indium In 111 pentetreotide imaging at our institution. The images were evaluated for the presence of abnormalities. Clinical outcomes were classified as survival without major morbidities, survival with severe complications of disease, or death. Time to outcome was measured in months from the imaging date. Kaplan-Meier survival curves were calculated in which patient outcome was compared with results on PET/CT and indium In 111 pentetreotide imaging. Results: Of the 29 patients, 9 had abnormalities on both PET/CT and indium In 111 pentetreotide imaging. Two patients had abnormal findings on PET/CT but normal findings on pentetreotide imaging. In 5 patients, findings were normal on PET/CT but abnormal on pentetreotide imaging. In 13 patients, normal findings were noted on both PET/CT and pentetreotide imaging. Kaplan-Meier analysis demonstrated a significant survival advantage for patients who had normal findings on PET/CT compared with those who had abnormal PET/CT findings (P = .01). Patients with normal findings on indium In 111 pentetreotide imaging had a higher but nonsignificant survival advantage over those with abnormal results on pentetreotide imaging (P = .08). Conclusion: For evaluation of NETs, PET/CT and indium In 111 pentetreotide imaging are complementary. Increased metabolic activity in tumor cells is reflected by abnormalities on PET/CT. Patients who had abnormal PET/CT findings had a generally poorer prognosis and more rapid clinical deterioration than those with normal PET/CT findings. (Endocr Pract. 2009;15:521-527)
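A sketch of the survival comparison using the lifelines package: Kaplan-Meier curves per imaging group plus a log-rank test (times and events are simulated, not the study data):

```python
# Sketch: Kaplan-Meier comparison of two imaging-defined groups with a
# log-rank test, using lifelines. Times and events are simulated placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(12)
t_normal = rng.exponential(60, size=13)      # months, normal PET/CT (toy)
t_abnormal = rng.exponential(25, size=16)    # months, abnormal PET/CT (toy)
e_normal = np.ones_like(t_normal)            # 1 = event observed
e_abnormal = np.ones_like(t_abnormal)

kmf = KaplanMeierFitter()
kmf.fit(t_normal, e_normal, label="normal PET/CT")
print("median survival (normal group):", kmf.median_survival_time_)

res = logrank_test(t_normal, t_abnormal,
                   event_observed_A=e_normal, event_observed_B=e_abnormal)
print(f"log-rank P = {res.p_value:.3f}")
```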

18.
Background: This study aimed to identify a series of prognostically relevant immune features by immunophenoscore. These immune features were then explored using MRI radiomic features to predict the overall survival (OS) of lower-grade glioma (LGG) patients and their response to immune checkpoint blockade. Method: LGG data were retrieved from TCGA and categorized into training and internal validation datasets. Patients attending the First Affiliated Hospital of Harbin Medical University were included in an external validation cohort. An immunophenoscore-based signature was built to predict malignant potential and response to immune checkpoint inhibitors in LGG patients. In addition, a deep learning neural network prediction model was built for validation of the immunophenoscore-based signature. Results: Immunophenotype-associated mRNA signatures (IMriskScore) for outcome prediction and ICB therapeutic effects in LGG patients were constructed. Deep neural network learning based on radiomics showed that MRI radiomic features could determine IMriskScore. Enrichment analysis and ssGSEA correlation analysis were performed. Mutations in CIC significantly improved the prognosis of patients in the high-IMriskScore group; CIC is therefore a potential therapeutic target for these patients. Moreover, IMriskScore is an independent risk factor that can be used clinically to predict LGG patient outcomes. Conclusions: The IMriskScore model, consisting of a set of biomarkers, can independently predict the prognosis of LGG patients and provides a basis for the development of personalized immunotherapy strategies. In addition, IMriskScore features were predicted from MRI radiomics by a deep learning neural network approach and can therefore be used for the prognosis of LGG patients.
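A hedged sketch of the validation step, with a small multilayer perceptron mapping radiomic features to the risk score; the paper's actual network architecture is not specified here:

```python
# Sketch: a small neural network mapping MRI radiomic features to an immune
# risk score, standing in for the paper's deep learning model. The feature
# matrix, scores, and architecture are synthetic assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(13)
X = rng.normal(size=(400, 100))                         # MRI radiomic features
y = X[:, :5].sum(axis=1) + 0.3 * rng.normal(size=400)   # IMriskScore (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                                 random_state=0)).fit(X_tr, y_tr)
print(f"test R2 = {r2_score(y_te, net.predict(X_te)):.2f}")
```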

19.
Positron emission tomography (PET) allows monitoring and recording of the spatial and temporal distribution of molecular/cellular processes for diagnostic and therapeutic applications. The aim of this review is to describe the current applications and explore the role of PET in prostate cancer management, mainly in the radiation therapy (RT) scenario. The state of the art of PET for prostate cancer is presented together with the impact of new specific PET tracers and technological developments aimed at obtaining better imaging quality, increased tumor detectability and more accurate volume delineation. An increasing number of studies have focused on PET quantification methods as predictive biomarkers capable of guiding individualized treatment and improving patient outcome; sophisticated advanced intensity-modulated and image-guided radiation therapy techniques (IMRT/IGRT) are capable of boosting more radioresistant tumor (sub)volumes. The use of advanced feature analyses of PET images is an approach that holds great promise for several oncological diseases, but it needs further validation in the management of prostate disease.

20.
《Médecine Nucléaire》2020,44(1):18-25
Introduction: In the current context of personalized medicine, textural analysis promises to be an accurate approach to cancer prognosis. The lack of standardization and the multitude of textural indices limit the reproducibility of radiomics studies and remain an obstacle to introducing textural analysis into clinical practice. Our study assessed the prognostic value of entropy in 18F-FDG PET/CT in locally advanced non-small cell lung cancer (NSCLC). Method: Patients who underwent 18F-FDG PET/CT for lung cancer staging between September 2015 and April 2017 in 2 hospitals were included, and conventional and textural PET parameters were extracted. A retrospective analysis of the patients was performed over 24 months to determine progression-free survival and overall survival. Results: Forty-two patients were included. Progression-free survival was significantly correlated with entropy on multivariate regression (cut-off at 8.4), with a hazard ratio of 3.04 (95% CI 1.13–8.16) (P = 0.03), as was MTV (P < 0.001). Neither conventional PET parameters nor entropy showed a significant association with overall survival. Conclusion: These results support the external validity and robustness of FDG PET entropy as an independent prognostic factor for progression-free survival in patients with locally advanced NSCLC, in addition to conventional PET parameters.
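A sketch of the multivariate survival analysis with lifelines: a Cox model including entropy dichotomized at the reported cut-off alongside MTV (all data are synthetic placeholders):

```python
# Sketch: multivariate Cox model for progression-free survival with entropy
# dichotomized at the reported cut-off, using lifelines. All data below are
# synthetic placeholders, not the study cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(14)
n = 42
df = pd.DataFrame({
    "entropy_high": rng.integers(0, 2, n),   # entropy > 8.4 (binary)
    "mtv": rng.lognormal(2.0, 0.5, n),       # metabolic tumor volume
})
risk = 0.8 * df["entropy_high"] + 0.02 * df["mtv"]
df["pfs_months"] = rng.exponential(24 * np.exp(-risk))
df["progressed"] = rng.integers(0, 2, n)     # 1 = progression observed

cph = CoxPHFitter().fit(df, duration_col="pfs_months", event_col="progressed")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%",
                   "exp(coef) upper 95%", "p"]])
```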
