1.
For an r × c table with ordinal responses, odds ratios are commonly used to describe the relationship between the row and column variables. This article considers two types of ordinal odds ratios: local-global odds ratios, used to compare several groups on a c-category ordinal response, and the global odds ratio, used to measure the global association between a pair of ordinal responses. When there is a stratification factor, we consider Mantel-Haenszel (MH) type estimators of these odds ratios to summarize the association across several strata. Like the ordinary MH estimator of the common odds ratio for several 2 × 2 contingency tables, these estimators are intended for settings where the association is not expected to vary drastically among the strata. The estimators are consistent under the ordinary asymptotic framework, in which the number of strata is fixed, and also under sparse asymptotics, in which the number of strata grows with the sample size. Simulations find that the MH type estimators outperform the maximum likelihood estimators, especially when each stratum has few observations. This article provides variance and covariance formulae for the local-global odds ratio estimators and applies the bootstrap method to obtain a standard error for the global odds ratio estimator. Finally, we discuss possible ways of testing the homogeneity assumption.
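The ordinary MH estimator mentioned above, for the common odds ratio across several 2 × 2 tables, can be sketched in a few lines (a minimal illustration only, not the article's local-global or global estimators; the function name and toy counts are invented):

```python
# Ordinary Mantel-Haenszel common odds ratio for K strata of 2x2 tables.
# Each table is ((a, b), (c, d)): rows = treatment, columns = outcome.
def mantel_haenszel_or(tables):
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

# Two small strata with association in the same direction.
tables = [((4, 1), (2, 3)), ((3, 2), (1, 4))]
print(mantel_haenszel_or(tables))  # common odds ratio across the strata
```

Because each stratum contributes only ratios of counts rather than a fitted model, this form stays stable when strata are sparse, which is the setting the article targets.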
4.
《IRBM》2022,43(6):561-572
Objectives: Cerebrovascular disease is a serious threat to human health. Because of its high mortality and disability rates, early diagnosis and prevention are very important. The performance of existing deep-learning-based cerebrovascular segmentation methods depends on the integrity of the labels. However, manual labels are usually of low quality, with poor connectivity at small blood vessels, which directly affects the cerebrovascular segmentation results. Material and method: In this paper, we propose a new segmentation network that segments cerebral vessels from MRA images using sparse labels. Long-distance dependence between vascular structures is captured by a global vascular context module, and the topology is constrained by a hybrid loss function so that cerebral vessels are segmented with good connectivity. Result: Experiments show that our method achieved a sensitivity, precision, dice similarity coefficient, intersection over union, and centerline dice similarity coefficient of 61.24%, 75.58%, 67.66%, 51.13%, and 83.79%, respectively. Conclusion: The results reveal that the proposed network achieves better cerebrovascular segmentation performance under sparse labels and can suppress background noise to a certain extent.
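The overlap metrics reported above have standard definitions in terms of confusion-matrix counts; a minimal sketch for binary masks (the function name and toy arrays are ours, and the centerline dice requires a skeletonization step not shown here):

```python
import numpy as np

# Standard overlap metrics for a binary segmentation, computed from
# predicted and ground-truth masks containing 0/1 values.
def overlap_metrics(pred, truth):
    tp = np.sum((pred == 1) & (truth == 1))  # true positives
    fp = np.sum((pred == 1) & (truth == 0))  # false positives
    fn = np.sum((pred == 0) & (truth == 1))  # false negatives
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return sensitivity, precision, dice, iou

pred = np.array([1, 1, 0, 0, 1])
truth = np.array([1, 0, 0, 1, 1])
print(overlap_metrics(pred, truth))
```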
5.
Yu C  Zelterman D 《Biometrics》2002,58(3):481-491
In many epidemiologic studies, the first indication of an environmental or genetic contribution to a disease is the way in which diseased cases cluster within the same family units. The concept of clustering is contrasted with incidence. We assume that all individuals are exchangeable except for their disease status. This assumption is used to provide an exact test of the initial hypothesis of no familial link with the disease, conditional on the number of diseased cases and the distribution of the sizes of the various family units. New parametric generalizations of binomial sampling models are described to provide measures of the effect size of the disease clustering. We consider models and an example that take covariates into account. Ascertainment bias is described and the appropriate sampling distribution is demonstrated. Four numerical examples with real data illustrate these methods.
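A Monte Carlo version of the exchangeability idea can be sketched: conditional on the family sizes and the total number of cases, case labels are reassigned at random and an observed clustering statistic is compared against the resulting null distribution. This is a hypothetical permutation analog, not the paper's exact test; the names and the within-family pair-count statistic are our choices:

```python
import random
from itertools import chain

# Clustering statistic: number of within-family case pairs.
def case_pairs(case_counts):
    return sum(c * (c - 1) // 2 for c in case_counts)

def permutation_pvalue(family_sizes, case_counts, n_perm=10000, seed=0):
    rng = random.Random(seed)
    observed = case_pairs(case_counts)
    n_cases = sum(case_counts)
    # One entry per individual, labeled by family index.
    individuals = list(chain.from_iterable(
        [i] * s for i, s in enumerate(family_sizes)))
    extreme = 0
    for _ in range(n_perm):
        # Under exchangeability, any assignment of case labels to
        # individuals is equally likely given the totals.
        drawn = rng.sample(individuals, n_cases)
        counts = [drawn.count(i) for i in range(len(family_sizes))]
        if case_pairs(counts) >= observed:
            extreme += 1
    return extreme / n_perm

# All three cases in one of three equal-sized families: strong clustering.
print(permutation_pvalue([3, 3, 3], [3, 0, 0]))
```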
7.
Inferential structure determination uses Bayesian theory to combine experimental data with prior structural knowledge into a posterior probability distribution over protein conformational space. The posterior distribution encodes everything one can say objectively about the native structure in the light of the available data and additional prior assumptions and can be searched for structural representatives. Here an analogy is drawn between the posterior distribution and the canonical ensemble of statistical physics. A statistical mechanics analysis assesses the complexity of a structure calculation globally in terms of ensemble properties. Analogs of the free energy and density of states are introduced; partition functions evaluate the consistency of prior assumptions with data. Critical behavior is observed with dwindling restraint density, which impairs structure determination with too sparse data. However, prior distributions with improved realism ameliorate the situation by lowering the critical number of observations. An in-depth analysis of various experimentally accessible structural parameters and force field terms will facilitate a statistical approach to protein structure determination with sparse data that avoids bias as much as possible.
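The analogy can be stated compactly. In our notation (not necessarily the paper's symbols), with a prior $\pi(X)$ over conformations and a pseudo-energy $E(X)$ scoring disagreement with the data, the posterior plays the role of a Boltzmann distribution and the partition function measures the consistency of prior and data:

```latex
p(X \mid D) \;=\; \frac{\pi(X)\, e^{-\beta E(X)}}{Z(\beta)},
\qquad
Z(\beta) \;=\; \int \pi(X)\, e^{-\beta E(X)}\, \mathrm{d}X,
\qquad
F(\beta) \;=\; -\tfrac{1}{\beta}\log Z(\beta),
```

where $\beta$ is an inverse-temperature-like weight on the data term and $F$ is the free-energy analog. The critical behavior described above then corresponds to how $Z$ and $F$ change as the number of restraints entering $E(X)$ shrinks.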
8.
In this paper, we focus on testing the homogeneity of the risk difference for sparse data, in which we have few patients in each stratum but a moderate or large number of strata. When the number of patients per treatment within strata is small (2 to 5 patients), none of the test procedures previously proposed for this problem performs well. On the basis of bootstrap methods, we develop a simple test procedure that improves on the power of the previous procedures. Using Monte Carlo simulations, we demonstrate that the proposed test procedure performs reasonably well with respect to Type I error even when the number of patients per stratum for each treatment is as small as two. We evaluate the power of the proposed test procedure in a variety of situations and include a comparison of its performance with that of the test statistics proposed elsewhere. Finally, we briefly discuss the limitations of the proposed test procedure. We use data comparing two chemotherapy treatments in patients with multiple myeloma to illustrate its use.
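The paper's bootstrap procedure is not reproduced here, but the general shape of such a test can be sketched: fit a common risk difference under homogeneity, regenerate stratum counts from the fitted model, and see how often the resampled heterogeneity statistic exceeds the observed one. Everything below (the statistic, the fitting rule, the names) is an invented illustration:

```python
import random

# Data per stratum: (x1, n1, x0, n0) = event count and size for the
# treatment arm, then for the control arm.
def heterogeneity_stat(data, common_rd):
    # Sum of squared deviations of stratum risk differences from a
    # common value; large values suggest heterogeneity.
    return sum((x1 / n1 - x0 / n0 - common_rd) ** 2
               for x1, n1, x0, n0 in data)

def bootstrap_test(data, n_boot=2000, seed=1):
    rng = random.Random(seed)
    # Crude common risk difference and baseline risks under homogeneity.
    rd = sum(x1 / n1 - x0 / n0 for x1, n1, x0, n0 in data) / len(data)
    p0 = [x0 / n0 for _, _, x0, n0 in data]
    observed = heterogeneity_stat(data, rd)
    extreme = 0
    for _ in range(n_boot):
        boot = []
        for (x1, n1, x0, n0), p in zip(data, p0):
            b0 = sum(rng.random() < p for _ in range(n0))
            p1 = min(max(p + rd, 0.0), 1.0)
            b1 = sum(rng.random() < p1 for _ in range(n1))
            boot.append((b1, n1, b0, n0))
        brd = sum(a / b - c / d for a, b, c, d in boot) / len(boot)
        if heterogeneity_stat(boot, brd) >= observed:
            extreme += 1
    return extreme / n_boot
```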
9.
The receiver operating characteristic (ROC) curve is a popular tool for characterizing the capabilities of diagnostic tests with continuous or ordinal responses. One common design for assessing the accuracy of diagnostic tests involves multiple readers and multiple tests, in which all readers read all test results from the same patients. This design is most common in radiology, where the results of diagnostic tests depend on a radiologist's subjective interpretation. The most widely used approach for analyzing data from such a study is the Dorfman-Berbaum-Metz (DBM) method (Dorfman et al., 1992), which fits a standard analysis of variance (ANOVA) model to the jackknife pseudovalues of the areas under the ROC curves (AUCs). Although the DBM method has performed well in published simulation studies, it has no clear theoretical basis. In this paper, focusing on continuous outcomes, we investigate that basis. Our results indicate that the DBM method does not satisfy the usual assumptions of standard ANOVA models and thus may lead to erroneous inference. We then propose a marginal model approach based on the AUCs that can also adjust for covariates. Consistent and asymptotically normal estimators are derived for the regression coefficients. We compare our approach with the DBM method via simulation and with an application to data from a breast cancer study. The simulation results show that both methods perform well when the accuracy of the tests under study is the same, and that our method outperforms the DBM method for inference on individual AUCs when it is not. The marginal model approach extends easily to ordinal outcomes.
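The two ingredients the DBM method starts from, the empirical AUC (the normalized Mann-Whitney statistic) and its jackknife pseudovalues, are easy to sketch (illustrative only; the DBM ANOVA step itself is not shown, and the function names are ours):

```python
# Empirical AUC: fraction of (case, control) pairs the case outscores,
# with ties counted as half.
def auc(cases, controls):
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Jackknife pseudovalues of the AUC: n * full - (n - 1) * leave-one-out,
# one pseudovalue per subject (case or control).
def pseudovalues(cases, controls):
    n = len(cases) + len(controls)
    full = auc(cases, controls)
    ps = []
    for i in range(len(cases)):
        loo = auc(cases[:i] + cases[i + 1:], controls)
        ps.append(n * full - (n - 1) * loo)
    for j in range(len(controls)):
        loo = auc(cases, controls[:j] + controls[j + 1:])
        ps.append(n * full - (n - 1) * loo)
    return ps

cases = [3.2, 4.1, 5.0]
controls = [1.0, 2.5, 3.0]
print(auc(cases, controls))  # 1.0: every case outscores every control
```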
10.
Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G's B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.
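The sampling geometry can be illustrated with a toy construction: points spread over concentric shells in the three indirect dimensions, each direction drawn isotropically at random (a crude stand-in for the paper's random rotations), then snapped to a rectangular grid so an FFT applies. This is a hypothetical sketch under our own parameter names, not the published RCSS algorithm:

```python
import math
import random

# Toy shell sampling for three indirect dimensions: pts_per_shell random
# directions on each of n_shells concentric shells, snapped to integer
# grid coordinates in [-grid_half, grid_half].
def shell_points(n_shells, pts_per_shell, grid_half, seed=0):
    rng = random.Random(seed)
    pts = set()  # snapping can merge nearby points; keep unique ones
    for s in range(1, n_shells + 1):
        radius = s * grid_half / n_shells
        for _ in range(pts_per_shell):
            # Isotropic direction via normalized Gaussian draws.
            x, y, z = (rng.gauss(0, 1) for _ in range(3))
            norm = math.sqrt(x * x + y * y + z * z) or 1.0
            pts.add(tuple(round(radius * v / norm) for v in (x, y, z)))
    return sorted(pts)

pts = shell_points(n_shells=4, pts_per_shell=32, grid_half=8)
print(len(pts))  # far fewer points than the full (2 * 8 + 1) ** 3 grid
```

Snapping to the grid is what permits the FFT-based processing described above; the unsampled grid positions are simply left at zero.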