Similar Articles
20 similar articles retrieved.
1.
Research on the Development Status of Healthcare Big Data (cited by 1; 0 self-citations, 1 by others)
This paper surveys domestic and international healthcare big data development plans, academic organizations, standardization bodies, research areas, research projects, and open data resources, reflecting the current state and trends of healthcare big data development and providing a reference for its application and study.

2.
Open medical data can supply patients, medical technicians, and medical researchers with the information they need; opening medical data helps increase its value and improve information transparency. This article reviews the current state of research and application of open medical data in China and abroad, distinguishes shared data, public data, and open data and the relations among them, describes methods for opening medical data, and offers countermeasures and recommendations on how to do so.

3.
Objective: To analyze general hospitals' internal demand for big data applications and provide direction and evidence for hospital big data development. Methods: A self-designed, Delphi-based questionnaire on hospital big data application needs was administered to 64 randomly selected member institutions of the Medical Branch of the Chinese Research Hospital Association; 104 valid questionnaires were obtained, for a valid response rate of 94.55%. Results: Precision medicine scored 4.31±0.42, lean management 4.23±0.56, scientific research 4.19±0.52, health management 4.16±0.52, digital healthcare 4.06±0.60, and education and training 3.69±0.69. Differences in demand across gender, age, professional title, and position groups were statistically significant (P<0.05). Multiple linear regression showed that demand for medical artificial intelligence (b=0.324, P=0.000) and Internet Plus healthcare (b=0.161, P=0.047) had significant positive effects on attitudes toward the prospects of hospital big data applications. Conclusion: General hospitals have strong and diverse demands for big data applications; development should be guided by actual needs, with emphasis on precision medicine, medical AI, and Internet Plus healthcare.
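The regression step reported in this abstract can be sketched as follows. The study's raw questionnaire data are not given, so the responses below are simulated, with the abstract's reported effect sizes (0.324 and 0.161) used only to generate illustrative data; variable names are invented for the sketch.

```python
import numpy as np

# Hypothetical illustration of the multiple linear regression described in
# the abstract: regress attitude toward big data prospects on demand scores
# for medical AI and Internet Plus healthcare. All data here are simulated.
rng = np.random.default_rng(0)
n = 104  # number of valid questionnaires in the study
ai_demand = rng.uniform(3, 5, n)        # simulated 5-point-scale demand scores
internet_demand = rng.uniform(3, 5, n)
# Generate attitudes using the abstract's reported coefficients plus noise.
attitude = 1.0 + 0.324 * ai_demand + 0.161 * internet_demand + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), ai_demand, internet_demand])  # intercept + predictors
coef, *_ = np.linalg.lstsq(X, attitude, rcond=None)            # OLS estimates
print(coef[1], coef[2])  # estimated b for AI demand and Internet Plus demand
```

With this much simulated data the ordinary least squares fit recovers coefficients close to the generating values, which is the sense in which the abstract's b values quantify each predictor's positive effect.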

4.
Public health in the big data era faces new opportunities and challenges. To promote the application of public health big data, grasp its meaning accurately, develop targeted solutions, and ultimately improve people's health, this paper analyzes and discusses the current state of public health big data. The research shows that collecting and organizing public health data from multiple different sources can form public health big data, and that in-depth mining and analysis can yield information such as the factors influencing major diseases and the transmission patterns of epidemics, helping healthcare personnel and relevant institutions make predictions and assessments so that effective management measures can be taken to protect public health and reduce medical costs. It finds that combining public health data with bioinformatics technology leaves great room for development in data acquisition, management, analysis, security, and application, and argues that further application of computer technology, development of targeted big data mining methods, and training of a new type of public health professional are the key factors for advancing the field.

5.
This paper analyzes the relationship between the value of medical big data and teaching, and explores a data-value-oriented clinical teaching model for ophthalmology. By building an ophthalmology clinical teaching system that uses big data technology to optimally combine the data generated in clinical care, teaching, research, and management, the value of big data can be realized to rebuild the ophthalmology clinical teaching system, making the electronic medical and research data of the information age the mainstay of ophthalmology teaching, broadening teaching channels, simplifying teaching processes, and providing ophthalmology students with the widest possible range of resources for self-directed learning.

6.
Precision medicine applies modern molecular biology, molecular pathology, molecular genetics, molecular imaging, bioinformatics, and currently popular big data and intelligent technologies, combined with patients' living environments and clinical data, to achieve precise disease classification and diagnosis and to formulate personalized plans for disease prevention and treatment, including precise risk prediction, precise diagnosis, precise disease classification, precise drug application, precise efficacy evaluation, and precise post-treatment prognosis. Precision medicine is an objective inevitability of medicine's own development and a response to the public's new demands for health. Its core value is to benefit patients and humanity, especially in today's China, where living standards have generally improved and people's pursuit of health has reached a new height.

7.
1 Overview. 1.1 Precision medicine. Hailed as the future of medicine, precision medicine is based on individual differences in genes, environment, work, and lifestyle. It applies modern molecular biology, molecular pathology, molecular genetics, molecular imaging, bioinformatics, and currently popular big data and artificial intelligence technologies to analyze patients' living environments and clinical data, achieving precise disease classification and diagnosis and formulating personalized plans for disease prevention and treatment. Precision medicine includes precise risk prediction, precise diagnosis, precise disease classification, precise drug application, precise efficacy evaluation, and precise post-treatment prognosis. Precision medicine is an objective inevitability of the development of science, technology, and medicine itself, and also an inevitable result of the public's pursuit of health, education, and quality of life after escaping poverty. …

8.
左权, 靖伟德. 《古生物学报》 (Acta Palaeontologica Sinica), 1995, 34(6): 777-779
Using medical CT scanning to examine fossilized dinosaur eggs, the egg shape, shell, albumen, yolk, and germinal disc can be clearly distinguished, and concrete measurements of each part can be obtained, opening a new avenue for the study of fossilized dinosaur eggs and other fossils.

9.
Because medical standards vary widely across the Chongqing region, building a regional medical imaging information system for Chongqing that shares all types of imaging information is of great importance for delivering efficient healthcare services. At present, building such a regional medical imaging exchange platform faces problems including low overall integration and extensibility of the design, inconsistent medical imaging standards, incomplete data security and quality standards, and the lack of integrated big data analysis and mining methods. To address these issues, this paper first designs the overall structure of the regional imaging platform, proposing a five-layer architecture for the imaging exchange platform; second, on the basis of this architecture, it establishes and strengthens imaging data standards and an information security system; and finally, it embeds big data analysis tools for medical imaging on top of the regional imaging center. Application results show that the exchange platform achieves a high degree of integration and extensibility, that establishing and strengthening the platform standards effectively improves exchange efficiency and strengthens data security and quality management, and that the platform can effectively analyze massive volumes of data.

10.
Opportunities and Challenges Facing Big Data for the Ecological Environment (cited by 2; 0 self-citations, 2 by others)
刘丽香, 张丽云, 赵芬, 赵苗苗, 赵海凤, 邵蕊, 徐明. 《生态学报》 (Acta Ecologica Sinica), 2017, 37(14): 4896-4904
With the arrival of the big data era and the rapid development of big data technology, the construction and application of eco-environmental big data have begun to emerge. To advance them comprehensively, this paper reviews the opportunities and advantages of eco-environmental big data in solving eco-environmental problems and analyzes the challenges it faces in application. It summarizes the concept and characteristics of big data and, in light of the features of the eco-environmental field, analyzes the particularity and complexity of eco-environmental big data. It focuses on the opportunities of eco-environmental big data for mitigating environmental pollution, ecological degradation, and climate change, and describes its advantages over traditional data mainly in data storage, processing, analysis, interpretation, and presentation, showing that eco-environmental big data will help comprehensively improve decision-making in eco-environmental governance. Although the application prospects are broad, many challenges remain, with numerous problems and difficulties in data sharing and openness, application innovation, data management, technological innovation and deployment, training of professionals, and funding. On this basis, future directions are proposed, including standardizing all types of eco-environmental data, building platforms for the storage, processing, and analysis of eco-environmental big data, and promoting the interconnection of eco-environmental big data platforms in China and abroad.

11.
Many biomedical studies have identified important imaging biomarkers that are associated with both repeated clinical measures and a survival outcome. The functional joint model (FJM) framework, proposed by Li and Luo in 2017, investigates the association between repeated clinical measures and survival data, while adjusting for both high-dimensional images and low-dimensional covariates based on the functional principal component analysis (FPCA). In this paper, we propose a novel algorithm for the estimation of FJM based on the functional partial least squares (FPLS). Our numerical studies demonstrate that, compared to FPCA, the proposed FPLS algorithm can yield more accurate and robust estimation and prediction performance in many important scenarios. We apply the proposed FPLS algorithm to a neuroimaging study. Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.

12.
Ecologists are increasingly asking large‐scale and/or broad‐scope questions that require vast datasets. In response, various top‐down efforts and incentives have been implemented to encourage data sharing and integration. However, despite general consensus on the critical need for more open ecological data, several roadblocks still discourage compliance and participation in these projects; as a result, ecological data remain largely unavailable. Grassroots initiatives (i.e. efforts initiated and led by cohesive groups of scientists focused on specific goals) have thus far been overlooked as a powerful means to meet these challenges. These bottom‐up collaborative data integration projects can play a crucial role in making high quality datasets available because they tackle the heterogeneity of ecological data at a scale where it is still manageable, all the while offering the support and structure to do so. These initiatives foster best practices in data management and provide tangible rewards to researchers who choose to invest time in sound data stewardship. By maintaining proximity between data generators and data users, grassroots initiatives improve data interpretation and ensure high‐quality data integration while providing fair acknowledgement to data generators. We encourage researchers to formalize existing collaborations and to engage in local activities that improve the availability and distribution of ecological data. By fostering communication and interaction among scientists, we are convinced that grassroots initiatives can significantly support the development of global‐scale data repositories. In doing so, these projects help address important ecological questions and support policy decisions.

13.
Data independent acquisition (DIA) proteomics techniques have matured enormously in recent years, thanks to multiple technical developments in, for example, instrumentation and data analysis approaches. However, there are many improvements that are still possible for DIA data in the area of the FAIR (Findability, Accessibility, Interoperability and Reusability) data principles. These include more tailored data sharing practices and open data standards since public databases and data standards for proteomics were mostly designed with DDA data in mind. Here we first describe the current state of the art in the context of FAIR data for proteomics in general, and for DIA approaches in particular. For improving the current situation for DIA data, we make the following recommendations for the future: (i) development of an open data standard for spectral libraries; (ii) make mandatory the availability of the spectral libraries used in DIA experiments in ProteomeXchange resources; (iii) improve the support for DIA data in the data standards developed by the Proteomics Standards Initiative; and (iv) improve the support for DIA datasets in ProteomeXchange resources, including more tailored metadata requirements.

14.
Summary: This paper explores data compatibility issues arising from the assessment of remnant native vegetation condition using satellite remote sensing and field-based data. Space-borne passive remote sensing is increasingly used to provide a total sample and synoptic overview of the spectral and spatial characteristics of native vegetation canopies at a regional scale. However, integrating field data that were often not designed for use with remotely sensed data can lead to compatibility issues, and the resulting problems of integrating unsuited datasets can contribute to data uncertainty and inconclusive findings. It is these problems (and potential solutions) that form the basis of this paper: how can field surveys be designed to support and improve compatibility with remotely sensed total surveys? Key criteria were identified for designing field-based surveys of native vegetation condition (and other similar applications) with the intent to incorporate remotely sensed data. The criteria include recommendations for the siting of plots, the need for reference location plots, the number of sample sites, and plot size and distribution within a study area. The difficulties associated with successfully integrating these data are illustrated using real examples taken from a study of the vegetation in the Little River Catchment, New South Wales, Australia.

15.
Rosner B, Glynn RJ, Lee ML. Biometrics, 2006, 62(1): 185-192
The Wilcoxon signed rank test is a frequently used nonparametric test for paired data (e.g., consisting of pre- and posttreatment measurements) based on independent units of analysis. This test cannot be used for paired comparisons arising from clustered data (e.g., if paired comparisons are available for each of two eyes of an individual). To incorporate clustering, a generalization of the randomization test formulation for the signed rank test is proposed, where the unit of randomization is at the cluster level (e.g., person), while the individual paired units of analysis are at the subunit within cluster level (e.g., eye within person). An adjusted variance estimate of the signed rank test statistic is then derived, which can be used for either balanced (same number of subunits per cluster) or unbalanced (different number of subunits per cluster) data, with an exchangeable correlation structure, with or without tied values. The resulting test statistic is shown to be asymptotically normal as the number of clusters becomes large, if the cluster size is bounded. Simulation studies are performed based on simulating correlated ranked data from a signed log-normal distribution. These studies indicate appropriate type I error for data sets with ≥20 clusters and a superior power profile compared with either the ordinary signed rank test based on the average cluster difference score or the multivariate signed rank test of Puri and Sen. Finally, the methods are illustrated with two data sets: (i) an ophthalmologic data set involving a comparison of electroretinogram (ERG) data in retinitis pigmentosa (RP) patients before and after undergoing an experimental surgical procedure, and (ii) a nutritional data set based on a randomized prospective study of nutritional supplements in RP patients where vitamin E intake outside of study capsules is compared before and after randomization to monitor compliance with nutritional protocols.
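One of the comparators the abstract mentions, the ordinary signed rank test applied to the average difference score per cluster, can be sketched as below. The data are simulated (two "eyes" per "person"), and this is only the naive cluster-averaging baseline, not the paper's adjusted-variance statistic.

```python
import numpy as np

# Sketch of the ordinary Wilcoxon signed rank test applied to cluster-mean
# difference scores (e.g. averaging over the two eyes of each person).
# All data are simulated for illustration.
rng = np.random.default_rng(1)
n_clusters, subunits = 25, 2
pre = rng.normal(0.0, 1.0, (n_clusters, subunits))
post = pre + 0.5 + rng.normal(0.0, 0.5, (n_clusters, subunits))  # true shift of 0.5

d = (post - pre).mean(axis=1)                   # one average difference per cluster
ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |d| (continuous data, no ties)
w_plus = ranks[d > 0].sum()                     # signed rank statistic W+

n = len(d)
mu = n * (n + 1) / 4                            # mean of W+ under H0
sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24) # sd of W+ under H0
z = (w_plus - mu) / sigma                       # normal approximation
print(round(z, 2))
```

Averaging within clusters before ranking keeps the units of analysis independent, which is what the ordinary test requires; the paper's contribution is an adjusted variance that avoids this collapsing step while still handling within-cluster correlation.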

16.
17.
There is an increasing need for life cycle data for bio‐based products, which becomes particularly evident with the recent drive for greenhouse gas reporting and carbon footprinting studies. Meeting this need is challenging given that many bio‐products have not yet been studied by life cycle assessment (LCA), and those that have are specific and limited to certain geographic regions. In an attempt to bridge data gaps for bio‐based products, LCA practitioners can use either proxy data sets (e.g., use existing environmental data for apples to represent pears) or extrapolated data (e.g., derive new data for pears by modifying data for apples considering pear‐specific production characteristics). This article explores the challenges and consequences of using these two approaches. Several case studies are used to illustrate the trade‐offs between uncertainty and the ease of application, with carbon footprinting as an example. As shown, the use of proxy data sets is the quickest and easiest solution for bridging data gaps but also has the highest uncertainty. In contrast, data extrapolation methods may require extensive expert knowledge and are thus harder to use but give more robust results in bridging data gaps. They can also provide a sound basis for understanding variability in bio‐based product data. If resources (time, budget, and expertise) are limited, the use of averaged proxy data may be an acceptable compromise for initial or screening assessments. Overall, the article highlights the need for further research on the development and validation of different approaches to bridging data gaps for bio‐based products.

18.
19.
The improved accessibility to data that can be used in human health risk assessment (HHRA) necessitates advanced methods to optimally incorporate them in HHRA analyses. This article investigates the application of data fusion methods to handling multiple sources of data in HHRA and its components. This application can be performed at two levels, first, as an integrative framework that incorporates various pieces of information with knowledge bases to build an improved knowledge about an entity and its behavior, and second, in a more specific manner, to combine multiple values for a state of a certain feature or variable (e.g., toxicity) into a single estimation. This work first reviews data fusion formalisms in terms of architectures and techniques that correspond to each of the two mentioned levels. Then, by handling several data fusion problems related to HHRA components, it illustrates the benefits and challenges in their application.
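The second fusion level described above, combining multiple reported values for one quantity into a single estimate, can be sketched as follows. Inverse-variance weighting is used here as one common choice; the article surveys fusion techniques generally and does not prescribe this particular rule, and the numbers are hypothetical.

```python
import numpy as np

# Minimal sketch: fuse several source estimates of one quantity (e.g. a
# toxicity value) into a single estimate via inverse-variance weighting.
# Values and variances are hypothetical illustration data.
values = np.array([2.1, 2.6, 1.9])        # estimates from three sources
variances = np.array([0.04, 0.25, 0.09])  # stated uncertainty of each source

weights = 1.0 / variances                 # more certain sources weigh more
fused = np.sum(weights * values) / np.sum(weights)  # fused point estimate
fused_var = 1.0 / np.sum(weights)                   # variance of the fused estimate
print(round(fused, 3), round(fused_var, 4))
```

A useful property of this rule is that the fused variance is smaller than that of any single source, which is the formal sense in which fusing multiple sources "builds improved knowledge" about the quantity.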

20.
Data integration is key to functional and comparative genomics because integration allows diverse data types to be evaluated in new contexts. To achieve data integration in a scalable and sensible way, semantic standards are needed, both for naming things (standardized nomenclatures, use of key words) and also for knowledge representation. The Mouse Genome Informatics database and other model organism databases help to close the gap between information and understanding of biological processes because these resources enforce well-defined nomenclature and knowledge representation standards. Model organism databases have a critical role to play in ensuring that diverse kinds of data, especially genome-scale data sets and information, remain useful to the biological community in the long-term. The efforts of model organism database groups ensure not only that organism-specific data are integrated, curated and accessible but also that the information is structured in such a way that comparison of biological knowledge across model organisms is facilitated.
