Similar Literature
20 similar records found
1.
The Quantitative Imaging Network (QIN), supported by the National Cancer Institute, is designed to promote research and development of quantitative imaging methods and candidate biomarkers for the measurement of tumor response in clinical trial settings. An integral aspect of the QIN mission is to facilitate collaborative activities that seek to develop best practices for the analysis of cancer imaging data. The QIN working groups and teams are developing new algorithms for image analysis and novel biomarkers for the assessment of response to therapy. To validate these algorithms and biomarkers and translate them into clinical practice, algorithms need to be compared and evaluated on large and diverse data sets. Analysis competitions, or “challenges,” are being conducted within the QIN as a means to accomplish this goal. The QIN has demonstrated, through its leveraging of The Cancer Imaging Archive (TCIA), that data sharing of clinical images across multiple sites is feasible and that it can enable and support these challenges. In addition to Digital Imaging and Communications in Medicine (DICOM) imaging data, many TCIA collections provide linked clinical, pathology, and “ground truth” data generated by readers that could be used for further challenges. The TCIA-QIN partnership is a successful model that provides resources for multisite sharing of clinical imaging data and the implementation of challenges to support algorithm and biomarker validation.

2.
Chronic wounds, including pressure ulcers, compromise the health of 6.5 million Americans and pose an annual estimated burden of $25 billion to the U.S. health care system. When treating chronic wounds, clinicians must use meticulous documentation to determine wound severity and to monitor healing progress over time. Yet, current wound documentation practices using digital photography are often cumbersome and labor intensive. The process of transferring photos into Electronic Medical Records (EMRs) requires many steps and can take several days. Newer smartphone and tablet-based solutions, such as Epic Haiku, have reduced EMR upload time. However, issues still exist involving patient positioning, image-capture technique, and patient identification. In this paper, we present the development and assessment of the SnapCap System for chronic wound photography. By leveraging the sensor capabilities of Google Glass, SnapCap enables hands-free digital image capture and the tagging and transfer of images to a patient’s EMR. In a pilot study with wound care nurses at Stanford Hospital (n=16), we (i) examined feature preferences for hands-free digital image capture and documentation, and (ii) compared SnapCap to the state of the art in digital wound care photography, the Epic Haiku application. We used the Wilcoxon Signed-ranks test to evaluate differences in mean ranks between preference options. Preferred hands-free navigation features include barcode scanning for patient identification, Z(15) = -3.873, p < 0.001, r = 0.71, and double-blinking to take photographs, Z(13) = -3.606, p < 0.001, r = 0.71. In the comparison between SnapCap and Epic Haiku, the SnapCap System was preferred for sterile image-capture technique, Z(16) = -3.873, p < 0.001, r = 0.68. Responses were divided with respect to image quality and overall ease of use. The study’s results have contributed to the future implementation of new features aimed at enhancing mobile hands-free digital photography for chronic wound care.
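As a concrete illustration of the statistics reported above, the following Python sketch runs a Wilcoxon signed-ranks test on paired preference ratings and derives the effect size r = |Z|/sqrt(N). The ratings are fabricated for illustration; this is not the study's analysis script, and the zstatistic attribute assumes a recent SciPy.

```python
# Hedged sketch of a Wilcoxon signed-ranks comparison of paired
# preference ratings; the values below are made up for illustration.
import numpy as np
from scipy.stats import wilcoxon

snapcap    = np.array([5, 4, 5, 3, 4, 5, 4, 5, 3, 4, 5, 4, 4, 5, 3, 5])
epic_haiku = np.array([3, 3, 4, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3, 3, 4])

res = wilcoxon(snapcap, epic_haiku, method="approx")  # normal approximation
n = len(snapcap)
z = res.zstatistic            # available in recent SciPy when method="approx"
r = abs(z) / np.sqrt(n)       # effect size r = |Z| / sqrt(N), as in the abstract
print(f"W = {res.statistic}, Z = {z:.3f}, p = {res.pvalue:.4f}, r = {r:.2f}")
```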

3.
Summary: This article introduces new methods for performing classification of complex, high-dimensional functional data using the functional mixed model (FMM) framework. The FMM relates a functional response to a set of predictors through functional fixed and random effects, which allows it to account for various factors and between-function correlations. The methods include training and prediction steps. In the training steps we train the FMM model by treating class designation as one of the fixed effects, and in the prediction steps we classify the new objects using posterior predictive probabilities of class. Through a Bayesian scheme, we are able to adjust for factors affecting both the functions and the class designations. While the methods can be used in any FMM framework, we provide details for two specific Bayesian approaches: the Gaussian, wavelet-based FMM (G-WFMM) and the robust, wavelet-based FMM (R-WFMM). Both methods perform modeling in the wavelet space, which yields parsimonious representations for the functions, and can naturally adapt to local features and complex nonstationarities in the functions. The R-WFMM allows potentially heavier tails for features of the functions indexed by particular wavelet coefficients, leading to a down-weighting of outliers that makes the method robust to outlying functions or regions of functions. The models are applied to a pancreatic cancer mass spectroscopy data set and compared with other recently developed functional classification methods.
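To make the "modeling in the wavelet space" idea concrete, here is a hedged Python sketch using PyWavelets, not the authors' G-WFMM/R-WFMM code: a spiky function is decomposed, only the largest coefficients are kept, and the function is still reconstructed accurately, which is the parsimony the abstract refers to.

```python
# Illustrative sketch: a spiky function is represented parsimoniously
# by a few large wavelet coefficients.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
f = np.exp(-((t - 0.3) ** 2) / 1e-4) + 0.5 * np.exp(-((t - 0.7) ** 2) / 4e-4)

coeffs = pywt.wavedec(f, "db4", level=6)           # forward wavelet transform
flat, slices = pywt.coeffs_to_array(coeffs)
thresh = np.quantile(np.abs(flat), 0.95)           # keep only the largest 5%
flat_sparse = pywt.threshold(flat, thresh, mode="hard")
coeffs_sparse = pywt.array_to_coeffs(flat_sparse, slices, output_format="wavedec")
f_hat = pywt.waverec(coeffs_sparse, "db4")

print("kept coefficients:", np.count_nonzero(flat_sparse), "of", flat.size)
print("max reconstruction error:", np.max(np.abs(f_hat[: f.size] - f)))
```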

4.
Ultra-low-field (ULF) MRI (B0 = 10–100 µT) typically suffers from a low signal-to-noise ratio (SNR). While SNR can be improved by pre-polarization and signal detection using highly sensitive superconducting quantum interference device (SQUID) sensors, we propose to exploit the inter-dependency of the k-space data from highly parallel detection (up to tens of sensors are readily available in ULF MRI) in order to suppress the noise. Furthermore, the prior information that an image can be sparsely represented can be integrated with this data consistency constraint to further improve the SNR. Simulations and experimental data using 47 SQUID sensors demonstrate the effectiveness of this data consistency constraint and sparsity prior in ULF-MRI reconstruction.
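The sparsity prior can be illustrated with a generic compressed-sensing toy problem. The sketch below is not the paper's multi-sensor SQUID model; it assumes a random sensing matrix and uses plain iterative soft-thresholding (ISTA) to combine a data-consistency gradient step with a sparsity-promoting threshold.

```python
# Minimal ISTA sketch under assumed (hypothetical) measurement setup.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 128, 8                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # hypothetical sensing matrix
y = A @ x_true + 0.01 * rng.normal(size=m)   # noisy measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(500):                          # ISTA iterations
    g = x - step * A.T @ (A @ x - y)          # gradient step (data consistency)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold (sparsity)
print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```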

5.
6.
Protein phosphorylation acts as an efficient switch controlling deregulated key signaling pathways in cancer. Computational biology aims to address the complexity of reconstructed networks but overrepresents well-known proteins and lacks information on less-studied proteins. A bioinformatic tool to reconstruct and select relatively small networks that connect signaling proteins to their targets in specific contexts is developed. It enables new signaling axes of the Syk kinase to be proposed and validated. To validate the potential of the tool, it is applied to two phosphoproteomic studies on oncogenic mutants of the well-known phosphatidylinositol 3-kinase (PIK3CA) and the less-studied Src-related tyrosine kinase lacking C-terminal regulatory tyrosine and N-terminal myristoylation sites (SRMS). By combining network reconstruction and signal propagation, comprehensive signaling networks are built from large-scale experimental data, and multiple molecular paths from these kinases to their targets are extracted. Specific paths from two distinct PIK3CA mutants are retrieved, and their differential impact on the HER3 receptor kinase is explained. In addition, to address the missing connectivity of the SRMS kinase to its targets in interaction-pathway databases, phospho-tyrosine and phospho-serine/threonine proteomic data are integrated. The resulting SRMS signaling network comprises casein kinase 2, thereby validating its currently suggested role downstream of SRMS. The computational pipeline is publicly available and contains a user-friendly graphical interface (http://doi.org/10.5281/zenodo.3333687).
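The "multiple molecular paths from these kinases to their targets" step can be pictured with a small graph-traversal sketch. The toy network and gene names below are illustrative assumptions, not the tool's actual reconstruction or API.

```python
# Hedged sketch: enumerate short directed paths from a kinase to a target
# in a toy reconstructed signaling network.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("PIK3CA", "AKT1"), ("AKT1", "MTOR"), ("PIK3CA", "PDK1"),
    ("PDK1", "AKT1"), ("MTOR", "RPS6KB1"), ("AKT1", "GSK3B"),
])

# All simple directed paths from the kinase to a downstream target,
# capped at a small length as a reconstruction tool typically would.
for path in nx.all_simple_paths(g, "PIK3CA", "RPS6KB1", cutoff=4):
    print(" -> ".join(path))
```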

7.
DNA origami provides a versatile platform for conducting ‘architecture-function’ analysis to determine how the nanoscale organization of multiple copies of a protein component within a multi-protein machine affects its overall function. Such analysis requires that the copy number of protein molecules bound to the origami scaffold exactly matches the desired number, and that it is uniform over an entire scaffold population. This requirement is challenging to satisfy for origami scaffolds with many protein hybridization sites, because it requires the successful completion of multiple, independent hybridization reactions. Here, we show that a cleavable dimerization domain on the hybridizing protein can be used to multiplex hybridization reactions on an origami scaffold. This strategy yields nearly 100% hybridization efficiency on a 6-site scaffold even when using low protein concentration and short incubation time. It can also be developed further to enable reliable patterning of a large number of molecules on DNA origami for architecture-function analysis.

8.
9.
The construction and analysis of networks is increasingly widespread in biological research. We have developed esyN (“easy networks”) as a free and open source tool to facilitate the exchange of biological network models between researchers. esyN acts as a searchable database of user-created networks from any field. We have developed a simple companion web tool that enables users to view and edit networks using data from publicly available databases. Both normal interaction networks (graphs) and Petri nets can be created. In addition to its basic tools, esyN contains a number of logical templates that can be used to create models more easily. The ability to use previously published models as building blocks makes esyN a powerful tool for the construction of models and network graphs. Users are able to save their own projects online and share them either publicly or with a list of collaborators. The latter can be given the ability to edit the network themselves, allowing online collaboration on network construction. esyN is designed to facilitate unrestricted exchange of this increasingly important type of biological information. Ultimately, the aim of esyN is to bring the advantages of Open Source software development to the construction of biological networks.

10.
The incorporation of data sharing into the research lifecycle is an important part of modern scholarly debate. In this study, the DataONE Usability and Assessment working group addresses two primary goals: To examine the current state of data sharing and reuse perceptions and practices among research scientists as they compare to the 2009/2010 baseline study, and to examine differences in practices and perceptions across age groups, geographic regions, and subject disciplines. We distributed surveys to a multinational sample of scientific researchers at two different time periods (October 2009 to July 2010 and October 2013 to March 2014) to observe current states of data sharing and to see what, if any, changes have occurred in the past 3–4 years. We also looked at differences across age, geographic, and discipline-based groups as they currently exist in the 2013/2014 survey. Results point to increased acceptance of and willingness to engage in data sharing, as well as an increase in actual data sharing behaviors. However, there is also increased perceived risk associated with data sharing, and specific barriers to data sharing persist. There are also differences across age groups, with younger respondents feeling more favorably toward data sharing and reuse, yet making less of their data available than older respondents. Geographic differences exist as well, which can in part be understood in terms of collectivist and individualist cultural differences. An examination of subject disciplines shows that the constraints and enablers of data sharing and reuse manifest differently across disciplines. Implications of these findings include the continued need to build infrastructure that promotes data sharing while recognizing the needs of different research communities. Moving into the future, organizations such as DataONE will continue to assess, monitor, educate, and provide the infrastructure necessary to support such complex grand science challenges.

11.
Objective: VBA (Visual Basic for Applications) is the built-in scripting language of Microsoft Excel and can greatly extend Excel's data-processing capabilities. This article uses a simple example to show how VBA can be used to automatically analyze large amounts of confocal line-scan image data and to plot the results. Methods and Results: The article first describes the structure of the experimental data obtained from confocal line-scan images and the processing requirements, and then explains in detail how to record, modify, and run a macro program written in VBA. Macro code reads much like natural language and is easy to understand; in most cases it can be generated automatically with the "Record Macro" feature, which reduces the programming effort to a minimum. Conclusion: Compared with processing data step by step in Excel by hand, using VBA in Excel saves time, reduces errors, and eliminates a great deal of monotonous, repetitive work. This greatly improves data-processing efficiency and frees researchers to spend more time designing and refining the data-processing scheme, which is especially important when processing large and complex experimental data sets. Only then can the useful information contained in the data be extracted and displayed effectively and accurately.

12.
13.
Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis–Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data.
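The core Metropolis-Hastings move behind structure MCMC can be sketched in a few lines: propose a single-edge change to a candidate DAG, score it, and accept with probability given by the score ratio. The sketch below uses a simple linear-Gaussian BIC score and a symmetric edge-toggle proposal; it is a didactic toy, not one of the samplers compared in the study.

```python
# Toy structure-MCMC sketch: edge-toggle proposals on a DAG, scored by BIC.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_genes, n_obs = 5, 200
X = rng.normal(size=(n_obs, n_genes))
X[:, 1] += 0.8 * X[:, 0]                      # hidden truth: 0 -> 1
X[:, 2] += 0.7 * X[:, 1]                      # hidden truth: 1 -> 2

def bic(g):
    """BIC score of a DAG under linear-Gaussian local regressions."""
    total = 0.0
    for j in g.nodes:
        pa = list(g.predecessors(j))
        y = X[:, j]
        if pa:
            A = np.column_stack([X[:, pa], np.ones(n_obs)])
            resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        else:
            resid = y - y.mean()
        total += -0.5 * n_obs * np.log(resid @ resid / n_obs)
        total += -0.5 * (len(pa) + 1) * np.log(n_obs)
    return total

g = nx.DiGraph(); g.add_nodes_from(range(n_genes))
score = bic(g)
for _ in range(2000):
    i, j = rng.choice(n_genes, 2, replace=False)
    h = g.copy()
    if h.has_edge(i, j): h.remove_edge(i, j)   # toggle one directed edge
    else:                h.add_edge(i, j)
    if not nx.is_directed_acyclic_graph(h):
        continue                               # reject cyclic proposals
    new = bic(h)
    if np.log(rng.uniform()) < new - score:    # MH acceptance on score ratio
        g, score = h, new
print("sampled edges:", sorted(g.edges))
```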

14.

Background

Human populations are structured by social networks, in which individuals tend to form relationships based on shared attributes. Certain attributes that are ambiguous, stigmatized or illegal can create a ‘hidden’ population, so-called because its members are difficult to identify. Many hidden populations are also at an elevated risk of exposure to infectious diseases. Consequently, public health agencies are presently adopting modern survey techniques that traverse social networks in hidden populations by soliciting individuals to recruit their peers, e.g., respondent-driven sampling (RDS). The concomitant accumulation of network-based epidemiological data, however, is rapidly outpacing the development of computational methods for analysis. Moreover, current analytical models rely on unrealistic assumptions, e.g., that the traversal of social networks can be modeled by a Markov chain rather than a branching process.

Methodology/Principal Findings

Here, we develop a new methodology based on stochastic context-free grammars (SCFGs), which are well-suited to modeling the tree-like structure of the RDS recruitment process. We apply this methodology to an RDS case study of injection drug users (IDUs) in Tijuana, México, a hidden population at high risk of blood-borne and sexually-transmitted infections (i.e., HIV, hepatitis C virus, syphilis). Survey data were encoded as text strings that were parsed using our custom implementation of the inside-outside algorithm in a publicly-available software package (HyPhy), which uses either expectation maximization or direct optimization methods and permits constraints on model parameters for hypothesis testing. We identified significant latent variability in the recruitment process that violates assumptions of Markov chain-based methods for RDS analysis: firstly, IDUs tended to emulate the recruitment behavior of their own recruiter; and secondly, the recruitment of like peers (homophily) was dependent on the number of recruits.

Conclusions

SCFGs provide a rich probabilistic language that can articulate complex latent structure in survey data derived from the traversal of social networks. Such structure has no representation in Markov chain-based models and, if left unaccounted for, can interfere with estimating the composition of hidden populations, raising critical implications for the prevention and control of infectious disease epidemics.
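The "emulation" finding can be illustrated by simulating a recruitment tree in which each participant's recruiting rate is pulled toward that of their recruiter, which is exactly the kind of dependence a memoryless Markov chain cannot express. The sketch below is a hypothetical branching-process toy, not the paper's SCFG machinery.

```python
# Toy branching-process simulation of RDS recruitment with "emulation".
import numpy as np

rng = np.random.default_rng(42)
MAX_COUPONS = 3

def recruit(parent_rate, depth=0, max_depth=6):
    """Size of a recruitment subtree; each node's expected recruit count
    is pulled toward its recruiter's rate (the 'emulation' effect)."""
    if depth == max_depth:
        return 1
    my_rate = 0.7 * parent_rate + 0.3 * rng.uniform(0, MAX_COUPONS)
    n_recruits = min(rng.poisson(my_rate), MAX_COUPONS)
    return 1 + sum(recruit(my_rate, depth + 1, max_depth)
                   for _ in range(n_recruits))

sizes = [recruit(parent_rate=rng.uniform(0, MAX_COUPONS)) for _ in range(200)]
print("mean tree size:", np.mean(sizes), " max:", max(sizes))
```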

15.
An important problem in systems biology is to reconstruct gene regulatory networks (GRNs) from experimental data and other a priori information. The DREAM project offers several types of experimental data, such as knockout data, knockdown data, and time series data. Among them, multifactorial perturbation data are easier and less expensive to obtain than other types of experimental data and are thus more common in practice. In this article, a new algorithm is presented for the inference of GRNs using the DREAM4 multifactorial perturbation data. The GRN inference problem is decomposed into separate regression problems. In each regression problem, the expression level of a target gene is predicted solely from the expression level of one potential regulator gene. For the different potential regulator genes, weights for a specific target gene are constructed from the sum of squared residuals and the Pearson correlation coefficient. These weights are then normalized to reflect differences in the effort of regulating distinct genes. By appropriately choosing the parameters of a power law, a 0–1 integer programming problem is constructed; solving it estimates the direct regulator genes of an arbitrary gene. The normalized weight of a gene is then modified based on these estimates of direct regulation. The normalized and modified weights are used to rank the likelihood that a corresponding direct regulation exists. Computational results on the DREAM4 In Silico Size 100 Multifactorial subchallenge show that the suggested algorithm can even outperform the best team. On the real data provided by the DREAM5 Network Inference Challenge, its performance ranks third. Furthermore, the high precision of the most reliable predictions suggests the algorithm may be helpful in guiding the design of biological experiments.
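The per-regulator weighting step can be sketched as follows. The combination rule used here (absolute correlation divided by SSR) is an assumption for illustration; the paper's exact weighting and power-law normalization are not reproduced.

```python
# Hedged sketch of single-regulator weights for one target gene.
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_genes = 100, 6
X = rng.normal(size=(n_obs, n_genes))                     # simulated expression matrix
X[:, 3] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=n_obs)    # gene 0 regulates gene 3

target = 3
weights = np.zeros(n_genes)
y = X[:, target]
for g in range(n_genes):
    if g == target:
        continue
    x = X[:, g]
    slope, intercept = np.polyfit(x, y, 1)       # one-regulator linear fit
    ssr = np.sum((y - (slope * x + intercept)) ** 2)
    corr = np.corrcoef(x, y)[0, 1]
    weights[g] = abs(corr) / ssr                 # assumed combination rule
weights /= weights.sum()                         # normalize across regulators
print("normalized weights for target gene 3:", np.round(weights, 3))
```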

16.
In the social sciences, there is a longstanding tension between data collection methods that facilitate quantification and those that are open to unanticipated information. Advances in technology now enable new, hybrid methods that combine some of the benefits of both approaches. Drawing inspiration from online information aggregation systems like Wikipedia and from traditional survey research, we propose a new class of research instruments called wiki surveys. Just as Wikipedia evolves over time based on contributions from participants, we envision an evolving survey driven by contributions from respondents. We develop three general principles that underlie wiki surveys: they should be greedy, collaborative, and adaptive. Building on these principles, we develop methods for data collection and data analysis for one type of wiki survey, a pairwise wiki survey. Using two proof-of-concept case studies involving our free and open-source website www.allourideas.org, we show that pairwise wiki surveys can yield insights that would be difficult to obtain with other methods.
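The data structure of a pairwise wiki survey is just a stream of (winner, loser) votes. The sketch below scores ideas by their simple win rate; the actual www.allourideas.org estimator is more sophisticated, so treat this only as an illustration of the input format.

```python
# Minimal sketch: score ideas in a pairwise wiki survey by win rate.
from collections import Counter

votes = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]

wins, appearances = Counter(), Counter()
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

for idea in sorted(appearances, key=lambda i: -wins[i] / appearances[i]):
    print(f"{idea}: {100 * wins[idea] / appearances[idea]:.0f}% win rate")
```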

17.
ABSTRACT: Out of precaution, opportunism, and a general tendency towards thoroughness, researchers studying wildlife often collect multiple, sometimes highly correlated measurements or samples. Although such redundancy has its benefits in terms of quality control, increased resolution, and unforeseen future utility, it also comes at a cost if animal welfare (e.g., duration of handling) or time and resource limitation are a concern. Using principal components analysis and bootstrapping, we analyzed sets of morphometric measurements collected on 171 brown bears in Sweden during a long-term monitoring study (1984–2006). We show that of 11 measurements, 7 were so similar in terms of their predictive power for an overall size index that each individual measurement provided little additional information. We argue that when multiple research objectives or data collection goals compete for a limited amount of time or resources, it is advisable to critically evaluate the amount of additional information contributed by extra measurements. We recommend that wildlife researchers look critically at the data they collect not just in terms of quality but also in terms of need.
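The measurement-redundancy analysis can be pictured with a small PCA-plus-bootstrap sketch on simulated morphometrics: if several measurements load on PC1 with similar, stable weights, each adds little information to an overall size index. The data-generating assumptions below are ours, not the study's.

```python
# Hedged sketch: bootstrap stability of PC1 loadings for correlated measurements.
import numpy as np

rng = np.random.default_rng(3)
n_bears, n_measures = 171, 11
size = rng.normal(size=n_bears)                       # latent overall body size
X = size[:, None] * rng.uniform(0.5, 1.5, n_measures) \
    + 0.3 * rng.normal(size=(n_bears, n_measures))    # correlated measurements

def pc1_loadings(data):
    data = (data - data.mean(0)) / data.std(0)        # standardize columns
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    v = vt[0]
    return v if v.sum() > 0 else -v                   # fix sign for comparability

boot = np.array([pc1_loadings(X[rng.integers(0, n_bears, n_bears)])
                 for _ in range(500)])
print("PC1 loading mean per measurement:", np.round(boot.mean(0), 2))
print("PC1 loading sd per measurement:  ", np.round(boot.std(0), 2))
```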

18.
The plant phloem is essential for the long-distance transport of (photo-) assimilates as well as of signals conveying biotic or abiotic stress. It contains sugars, amino acids, proteins, RNA, lipids and other metabolites. While there is a large interest in understanding the composition and function of the phloem, the role of many of these molecules and thus, their importance in plant development and stress response has yet to be determined. One barrier to phloem analysis lies in the fact that the phloem seals itself upon wounding. As a result, the number of plants from which phloem sap can be obtained is limited. One method that allows collection of phloem exudates from several plant species without added equipment is the EDTA-facilitated phloem exudate collection described here. While it is easy to use, it does lead to the wounding of cells and care has to be taken to remove contents of damaged cells. In addition, several controls to prove purity of the exudate are necessary. Because it is an exudation rather than a direct collection of the phloem sap (not possible in many species) only relative quantification of its contents can occur. The advantage of this method over others is that it can be used in many herbaceous or woody plant species (Perilla, Arabidopsis, poplar, etc.) and requires minimal equipment and training. It leads to reasonably large amounts of exudates that can be used for subsequent analysis of proteins, sugars, lipids, RNA, viruses and metabolites. It is simple enough that it can be used in both a research as well as in a teaching laboratory.

19.
20.
The use of species data versus environmental surrogates used in lieu of species data in systematic reserve site selection is still highly debated. In a case study, we analyse whether and how the results of reserve network selection are affected by the use of species data versus habitat surrogates (habitat models), for both qualitative (presence/absence) and quantitative (population size/habitat quality) information. In a model region, the post-mining landscape south of Leipzig, Germany, we used iterative algorithms to select a network for 29 animal target species from a basic set of 127 sites. The network results differ markedly between the two information types: depending on the representation goal, 18–45% of the sites selected in response to one information type do not appear in the results for the other. Given the availability of quantitative and hence deeper information, evaluation rules can be used to filter out the best habitats and the largest populations. In our model study, the selected areas were 0–40% less suitable when only qualitative data were used instead of quantitative details. In view of the various advantages and limitations of the two information types, we propose improving the methodological approach to selecting networks for animal species by combining different information types.
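Iterative reserve selection is typically a greedy complementarity algorithm: repeatedly add the site that covers the most unrepresented target species. The sketch below assumes a simple presence/absence representation goal; the sites and species are made up.

```python
# Greedy complementarity sketch for reserve network selection.
sites = {
    "s1": {"lynx", "otter"},
    "s2": {"otter", "crane", "newt"},
    "s3": {"lynx", "newt"},
    "s4": {"crane"},
}
targets = {"lynx", "otter", "crane", "newt"}

selected, uncovered = [], set(targets)
while uncovered:
    # Pick the site covering the most not-yet-represented targets.
    best = max(sites, key=lambda s: len(sites[s] & uncovered))
    if not sites[best] & uncovered:
        break                       # remaining targets cannot be covered
    selected.append(best)
    uncovered -= sites[best]
print("selected network:", selected)
```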
