Similar Documents
20 similar documents retrieved.
1.
The benefits to medical practitioners of using the Internet are growing rapidly as the Internet becomes easier to use and ever more biomedical resources become available on line. The Internet is the largest computer network in the world; it is also a virtual community, larger than many nation states, with its own rules of behaviour or "netiquette." There are several types of Internet connection and various ways of acquiring a connection. Once connected, you can obtain, free of charge, programs that allow easy use of the Internet's resources and help on how to use these resources; you can access many of these resources through the hypertext references in the on line version of this series (go to http://www.bmj.com/bmj/ to reach the electronic version). You can then explore the various methods for accessing, manipulating, or disseminating data on the Internet, such as electronic mail, telnet, file transfer protocol, and the world wide web. Results from a search of the world wide web for information on the rare condition of Recklinghausen's neurofibromatosis illustrate the breadth of medical information available on the Internet.

2.
OBJECTIVE: To assess the reliability of healthcare information on the world wide web and therefore how it may help lay people cope with common health problems. METHODS: Systematic search by means of two search engines, Yahoo and Excite, of parent oriented web pages relating to home management of feverish children. Reliability of information on the web sites was checked by comparison with published guidelines. MAIN OUTCOME MEASURES: Minimum temperature of child that should be considered as fever, optimal sites for measuring temperature, pharmacological and physical treatment of fever, conditions that may warrant a doctor's visit. RESULTS: 41 web pages were retrieved and considered. 28 web pages gave a temperature above which a child is feverish; 26 pages indicated the optimal site for taking temperature, most recommending rectal measurement; 31 of the 34 pages that mentioned drug treatment recommended paracetamol as an antipyretic; 38 pages recommended non-drug measures, most commonly tepid sponging, dressing lightly, and increasing fluid intake; and 36 pages gave some indication of when a doctor should be called. Only four web pages adhered closely to the main recommendations in the guidelines. The largest deviations were in sponging procedures and how to take a child's temperature, whereas there was general agreement on the use of paracetamol. CONCLUSIONS: Only a few web sites provided complete and accurate information for this common and widely discussed condition. This suggests an urgent need to check public oriented healthcare information on the internet for accuracy, completeness, and consistency.

3.

Background

The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for the promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this end, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks regarding preconception health.

Methods

Based on the American College of Obstetricians and Gynecologists guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics.
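
To make the monitoring step concrete, the following is a minimal sketch of the kind of keyword-based analysis described above: logged search queries are matched against topic keyword lists and per-topic search volume is aggregated. The topic names, keywords, and log format are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Hedged sketch: match logged search queries against topic keyword lists and
# aggregate per-topic search volume. All names and data below are illustrative.
from collections import defaultdict

TOPIC_KEYWORDS = {                      # assumed topic -> keyword mapping
    "Folic Acid": ["folic acid", "folate"],
    "Vaccinations": ["rubella vaccine", "varicella vaccine"],
    "Nutrition": ["pre-pregnancy diet", "weight before pregnancy"],
}

def topics_for_query(query: str) -> list[str]:
    """Return every topic whose keywords occur in the query text."""
    q = query.lower()
    return [t for t, kws in TOPIC_KEYWORDS.items() if any(k in q for k in kws)]

def search_volume_by_topic(query_log):
    """query_log: iterable of (query text, number of searches) pairs."""
    totals = defaultdict(int)
    for query, volume in query_log:
        for topic in topics_for_query(query):
            totals[topic] += volume
    return dict(totals)

if __name__ == "__main__":
    log = [("folic acid before pregnancy", 1200), ("rubella vaccine for adults", 300)]
    print(search_volume_by_topic(log))   # {'Folic Acid': 1200, 'Vaccinations': 300}
```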

Results

We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. For Genetic Risk and Folic Acid, a high search volume was not associated with high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect varied considerably over time.

Conclusion

Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.

4.
5.
We describe a web server, which provides easy access to the SLoop database of loop conformations connecting elements of protein secondary structure. The loops are classified according to their length, the type of bounding secondary structures and the conformation of the mainchain. The current release of the database consists of over 8000 loops of up to 20 residues in length. A loop prediction method, which selects conformers on the basis of the sequence and the positions of the elements of secondary structure, is also implemented. These web pages are freely accessible over the internet at http://www-cryst.bioc.cam.ac.uk/~sloop.

6.
PHProteomicDB is a PHP-written module that helps researchers in proteomics share two-dimensional gel electrophoresis data on personal web sites. No technical or PHP knowledge is necessary except a few basics of web site management. PHProteomicDB has a user-friendly administration interface for entering and updating data. It creates web pages on the fly displaying gel characteristics, gel pictures, and numbered gel spots with their related identifications pointing to their reference pages in protein databanks. The module is freely available at http://www.huvec.com/index.php3?rub=Download.

7.
INCLUSive is a suite of algorithms and tools for the analysis of gene expression data and the discovery of cis-regulatory sequence elements. The tools allow normalization, filtering and clustering of microarray data, functional scoring of gene clusters, sequence retrieval, and detection of known and unknown regulatory elements using probabilistic sequence models and Gibbs sampling. All tools are available via different web pages and as web services. The web pages are connected and integrated to reflect a methodology and facilitate complex analysis using different tools. The web services can be invoked using standard SOAP messaging. Example clients are available for download to invoke the services from a remote computer or to be integrated with other applications. All services are catalogued and described in a web service registry. The INCLUSive web portal is available for academic purposes at http://www.esat.kuleuven.ac.be/inclusive.
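
The abstract notes only that the services accept standard SOAP messages. As a purely illustrative sketch, a SOAP call from Python might look like the following; the WSDL URL, operation name, and parameters are placeholders, not the actual INCLUSive interface, which is documented in the project's service registry.

```python
# Hedged sketch of invoking a SOAP web service with the zeep library.
# WSDL URL, operation name, and parameters are placeholders, not the real
# INCLUSive service definition.
from zeep import Client

WSDL_URL = "http://example.org/inclusive/MotifSampler?wsdl"  # placeholder

client = Client(WSDL_URL)            # parses the WSDL and builds operation proxies
# Every operation declared in the WSDL becomes a method on client.service.
result = client.service.run(sequences=">gene1\nACGTACGT", background="plant")
print(result)
```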

8.
9.
MOTIVATION: Phylogenetic trees are omnipresent in evolutionary biology, and the comparison of trees plays a central role there. Tree congruence statistics are based on the null hypothesis that two given trees are not more congruent (topologically similar) than expected by chance. Usually, one searches for the most parsimonious evolutionary scenario relating two trees and then tests the null hypothesis by generating a large number of random trees and comparing the scenarios inferred between these random trees to the one inferred between the observed trees. However, this approach requires a lot of computational work (human and machine), and the results depend on the evolutionary assumptions made. RESULTS: We propose an index, I(cong), for testing the topological congruence between trees with any number of leaves, based on maximum agreement subtrees (MAST). This index is straightforward, simple to use, does not rely on parametrizing the likelihood of evolutionary events, and provides an associated confidence level. AVAILABILITY: A web site allowing rapid and easy online computation of this index and of the associated P-value is available at http://www.ese.u-psud.fr/bases/upresa/pages/devienne/index.html

10.
The number of Tibetan web pages is growing rapidly, so it is worthwhile to apply information processing technology to extract useful knowledge from Tibetan web content. A Tibetan semantic ontology can enrich Tibetan digital resources and helps to improve information processing performance. In this paper, the semantic classification of a Tibetan network corpus is studied. First, Tibetan web pages are collected. Second, preprocessing is conducted to extract the useful information from the web pages. Third, word segmentation and text representation are introduced. Finally, a text similarity classification algorithm is proposed to classify the text. In the experiments, semantic classification is compared with non-semantic classification, and the results show that the performance of semantic classification is clearly superior. This means that making full use of ontology semantic relationships can greatly enhance classification accuracy. The research is useful and helpful to the study of Tibetan semantic information processing.
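
As an illustration of the similarity-based classification step, the sketch below assigns a document the label of its nearest training document under TF-IDF cosine similarity. It uses scikit-learn on whitespace-tokenised toy text; the system described above would instead plug in a Tibetan word segmenter and ontology-derived semantic features.

```python
# Hedged sketch of 1-nearest-neighbour text classification by cosine similarity
# over TF-IDF vectors; the corpus, labels, and tokenisation are toy stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_docs = ["market price trade export", "school student teacher exam",
              "river mountain snow valley"]          # assumed, pre-segmented text
train_labels = ["economy", "education", "geography"]

vectorizer = TfidfVectorizer()
train_vecs = vectorizer.fit_transform(train_docs)

def classify(doc: str) -> str:
    """Label a new (pre-segmented) document by its most similar training document."""
    sims = cosine_similarity(vectorizer.transform([doc]), train_vecs)[0]
    return train_labels[sims.argmax()]

print(classify("student school teacher"))   # -> "education"
```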

11.
12.
13.
14.
The HUGO Gene Nomenclature Committee (HGNC) assigns approved gene symbols to human loci. There are currently over 33,000 approved gene symbols, the majority of which represent protein-coding genes, but we also name other locus types such as non-coding RNAs, pseudogenes and phenotypic loci. Where relevant, the HGNC organise these genes into gene families and groups. The HGNC website http://www.genenames.org/ is an online repository of HGNC-approved gene nomenclature and associated resources for human genes, and includes links to genomic, proteomic and phenotypic information. In addition to this, we also have dedicated gene family web pages and are currently expanding and generating more of these pages using data curated by the HGNC and from information derived from external resources that focus on particular gene families. Here, we review our current online resources with a particular focus on our gene family data, using it to highlight our new Gene Symbol Report and gene family data downloads.
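
For programmatic access, HGNC data can also be retrieved over HTTP. The sketch below assumes the public REST service at rest.genenames.org and its JSON fetch-by-symbol endpoint; the endpoint and field names should be checked against the current genenames.org documentation before use.

```python
# Hedged sketch: fetch an HGNC gene symbol report as JSON via the assumed
# rest.genenames.org service; endpoint and field names may differ from the
# current documentation.
import requests

def fetch_symbol_report(symbol: str) -> dict:
    url = f"https://rest.genenames.org/fetch/symbol/{symbol}"
    resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
    resp.raise_for_status()
    return resp.json()

report = fetch_symbol_report("BRCA2")
for doc in report.get("response", {}).get("docs", []):
    print(doc.get("symbol"), doc.get("name"))
```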

15.
The SSFA-GPHR (Sequence-Structure-Function-Analysis of Glycoprotein Hormone Receptors) database provides a comprehensive set of mutation data for the glycoprotein hormone receptors (covering the lutropin, the FSH, and the TSH receptors). Moreover, it provides a platform for comparison and investigation of these homologous receptors and helps in understanding protein malfunctions associated with several diseases. Besides extending the data set (> 1100 mutations), the database has been completely redesigned and several novel features and analysis tools have been added to the web site. These tools allow the focused extraction of semiquantitative mutant data from the GPHR subtypes and different experimental approaches. Functional and structural data of the GPHRs are now linked interactively at the web interface, and new tools for data visualization (on three-dimensional protein structures) are provided. The interpretation of functional findings is supported by receptor morphings simulating intramolecular changes during the activation process, which thus help to trace the potential function of each amino acid and provide clues to the local structural environment, including potentially relocated spatial counterpart residues. Furthermore, double and triple mutations are newly included to allow the analysis of their functional effects related to their spatial interrelationship in structures or homology models. A new important feature is the search option and data visualization by interactive and user-defined snake-plots. These new tools allow fast and easy searches for specific functional data and thereby give deeper insights into the mechanisms of hormone binding, signal transduction, and signaling regulation. The web application "Sequence-Structure-Function-Analysis of GPHRs" is accessible on the internet at http://www.ssfa-gphr.de/.

16.
The Mouse Tumor Biology (MTB) Database serves as a curated, integrated resource for information about tumor genetics and pathology in genetically defined strains of mice (i.e., inbred, transgenic and targeted mutation strains). Sources of information for the database include the published scientific literature and direct data submissions by the scientific community. Researchers access MTB using Web-based query forms and can use the database to answer such questions as 'What tumors have been reported in transgenic mice created on a C57BL/6J background?', 'What tumors in mice are associated with mutations in the Trp53 gene?' and 'What pathology images are available for tumors of the mammary gland regardless of genetic background?'. MTB has been available on the Web since 1998 from the Mouse Genome Informatics web site (http://www.informatics.jax.org). We have recently implemented a number of enhancements to MTB including new query options, redesigned query forms and results pages for pathology and genetic data, and the addition of an electronic data submission and annotation tool for pathology data.

17.
In this work we focus on reducing response time and bandwidth requirements for a high performance web server. Much research has been done to improve web server performance by modifying the web server architecture. In contrast to these approaches, we take a different point of view and consider web server performance from the OS perspective rather than from the web server architecture itself. To achieve this we explore two different approaches. The first is running the web server within the OS kernel. We use kHTTPd as the basis for our implementation, but it has several drawbacks, such as redundant data copying, synchronous writes, and the ability to serve only static data. We propose techniques to address these flaws. The second approach is caching dynamic data. Dynamic data can seriously reduce the performance of web servers; it has been considered difficult to cache because it often changes far more frequently than static pages and because the web server needs to access a database to serve it. To this end, we propose a solution for higher performance web service by caching dynamic data using content separation between its static and dynamic portions. Benchmark results using WebStone show that our architecture can improve server performance by up to 18 percent and can reduce users' perceived latency significantly.
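
The content-separation idea can be illustrated with a small sketch: the static skeleton of a page is kept as-is, while the dynamic fragment is cached for a short time-to-live keyed by the request parameter. The names, TTL, and fake data source below are illustrative; the paper's actual implementation operates inside the OS kernel, not in application code.

```python
# Hedged sketch of caching the dynamic portion of a page separately from its
# static skeleton; all names, TTLs, and the fake data source are illustrative.
import time

STATIC_TEMPLATE = "<html><body><h1>Quotes</h1>{dynamic}</body></html>"  # never changes

_fragment_cache: dict = {}      # request key -> (timestamp, rendered fragment)
FRAGMENT_TTL = 2.0              # seconds before a dynamic fragment is regenerated

def render_dynamic_fragment(symbol: str) -> str:
    # Stand-in for the expensive database lookup that makes dynamic pages slow.
    return f"<p>{symbol}: {int(time.time()) % 1000}</p>"

def get_page(symbol: str) -> str:
    now = time.time()
    hit = _fragment_cache.get(symbol)
    if hit is None or now - hit[0] > FRAGMENT_TTL:
        _fragment_cache[symbol] = (now, render_dynamic_fragment(symbol))
    return STATIC_TEMPLATE.format(dynamic=_fragment_cache[symbol][1])

print(get_page("ACME"))
```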

18.
19.
Over the years, we have seen a significant number of integration techniques for data warehouses to support web integrated data. However, the existing works focus extensively on the design concept. In this paper, we focus on the performance of a web database application, namely an integrated web data warehouse that uses a well-defined and uniform structure to deal with web information sources, including semi-structured data such as XML and documents such as HTML, in a web data warehouse system. Through a case study, our prototype implements a web manipulation concept for both incoming sources and result outputs. Thus, the system not only can be operated through the web, it can also handle the integration of web data sources and structured data sources. Our main contribution is the performance evaluation of an integrated web data warehouse application, which includes two tasks. Task one verifies the correctness of the integrated data by comparing the result sets retrieved from the web integrated data warehouse system, using complex and OLAP queries, against the result sets retrieved from the existing independent data source systems. Task two measures the performance of OLAP and complex queries by investigating the source operation functions these queries use to retrieve the data. The information on the source operation functions used by each query is obtained using the TKPROF utility.

20.

Background

Estimates of the sensitivity and specificity for new diagnostic tests based on evaluation against a known gold standard are imprecise when the accuracy of the gold standard is imperfect. Bayesian latent class models (LCMs) can be helpful under these circumstances, but the necessary analysis requires expertise in computational programming. Here, we describe open-access web-based applications that allow non-experts to apply Bayesian LCMs to their own data sets via a user-friendly interface.

Methods/Principal Findings

Applications for Bayesian LCMs were constructed on a web server using R and WinBUGS programs. The models provided (http://mice.tropmedres.ac) include two Bayesian LCMs: the two-tests in two-population model (Hui and Walter model) and the three-tests in one-population model (Walter and Irwig model). Both models are available with simplified and advanced interfaces. In the former, all settings for Bayesian statistics are fixed as defaults. Users input their data set into a table provided on the webpage. Disease prevalence and accuracy of diagnostic tests are then estimated using the Bayesian LCM, and provided on the web page within a few minutes. With the advanced interfaces, experienced researchers can modify all settings in the models as needed. These settings include correlation among diagnostic test results and prior distributions for all unknown parameters. The web pages provide worked examples with both models using the original data sets presented by Hui and Walter in 1980, and by Walter and Irwig in 1988. We also illustrate the utility of the advanced interface using the Walter and Irwig model on a data set from a recent melioidosis study. The results obtained from the web-based applications were comparable to those published previously.
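
For orientation, the two-tests in two-population model can be written in its standard conditional-independence form (notation added here, not taken from the web application itself). For test results $y_1, y_2 \in \{0,1\}$ in population $i$ with prevalence $\pi_i$, and sensitivity $Se_j$ and specificity $Sp_j$ of test $j$,

$$P(y_1, y_2 \mid i) \;=\; \pi_i \prod_{j=1}^{2} Se_j^{\,y_j} (1 - Se_j)^{1-y_j} \;+\; (1-\pi_i) \prod_{j=1}^{2} (1 - Sp_j)^{\,y_j} Sp_j^{\,1-y_j}.$$

With two populations the observed two-by-two tables contribute six degrees of freedom, matching the six unknown parameters ($\pi_1, \pi_2, Se_1, Se_2, Sp_1, Sp_2$), which is why the model can be estimated without a gold standard.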

Conclusions

The newly developed web-based applications are open-access and provide an important new resource for researchers worldwide to evaluate new diagnostic tests.
