Similar Articles
20 similar articles found (search time: 15 ms)
2.
Artificial intelligence-guided analysis of cytologic data (cited by 1: 0 self-citations, 1 external)
A design is presented for the integration of artificial intelligence (AI) technology with large databases of clinical and objective cytologic data, such as those on file at the University of Chicago. Key features of this approach are the use of a knowledge representation structure based upon an associative network, the use of a Bayesian belief network as a method of managing uncertainty in the system, and the use of neural networks and unsupervised learning algorithms as a means of discovering patterns within the database. Given the complexity and interdependence of these data, such an automated approach is necessary to gain an understanding of their dependence structure and to assist in their exploration and analysis.
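To make the uncertainty-management step concrete, here is a minimal sketch of the elementary operation a Bayesian belief network repeats along its edges: updating the probability of a disorder from one cytologic finding. All probabilities are invented placeholders, not values from the Chicago database.

```python
# Minimal Bayes update: posterior probability of a disorder given one
# binary cytologic finding. All numbers are illustrative placeholders.

def posterior(prior, sensitivity, specificity, finding_present=True):
    """P(disease | finding) via Bayes' rule."""
    if finding_present:
        p_f_d = sensitivity               # P(finding | disease)
        p_f_nd = 1.0 - specificity        # P(finding | no disease)
    else:
        p_f_d = 1.0 - sensitivity
        p_f_nd = specificity
    evidence = p_f_d * prior + p_f_nd * (1.0 - prior)   # P(finding)
    return p_f_d * prior / evidence

# A rare disorder (2% prevalence) with a fairly sensitive, specific finding:
print(posterior(prior=0.02, sensitivity=0.90, specificity=0.95))  # ~0.27
```

A full belief network chains many such updates over interdependent findings, which is why automated support becomes necessary at the scale of the database described above.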

3.
In reasoning systems, uncertainty plays a crucial part, especially in fields where judgement is essential, as in pathology. Uncertainty has several aspects, such as the prevalence of diseases, the occurrence of findings, and the sensitivity and predictive value of findings. For the functioning of a reasoning system, two aspects are crucial: (1) the internal representation of uncertainty and (2) the way uncertainty is propagated through the reasoning process when formal statements are combined. Five well-known reasoning strategies (Bayesian probability theory, MYCIN's certainty-factor model, fuzzy set theory, Dempster-Shafer theory and Pathfinder's scoring mechanism) are compared, with particular attention to: (1) Under what conditions will the model function, and in particular, what information must be specified to the system a priori? (2) Can the different aspects of uncertainty be dealt with as separate entities? (3) How are unknown uncertainties dealt with? (4) How is evidence in favor of a hypothesis combined with evidence against it? (5) How does the model treat the simultaneous occurrence of more than one disorder, that is, how does it support reasoning with compound hypotheses? The preliminary conclusion is that the different aspects of uncertainty are expressed as separate entities only in Pathfinder and in probability theory; the other models therefore do not accurately represent uncertain knowledge. Moreover, theoretically attractive models such as Bayesian theory, MYCIN and Dempster-Shafer theory function properly only under the strict condition of mutually exclusive hypotheses, which is not always satisfied in broader areas of pathology. They may, however, be suited to smaller areas, with a limited number of defined diseases and a limited number of features. All models but the Bayesian one lack predictable performance, since there is no (or only a partial) underlying theory to guarantee minimization of the overall error.
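As a concrete instance of question (4), the sketch below implements MYCIN's parallel-combination rule for certainty factors, which merges evidence for and against the same hypothesis into a single score; the example values are invented.

```python
# MYCIN-style parallel combination of two certainty factors in [-1, 1].
# Positive values support a hypothesis; negative values disconfirm it.
# (The degenerate case x = -y = 1 is ignored in this sketch.)

def combine_cf(x: float, y: float) -> float:
    if x >= 0 and y >= 0:
        return x + y - x * y                      # both supporting
    if x < 0 and y < 0:
        return x + y + x * y                      # both disconfirming
    return (x + y) / (1 - min(abs(x), abs(y)))    # mixed evidence

# Supporting evidence (0.6) partly offset by disconfirming evidence (-0.4):
print(combine_cf(0.6, -0.4))  # 0.333..., i.e., net weak support
```

Note that the result is a single scalar: as the comparison above observes, the separate aspects of uncertainty (prevalence, occurrence, sensitivity, predictive value) are no longer recoverable from it.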

4.
Environmental status assessment and monitoring can be performed by integrating multi-source datasets at continental and global scales. We propose a methodology for developing a new anomaly indicator (AI) that summarizes, in a single composite measure, the occurrence of anomalous conditions found by analysing a set of spatial input data. Anomalous conditions are defined relative to the long-term average, taken as the normal or reference status of the vegetated land surface. The indicator is defined using fuzzy set theory, a powerful means of handling uncertain and imprecise knowledge of environmental systems. It integrates, in an innovative way, the anomaly scores of a set of contributing factors extracted from the analysis of historical time series, mainly of Earth observation data. These time series are used to automatically derive the fuzzy membership functions that quantify the contribution of each factor to the final indicator. No reference data or expert knowledge are strictly required to implement the AI, although the methodology allows customization where such information is available. The method was tested over the African continent for the period 1996–2002; monthly AI values were derived from input datasets of vegetation phenology and rainfall estimates. The output continental AI maps bring new information by integrating multiple factors, and they highlight patterns of anomalous environmental conditions. Analysis of the correlation with the El Niño Southern Oscillation (ENSO) shows that the AI can identify the effects of this phenomenon and its spatio-temporal dynamics: the 1997–1998 and 2000–2001 ENSO events are clearly highlighted by the highest AI values in specific regions of the continent. The proposed indicator is a valuable tool that can help guide in-depth, detailed investigations of environmental conditions at the local scale.
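A minimal sketch of the core mechanism, with invented data and a simple percentile-based membership function standing in for the ones the paper derives automatically from historical time series:

```python
import numpy as np

def fuzzy_anomaly(history: np.ndarray, value: float) -> float:
    """Membership of `value` in 'anomalous', derived from the factor's
    history: 0 inside the central (normal) band, rising linearly to 1
    at the historical extremes."""
    lo, p25, p75, hi = np.percentile(history, [2, 25, 75, 98])
    if p25 <= value <= p75:
        return 0.0
    if value < p25:
        return float(min(1.0, (p25 - value) / max(p25 - lo, 1e-9)))
    return float(min(1.0, (value - p75) / max(hi - p75, 1e-9)))

rng = np.random.default_rng(0)
ndvi_history = rng.normal(0.50, 0.05, 200)   # fake vegetation-index history
rain_history = rng.normal(80.0, 15.0, 200)   # fake monthly rainfall history

# Aggregate per-factor anomaly scores into one indicator (a plain mean here;
# the paper's aggregation of contributing factors is more elaborate).
ai = np.mean([fuzzy_anomaly(ndvi_history, 0.38),
              fuzzy_anomaly(rain_history, 45.0)])
print(round(float(ai), 2))
```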

6.
Points of View: Combining Local and Scientific Knowledge (cited by 1: 0 self-citations, 1 external)
A change in attitude is urgently required to provide credibility to, and to devise methods for, combining and utilizing non-scientific information (local knowledge) together with more typical scientific data. In the midst of vast uncertainty about fish stocks, the climate is right for this change in attitude. Expert systems offer one tool for combining different sources of information in a meaningful way. We believe that, through the simple communication required to gather knowledge for an expert system, the development of mutual respect will foster cooperation and responsibility among resource users, scientists and managers, thus providing the basis for improved and more responsible management.
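As a purely illustrative toy, the rule below shows the kind of knowledge combination such an expert system could encode; the thresholds, labels, and rules are all invented for this sketch.

```python
# Toy combination of a scientific survey index with local knowledge.
# Thresholds and rule text are invented; a real system would elicit both
# from scientists and resource users.

def stock_status(survey_index: float, local_report: str) -> str:
    """survey_index: normalized 0-1; local_report: 'scarce'|'normal'|'plentiful'."""
    if survey_index < 0.3 and local_report == "scarce":
        return "depleted: both sources agree"
    if survey_index < 0.3 or local_report == "scarce":
        return "possibly depleted: sources disagree, gather more information"
    if survey_index > 0.7 and local_report == "plentiful":
        return "healthy: both sources agree"
    return "uncertain: maintain precautionary limits"

print(stock_status(0.25, "plentiful"))  # disagreement triggers follow-up
```

Even in this toy form, the disagreement branch makes explicit where scientists and resource users need to talk, which is the cooperative dynamic the authors hope to foster.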

7.
An urgent current problem is the application of artificial intelligence (AI) methods as an integrated way of controlling and optimizing technological processes. The paper presents the main principles of a stagewise application of AI in biotechnology, in which knowledge is extracted from data by decomposing the feature space into clusters.
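The first stage, decomposing the feature space into clusters, can be sketched with any standard clustering algorithm; the process variables and data below are synthetic inventions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Fake bioprocess measurements; columns could be temperature, pH and
# substrate concentration (all values synthetic).
X = np.vstack([
    rng.normal([30.0, 5.0, 12.0], 0.5, size=(40, 3)),   # operating regime A
    rng.normal([34.0, 6.5, 4.0], 0.5, size=(40, 3)),    # operating regime B
])

# Stage 1: decompose the feature space into clusters (candidate regimes).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # size of each discovered regime

# A later knowledge-extraction stage would attach rules to each cluster,
# e.g. "regime B: high pH, low substrate -> adjust the feed rate".
```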

8.
Artificial intelligence (AI) has recently become a very popular buzzword as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, such as radiology, pathology and oncology, have seized the opportunity, and considerable research and development effort has been deployed to transfer the potential of AI to clinical applications. With AI becoming a mainstream tool for typical medical image analysis tasks such as diagnosis, segmentation and classification, the key to safe and efficient use of clinical AI applications lies, in part, with informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow, and will pave the way for the clinical implementation of AI-based solutions.

9.
Angiotensin I (AI) and angiotensin II/III (AII/III) were detected by radioimmunoassay in homogenates of isolated liver granulomas from mice infected for 8 wk with Schistosoma mansoni. Angiotensin I-converting enzyme (ACE) activity, which could be completely inhibited by captopril, a specific ACE inhibitor, was also present, as determined by radioassay. Spontaneous angiotensin I-generating activity was detected in homogenates supplemented with angiotensinogen (protein renin substrate). This activity was partly inhibited by pepstatin, an acid protease inhibitor, indicating the presence of angiotensinogenase(s). Trypsinization of homogenates generated some AI, suggesting that the homogenates contained AI precursor. Treatment of infected mice with MK421, another specific ACE inhibitor, decreased granuloma ACE activity, AII content, and granuloma size. AII, and to a lesser extent AIII, inhibited mouse peritoneal macrophage migration in an in vitro assay. These data support the contention that components of the angiotensin system are present in the granuloma and may function in regulating the inflammation.

10.
Modeling has become an indispensable tool for scientific research, yet models generate great uncertainty when they are used to predict or forecast ecosystem responses to global change. This uncertainty is partly due to parameterization, the procedure by which parameter values are defined to specify a model. The classic doctrine of parameterization is that a parameter is constant. It is commonly known from modeling practice, however, that a model well calibrated at one site may not simulate well at another site unless its parameters are tuned again; this common practice implies that parameter values have to vary with site. Indeed, parameter values estimated with a statistically rigorous approach, that is, data assimilation, vary with time, space, and treatments in global change experiments. This paper illustrates that varying parameters accounts both for processes at unresolved scales and for the changing properties of evolving systems. A model, no matter how complex, cannot represent all the processes of a system at resolved scales, so interactions of processes at unresolved scales with those at resolved scales must be reflected in model parameters. Meanwhile, it is pervasively observed that the properties of ecosystems change over time, space, and environmental conditions; parameters, which represent the properties of the system under study, should change as well. Tuning has been practiced for many decades to change parameter values, yet this activity has unfortunately contributed little to our knowledge of model parameterization. Data assimilation makes it possible to estimate parameter values rigorously and, consequently, offers an approach to understanding which parameters vary, how, how much, and why. Fully understanding these issues will require extensive research. Nonetheless, it is clear that changes in parameter values lead to different model predictions even when the model structure is the same.
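The 'same model, different sites' point can be seen in miniature by fitting a one-parameter model to synthetic observations from two sites; the model, data, and rates below are invented, and full data assimilation would additionally propagate parameter uncertainty rather than return point estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, k):
    """One-pool decay: fraction of initial stock remaining after time t."""
    return np.exp(-k * t)

t = np.linspace(0, 10, 30)
rng = np.random.default_rng(2)

# Synthetic observations from two sites whose true decay rates differ.
obs_a = model(t, 0.20) + rng.normal(0, 0.02, t.size)
obs_b = model(t, 0.45) + rng.normal(0, 0.02, t.size)

(k_a,), _ = curve_fit(model, t, obs_a, p0=[0.1])
(k_b,), _ = curve_fit(model, t, obs_b, p0=[0.1])
print(k_a, k_b)  # a model calibrated at site A mispredicts site B
```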

11.
The impact of artificial intelligence (AI) on the environment is the subject of ongoing discourse, with arguments for both positive and negative effects, and there is a fine line between AI for good and AI for environmental degradation. Today, companies want to seize the benefits of AI, which notably include reducing the company's carbon footprint; however, AI's own carbon emissions differ according to the techniques used to train it. As the saying goes, a coin always has two sides: AI can be an effective tool for combating climate change, but its role in contributing to carbon emissions cannot be ignored. Multiple studies indicate that AI could be a game-changer in staving off anthropogenic climate change driven by environmental degradation and global warming. This double-edged relationship and the interdependency of AI and carbon emissions are represented here through a system-of-systems (SoS) approach. In an SoS, a larger system is composed of multiple smaller systems whose interactions add complexity to the design, and vice versa. The world in general can be treated as such a complex system, in which two independent systems, AI and carbon emissions, interact to create a relationship that is both complementary and contradictory, adding to the complexity of the whole. This connection is demonstrated by conducting a network analysis and by calculating the carbon emissions of six machine learning (ML) algorithms and deep learning (DL) models, trained on different datasets with the same hyperparameters, using a carbon-emission calculator built with AI algorithms. The primary aim of this study is to encourage the AI community to create efficient AI models that can be used without compromising the environment. The focus should be on practicing sustainable AI, that is, sustainability from data collection to model deployment, throughout the AI lifecycle.
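The paper's calculator itself is not reproduced here; the sketch below shows the common first-order accounting such calculators build on (energy from power draw, hours, and data-center overhead; emissions from grid carbon intensity), with every constant illustrative.

```python
# First-order estimate of training emissions:
#   energy (kWh) = power draw x devices x hours x PUE / 1000
#   CO2e  (kg)   = energy x grid carbon intensity
# PUE (power usage effectiveness) and grid intensity are illustrative.

def training_co2e_kg(device_watts: float, hours: float, n_devices: int = 1,
                     pue: float = 1.5, grid_kg_per_kwh: float = 0.4) -> float:
    energy_kwh = device_watts * n_devices * hours * pue / 1000.0
    return energy_kwh * grid_kg_per_kwh

# A DL model trained for 72 h on four 300 W GPUs vs. a small ML model
# trained for 10 min on a 65 W CPU (all numbers invented):
print(training_co2e_kg(300, 72, n_devices=4))   # ~51.8 kg CO2e
print(training_co2e_kg(65, 10 / 60))            # ~0.0065 kg CO2e
```

The roughly four-orders-of-magnitude gap between the two calls is the point: technique choice, not AI per se, drives the footprint.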

12.
The voltage-gated potassium channel Kv10.2 is expressed in the nervous system, but its functions and involvement in the development of human disease remain poorly understood. Mutant forms of the Kv10.2 channel have been found in patients with epileptic encephalopathy and autism. Molecular modeling of the channel's spatial structure is an important tool for understanding the molecular aspects of channel function and the mechanisms responsible for pathogenesis. In the present work, molecular modeling of the helical fragment of the human Kv10.2 (hEAG2) C-terminal domain was performed in dimeric, trimeric, and tetrameric forms. The stability of all forms was confirmed by molecular dynamics simulation, and the contacts and interactions stabilizing the structure were identified.
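The contact-identification step can be sketched as a simple distance cutoff between chains; the random coordinates below are placeholders where the model's real Cα coordinates would go, and the 8 Å cutoff is one conventional choice.

```python
import numpy as np

def interchain_contacts(chain_a: np.ndarray, chain_b: np.ndarray,
                        cutoff: float = 8.0):
    """Residue pairs whose C-alpha atoms lie within `cutoff` angstroms.
    chain_a: (N, 3) coordinates; chain_b: (M, 3) coordinates."""
    diff = chain_a[:, None, :] - chain_b[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.nonzero(dist < cutoff)
    return list(zip(i.tolist(), j.tolist()))

rng = np.random.default_rng(3)
a = rng.uniform(0, 30, size=(50, 3))   # placeholder for one helix of a dimer
b = rng.uniform(0, 30, size=(50, 3))   # placeholder for the partner helix
print(len(interchain_contacts(a, b)))  # number of candidate contact pairs
```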

13.
Watabe T, Kishino H, Okuhara Y, Kitazoe Y. Genetics. 2006;172(3):1385-1396.
The third hypervariable (V3) region of the HIV-1 gp120 protein is responsible for many aspects of viral infectivity. The tertiary structure of the V3 loop appears to influence the coreceptor usage of the virus, an important determinant of HIV pathogenesis; information about the preferred conformations of the V3-loop region and its flexibility could therefore be a crucial tool for understanding the mechanisms of progression from initial infection to AIDS. Taking into account the uncertainty of the loop structure, we predicted the structural flexibility and diversity, and the fitness of each sequence to the V3-loop structure, for sequences serially sampled during an asymptomatic period. Structural diversity correlated with sequence diversity. The predicted crown-structure usage implied that structural flexibility depends on the patient and that the antigenic character of the virus may be almost uniform in a patient whose immune system is strong. Furthermore, the predicted structural ensemble suggested that toward the end of the asymptomatic period there was a change in the V3-loop structure or in the environment surrounding the V3 loop, possibly because of its proximity to the gp120 core.
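The reported correlation rests on diversity measures such as the mean pairwise distance sketched below; the toy V3 fragments are invented stand-ins for the serially sampled sequences, and a matching structural-diversity score per visit would complete the correlation.

```python
from itertools import combinations
from statistics import mean

def mean_pairwise_hamming(seqs):
    """Mean pairwise Hamming distance among equal-length sequences."""
    return mean(sum(a != b for a, b in zip(s, t))
                for s, t in combinations(seqs, 2))

# Toy V3-loop fragments sampled at one visit (invented):
visit = ["CTRPNNNTRKSI", "CTRPNNYTRKSI", "CTRPGNNTRKSI", "CTRPNNNTRKGI"]
print(mean_pairwise_hamming(visit))  # sequence diversity at this time point

# Computing this per visit and correlating it with a structural-diversity
# score (e.g., spread of the predicted conformational ensemble) gives the
# kind of relationship reported above.
```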

14.
This article discusses the prospects and limitations of the scientific basis for offering personalized nutrition advice based upon individual genetic information. Two divergent scientific positions are presented, with an ethical comment. The crucial question is whether the current knowledge base is sufficiently strong to take an ethically responsible decision to offer personalized nutrition advice based upon gene–diet–health interaction. According to the first position, the evidence base for translating the outcomes of nutrigenomics research into personalized nutritional advice is as yet immature, and there is limited evidence that genotype-based dietary advice will motivate appropriate behavior changes; filling the gaps in our knowledge will require larger and better randomized controlled trials. According to the second position, personalized nutrition must be evaluated in relation to generally accepted standard dietary advice, which is itself partly derived from epidemiological observations and usually not proven by clinical trials; we cannot demand stronger evidence of personalized nutrition. In several specific cases of gene–diet interaction, it may be more beneficial for individuals with specific genotypes to follow personalized advice rather than general dietary recommendations. The ethical comment, finally, considers how to proceed responsibly in the face of such uncertainty and proposes two approaches. Arguing from a precautionary approach, it suggests that personalized dietary advice be offered only when there is strong scientific evidence for health effects, followed by stepwise evaluation of unforeseen behavioral and psychological effects. Arguing from theoretical and applied ethics as well as psychology, it also suggests that personalized advice should avoid paternalism and instead focus on supporting each person's autonomous choice.

15.
Gene expression is a dynamic process in which thousands of components interact in complex ways, and a major goal in systems biology and medicine is to reconstruct the network of components from microarray data. Here, we address two key aspects of network reconstruction: (i) ergodicity, which supports interpreting the measured data as time averages, and (ii) confounding. To elucidate these aspects, we explore a dataset of 214 lymphoma patients with translocated or normal MYC genes. MYC (c-Myc) translocations to the immunoglobulin heavy-chain (IGH@) or light-chain (IGK@, IGL@) loci lead to c-Myc overexpression and are widely believed to be the crucial initiating oncogenic events, and there is a rich body of knowledge on the biological implications of the different translocations. In the context of these data, the article examines the relationship between biological knowledge and the results of formal statistical estimates of gene interaction networks, and identifies the key steps for trustworthy biological feature validation: (i) analysing a medium-sized network as a subnet of a more extensive environment to avoid bias from confounding, (ii) using external data to demonstrate the stability and reproducibility of the derived structures, (iii) a systematic literature review on the relevant issue, (iv) using structured knowledge from databases to support the derived findings, and (v) a strategy for biological experiments derived from the findings in steps (i)-(iv).
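A minimal version of the estimation step: inferring edges of a small gene network from partial correlations on synthetic expression data. Gene names other than MYC, the data, and the edge threshold are invented, and caveat (i) above applies to anything estimated this way: a subnet estimated outside its larger environment can be confounded.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200  # samples

# Synthetic expression: G1 driven by MYC, G2 by G1, G3 independent.
myc = rng.normal(size=n)
g1 = 0.8 * myc + rng.normal(scale=0.5, size=n)
g2 = 0.8 * g1 + rng.normal(scale=0.5, size=n)
g3 = rng.normal(size=n)
X = np.column_stack([myc, g1, g2, g3])
genes = ["MYC", "G1", "G2", "G3"]

# Partial correlations from the inverse covariance (precision) matrix.
prec = np.linalg.inv(np.cov(X.T))
d = np.sqrt(np.diag(prec))
pcorr = -prec / np.outer(d, d)

# Report edges above a hand-picked threshold (diagonal excluded).
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        if abs(pcorr[i, j]) > 0.3:
            print(genes[i], "--", genes[j], round(float(pcorr[i, j]), 2))
```

The direct MYC--G2 edge is correctly absent here because partial correlation conditions on G1; with confounders outside the measured subnet, no such guarantee holds.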

16.
Risk assessments inevitably extrapolate from the known to the unknown. The resulting calculation of risk involves two fundamental kinds of uncertainty: uncertainty owing to intrinsically unpredictable (random) components of future events, and uncertainty owing to imperfect prediction formulas (parameter uncertainty and error in model structure) used to predict the component we think is predictable. Both types weigh heavily in both health and ecological risk assessments. Our first responsibility in conducting risk assessments is to ensure that the reported risks correctly reflect our actual level of uncertainty of both types. The statistical methods that lend themselves to correct quantification of the uncertainty are also effective for combining different sources of information, and one way to reduce uncertainty is to use all the available data. To further sharpen future risk assessments, it is useful to partition the uncertainty between the random component and the component due to parameter uncertainty, so that we can quantify the expected reduction in uncertainty achievable by investing in a given amount of future data. An example is developed to illustrate the potential of comparative data, from toxicity testing on other species or other chemicals, to improve estimates of the low-effect concentration in a particular case with sparse case-specific data.
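The proposed partition is the law of total variance: total variance = E[Var(Y|θ)] (the random component) + Var(E[Y|θ]) (the parameter component). A Monte Carlo sketch with an invented one-parameter model:

```python
import numpy as np

rng = np.random.default_rng(5)
n_outer, n_inner = 2000, 200

# Parameter uncertainty: the quantity of interest (say, a low-effect
# concentration) is known only up to a distribution (invented lognormal).
theta = rng.lognormal(mean=1.0, sigma=0.4, size=n_outer)

cond_mean = np.empty(n_outer)
cond_var = np.empty(n_outer)
for k, t in enumerate(theta):
    y = rng.normal(loc=t, scale=1.0, size=n_inner)  # intrinsic randomness
    cond_mean[k] = y.mean()
    cond_var[k] = y.var()

random_part = cond_var.mean()       # E[Var(Y | theta)]: irreducible
parameter_part = cond_mean.var()    # Var(E[Y | theta]): shrinks with data
print(random_part, parameter_part)  # the two parts sum to ~total variance
```

Only the second term shrinks as case-specific or comparative data accumulate, which is exactly the investment calculation the abstract describes.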

20.
This presentation introduces the concept of the CESAR cytology system (CEll Screen and Analysing Routine system), which, unlike conventional systems, also comprehensively integrates human empirical knowledge into the evaluation and interpretation of measurement results. For this purpose, CESAR is equipped with an object-oriented database whose special structure allows graphical presentations of the measurement or classification results (histograms, scatterplots) to be used as directories of a cell-image database. In this way the numerical results can be 'translated' into cell images with a similar information content, which are far better suited for comparison with the empirical knowledge of the human brain.
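The 'graphics as directory' idea can be sketched as a mapping from 2-D scatterplot bins to lists of cell-image identifiers; the features, data, and file names below are invented.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(6)
n_cells = 500
area = rng.normal(50.0, 10.0, n_cells)      # fake nuclear area per cell
density = rng.normal(1.0, 0.2, n_cells)     # fake chromatin density per cell

# Bin every cell into a 10 x 10 scatterplot grid; each bin lists its images.
x_edges = np.linspace(area.min(), area.max(), 11)
y_edges = np.linspace(density.min(), density.max(), 11)
directory = defaultdict(list)
for cell_id, (x, y) in enumerate(zip(area, density)):
    bx = min(int(np.searchsorted(x_edges, x, side="right")) - 1, 9)
    by = min(int(np.searchsorted(y_edges, y, side="right")) - 1, 9)
    directory[(bx, by)].append(f"cell_{cell_id:04d}.png")

# Selecting a scatterplot region 'translates' the numbers back into images.
print(directory[(5, 5)][:3])
```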

