Similar Articles
20 similar articles found.
1.
This paper discusses a number of aspects of using grid computing methods in support of molecular simulations, with examples drawn from the eMinerals project. A number of components for a useful grid infrastructure are discussed, including the integration of compute and data grids, automatic metadata capture from simulation studies, interoperability of data between simulation codes, management of data and data accessibility, management of jobs and workflow, and tools to support collaboration. Use of a grid infrastructure also brings certain challenges, which are discussed: making effective use of effectively boundless computing resources, the changes in working practice that this requires, and the need to manage large numbers of computational experiments.

2.
Distributed systems based on clusters of workstations are increasingly difficult to manage owing to the growing number of processors involved and the complexity of the associated applications. Such systems need efficient and flexible monitoring mechanisms to fulfil the requirements of administration services. In this paper, we present PHOENIX, a distributed platform that supports monitoring of both applications and the operating system with a variable granularity. The granularity is defined using logical expressions that specify complex monitoring conditions, and these conditions can be dynamically modified during application execution. Observation techniques are based on automatic probe insertion combined with a system agent to minimize PHOENIX's execution-time overhead. The platform's extensibility offers a suitable environment for designing distributed value-added services (performance monitoring, load balancing, accounting, cluster management, etc.).
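The abstract does not give PHOENIX's condition syntax or API; as a minimal sketch only, monitoring conditions expressed as logical predicates that can be swapped at runtime might look like the following, where the `Monitor` class, its methods, and the metric names are all illustrative assumptions.

```python
# Hypothetical sketch of granularity control via logical monitoring
# conditions, loosely modelled on the PHOENIX description; names and
# syntax here are illustrative assumptions, not the actual platform API.
from typing import Callable, Dict

Metric = Dict[str, float]
Condition = Callable[[Metric], bool]

class Monitor:
    def __init__(self) -> None:
        self._conditions: Dict[str, Condition] = {}

    def set_condition(self, name: str, cond: Condition) -> None:
        # Conditions can be replaced while the application runs,
        # mirroring the dynamically modifiable granularity above.
        self._conditions[name] = cond

    def probe(self, sample: Metric) -> list[str]:
        # A probe fires only for the conditions that currently hold,
        # so observation granularity follows the logical expressions.
        return [n for n, c in self._conditions.items() if c(sample)]

mon = Monitor()
mon.set_condition("overload", lambda m: m["cpu"] > 0.9 and m["queue"] > 100)
mon.set_condition("io_stall", lambda m: m["iowait"] > 0.5 or m["disk_q"] > 8)
print(mon.probe({"cpu": 0.95, "queue": 150, "iowait": 0.1, "disk_q": 2}))
# -> ['overload']
```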

3.
We formalise and present a new generic multifaceted complex-system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historical data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex-system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex-system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations, such as planning and control.
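The paper's actual conversion scheme is not reproduced in the abstract; one simple, commonly used way to turn historical count data into a Bayesian-usable form is a conditional probability table with Laplace smoothing, which also tolerates the incomplete data the abstract mentions. The record layout and field names below are made-up placeholders.

```python
# Illustrative sketch only: converting historical count records into a
# conditional probability table (CPT) with Laplace smoothing, so sparse
# or incomplete data still yield usable probabilities. The paper's
# actual conversion method is not given in the abstract.
from collections import Counter
from itertools import product

def build_cpt(records, parent, child, alpha=1.0):
    """Estimate P(child | parent) from a list of record dicts."""
    pairs = Counter((r[parent], r[child]) for r in records)
    parents = {r[parent] for r in records}
    children = {r[child] for r in records}
    cpt = {}
    for p, c in product(parents, children):
        num = pairs[(p, c)] + alpha                       # smoothed count
        den = sum(pairs[(p, k)] for k in children) + alpha * len(children)
        cpt[(p, c)] = num / den
    return cpt

records = [  # hypothetical railway maintenance logs
    {"track_age": "old", "delay": "yes"},
    {"track_age": "old", "delay": "no"},
    {"track_age": "new", "delay": "no"},
]
print(build_cpt(records, "track_age", "delay"))
```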

4.
Today's creative users enjoy creating digital items and sharing their works with other people on the Web. Most users who create digital content want to make secured packages of their works and distribute them with valid licenses attached. Current digital rights management (DRM) systems, however, do not provide functionality that supports this requirement, because they treat creative users as mere consumers. To make user-centric DRM functionality possible, license management must become intelligent enough to let users create appropriate licenses for the secure distribution of their works. In this paper, we define a semantic-based rights expression and management model for user-generated content. Each item of user-created content can have one or more licenses of different types: reproduction, distribution, and usage. Based on our semantic license model and big-data analytics, we can support a new business model in which users can sell and buy their created digital items in a secure environment.
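Assuming only what the abstract states (content may carry one or more licenses of type reproduction, distribution, or usage), a minimal sketch of such a license record could look like this; the field names and the permission check are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of a license record for user-generated content, based
# only on the license types named in the abstract. Field names and the
# validation rule are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class LicenseType(Enum):
    REPRODUCTION = "reproduction"
    DISTRIBUTION = "distribution"
    USAGE = "usage"

@dataclass
class License:
    license_type: LicenseType
    licensor: str            # the creating user
    constraints: dict = field(default_factory=dict)  # e.g. {"copies": 3}

@dataclass
class UserContent:
    content_id: str
    licenses: list = field(default_factory=list)

    def permits(self, license_type: LicenseType) -> bool:
        # An action is valid only if a matching license is attached.
        return any(l.license_type is license_type for l in self.licenses)

item = UserContent("track-042")
item.licenses.append(License(LicenseType.USAGE, licensor="alice"))
print(item.permits(LicenseType.DISTRIBUTION))  # False: cannot distribute
```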

5.
The managerial and organizational practices required by an increasingly dynamic and competitive manufacturing, business, and industrial environment include the formation of “virtual enterprises.” A major concern in the management of virtual enterprises is the integration and coordination of business processes contributed by partner enterprises. The traditional methods of process modeling currently used for the design of business processes do not fully support the needs of the virtual enterprise: its design imposes requirements that make it more complex than conventional intraorganizational business process design. This paper first describes an architecture that assists in the design of the virtual enterprise. It then discusses business process reengineering (BPR) as a methodology for modeling and designing virtual organizations. While BPR offers many useful tools, the approach itself and the modeling tools commonly used for redesign have fundamental shortcomings when dealing with the virtual enterprise. However, several innovative modeling approaches show promise for this problem. The paper discusses some of these approaches, such as object-oriented modeling of business processes, agent modeling of organizational players, and the use of ontological modeling to capture and manipulate knowledge about the players and processes. The paper concludes with a conceptual modeling methodology that combines these approaches under the enterprise architecture for the design of virtual enterprises.

6.
Systems theory has long been used in psychology, biology, and sociology. This paper applies newer methods of control-systems modeling for assessing system stability in health and disease. Control systems can be characterized as open or closed systems with feedback loops. Feedback produces oscillatory activity, and the complexity of naturally occurring oscillatory patterns reflects the multiplicity of feedback mechanisms: many mechanisms operate simultaneously to control the system. Unstable systems, often associated with poor health, are characterized by an absence of oscillation, by random noise, or by a very simple pattern of oscillation. This modeling approach can be applied to a diverse range of phenomena, including cardiovascular and brain activity, mood and thermal regulation, and social-system stability. External stressors such as disease, psychological stress, injury, or interpersonal conflict may perturb a system, yet simultaneously stimulate oscillatory processes and exercise control mechanisms. Resonance can occur in systems with negative feedback loops, causing high-amplitude oscillations at a single frequency. Resonance effects can be used to strengthen modulatory oscillations, but may obscure other information and control mechanisms and weaken system stability. Positive as well as negative feedback loops are important for system function and stability. Examples are presented of oscillatory processes in heart rate variability and in the regulation of autonomic, thermal, pancreatic, and central nervous system processes, as well as in social and organizational systems such as marriages and business organizations. Resonance in negative feedback loops can help stimulate oscillations and exercise control reflexes, but it can also deprive the system of important information. Empirical hypotheses derived from this approach are presented, including the hypothesis that moderate stress may enhance health and functioning.
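The claim that feedback produces oscillation can be seen in a toy simulation, not taken from the paper: a delayed negative feedback loop oscillates, while the same system with the loop removed simply decays to a flat line, the "absence of oscillation" the abstract associates with unstable systems. The gain, delay, and decay constants below are arbitrary illustrative choices.

```python
# Toy illustration (not from the paper): delayed negative feedback
# yields sustained oscillation; removing the feedback yields monotone
# decay. Constants are arbitrary.
from collections import deque

def simulate(gain: float, delay: int, steps: int = 60):
    history = deque([0.0] * delay, maxlen=delay)  # delayed-state buffer
    x, out = 1.0, []
    for _ in range(steps):
        feedback = -gain * history[0]   # negative feedback on old state
        x = 0.9 * x + feedback          # slow decay plus correction
        history.append(x)
        out.append(round(x, 3))
    return out

print(simulate(gain=0.4, delay=5)[:20])  # sign flips: sustained oscillation
print(simulate(gain=0.0, delay=5)[:20])  # no feedback: monotone decay
```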

7.
Systems biology is based on computational modelling and simulation of large networks of interacting components. Models may be intended to capture processes, mechanisms, components and interactions at different levels of fidelity. Input data are often large and geographically dispersed, and may require the computation to be moved to the data, not vice versa. In addition, complex system-level problems require collaboration across institutions and disciplines. Grid computing can offer robust, scalable solutions for distributed data, compute and expertise. We illustrate some of the range of computational and data requirements in systems biology with three case studies: one requiring large computation but small data (orthologue mapping in comparative genomics), a second involving complex terabyte data (the Visible Cell project) and a third that is both computationally and data-intensive (simulations at multiple temporal and spatial scales). Authentication, authorisation and audit systems do not currently scale well and may present bottlenecks for distributed collaboration, particularly where outcomes may be commercialised. Challenges remain in providing lightweight standards to facilitate the penetration of robust, scalable grid-type computing into diverse user communities to meet the evolving demands of systems biology.

8.
Many-task computing aims to bridge the gap between two computing paradigms: high-throughput computing and high-performance computing. Many-task computing denotes high-performance computations comprising multiple distinct activities coupled via file-system operations. The aggregate number of tasks, quantity of computing, and volume of data may be extremely large. The traditional techniques found in the scientific community's production systems to support many-task computing do not scale to today's largest systems, owing to limitations in local resource manager scalability and granularity, inefficient utilization of the raw hardware, long wait-queue times, and shared/parallel file-system contention and scalability. To address these limitations, we adopted a "top-down" approach to building a middleware called Falkon to support the most demanding many-task computing applications at the largest scales. Falkon (Fast and Light-weight tasK executiON framework) integrates (1) multi-level scheduling to enable dynamic resource provisioning and minimize wait-queue times, (2) a streamlined task dispatcher able to achieve orders-of-magnitude higher task dispatch rates than conventional schedulers, and (3) data diffusion, which performs data caching and uses a data-aware scheduler to co-locate computational and storage resources. Micro-benchmarks have shown Falkon to achieve throughputs of over 15,000 tasks/s, scale to hundreds of thousands of processors and millions of queued tasks, and execute billions of tasks per day. Data diffusion has also been shown to improve application scalability and performance, achieving hundreds of Gb/s I/O rates on modest-sized clusters, with Tb/s I/O rates on the horizon. Falkon has shown orders-of-magnitude improvements in performance and scalability over traditional approaches to resource management across many diverse workloads and applications, at scales of billions of tasks on hundreds of thousands of processors across clusters, specialized systems, grids, and supercomputers. Falkon's performance and scalability have enabled many-task computing applications to operate at scales previously believed impossible, with high efficiency.
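The core idea behind a data-aware scheduler can be shown in a few lines: prefer a worker whose cache already holds a task's input, and fall back to load balancing otherwise. This is a toy model of the concept, not Falkon's actual scheduler, and all names in it are assumptions.

```python
# Illustrative sketch of data-aware dispatch: prefer a worker whose
# cache already holds the task's input file; otherwise pick the
# shortest queue. A toy model of the concept, not Falkon's real API.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    cache: set = field(default_factory=set)    # files already diffused here
    queue: list = field(default_factory=list)

def dispatch(task_id: str, input_file: str, workers: list) -> Worker:
    with_data = [w for w in workers if input_file in w.cache]
    pool = with_data or workers                 # data locality first
    target = min(pool, key=lambda w: len(w.queue))
    target.queue.append(task_id)
    target.cache.add(input_file)                # the file diffuses with the task
    return target

workers = [Worker("w1", cache={"a.dat"}), Worker("w2")]
print(dispatch("t1", "a.dat", workers).name)    # w1: cache hit wins
print(dispatch("t2", "b.dat", workers).name)    # w2: shortest queue
```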

9.
With the rapid development of uncertain artificial intelligence and the arrival of the big-data era, conventional clustering analysis and granular computing fail to satisfy the requirements of intelligent information processing in this new setting. There is an essential relationship between granular computing and clustering analysis, so some researchers have tried to combine the two. Using the idea of granularity, researchers have expanded clustering analysis and searched for the best clustering results with the help of the basic theories and methods of granular computing. The granularity clustering methods proposed and studied so far have attracted more and more attention. This paper first summarizes the background of granularity clustering and the intrinsic connection between granular computing and clustering analysis, then reviews the research status and the various methods of granularity clustering. Finally, we analyze existing problems and propose directions for further research.

10.
H. D. Frinking, Grana, 2013, 52(2): 481–485
In many, mostly temperate, regions of the world, crops are cultivated in completely or partly closed environments. Problems concerning the dissemination of fungal spores or bacterial cells in these systems are comparable to those in open systems, but there are many supplementary problems which have hardly been investigated so far.

Now that the application of chemicals in agriculture is strongly debated worldwide, it is of the utmost importance to know more about the aerobiological principles of spore dissemination in closed, more or less conditioned environments, in order to create better management programs for plant diseases on crops cultivated under such conditions.

In this paper, problems concerning the dissemination phase of the fungal infection cycle as it develops in “closed” spaces are discussed.

11.
Oestrus detection remains a problem in the dairy cattle industry. Automatic detection systems have therefore been developed to detect specific behavioural changes at oestrus. Vocal behaviour has not been considered in such automatic oestrus detection systems in cattle, though the vocalisation rate is known to increase during oestrus. The main challenge in using vocalisation to detect oestrus is correctly identifying the calling individual when animals are moving freely in large groups, as oestrus needs to be detected at an individual level. We therefore aimed to automate vocalisation recording and caller identification in group-housed dairy cows. This paper first presents the details of such a system and then presents the results of a pilot study validating its functionality, in which the automatic detection of calls from individual heifers was compared to video-based assessment of these calls by a trained human observer, a technique that has, until now, been considered the ‘gold standard’. We developed a collar-based cattle call monitor (CCM) comprising a structure-borne sound microphone, an airborne sound microphone, and a recording unit, together with a postprocessing algorithm that identifies the caller by matching the information from both microphones. Five group-housed heifers, each in the perioestrus or oestrus period, were equipped with a CCM prototype for 5 days. The recorded audio data were subsequently analysed and compared with audiovisual recordings. Overall, 1404 vocalisations from the focus heifers and 721 vocalisations from group mates were obtained. Vocalisations during collar changes or malfunctions of the CCM were omitted from the evaluation. The results showed that the CCM had a sensitivity of 87% and a specificity of 94%. The negative and positive predictive values were 80% and 96%, respectively. These results show that the detection of individual vocalisations and the correct identification of callers are possible, even in freely moving group-housed cattle. The results are promising for the future use of vocalisation in automatic oestrus detection systems.
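The four reported metrics follow from a standard confusion matrix over detected versus true calls. The worked example below shows the definitions; the counts are hypothetical placeholders, since the abstract reports only the resulting percentages, not the underlying matrix.

```python
# Worked definitions of the reported metrics. The confusion-matrix
# counts are hypothetical placeholders; the paper reports only the
# percentages (sensitivity 87%, specificity 94%, PPV 96%, NPV 80%).
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # detected calls among true calls
        "specificity": tn / (tn + fp),  # rejected non-calls among non-calls
        "ppv": tp / (tp + fp),          # detections that were real calls
        "npv": tn / (tn + fn),          # rejections that were real non-calls
    }

print(detection_metrics(tp=870, fp=36, tn=564, fn=130))
```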

12.
To address the need for more holistic approaches to ecological management and restoration, we examine ecosystem interventions through the lens of systems thinking and in reference to systems archetypes, as developed in relation to organizational management in the business world. Systems thinking is a holistic approach to analysis that focuses on how a system's constituent parts interrelate and how systems work over time and within the context of larger systems. Systems archetypes represent patterns of behavior that have been observed repeatedly. These archetypes help relate commonly observed responses to environmental problems with their effect on important feedback processes, to better anticipate connections between actions and results. They highlight situations where perceived solutions actually result in worse or unintended consequences, and where changing goals may be either appropriate or inappropriate. The archetypes can be applied to practical examples and can provide guidance for making appropriate intervention decisions in similar circumstances. Their use requires stepping back from immediately obvious management decisions and taking a more systemic view of the situation. A catalog of archetypes that describe common patterns of systems behavior may inform management by helping to diagnose system dynamics earlier and to identify interactions among them.

13.
A cluster consists of a group of computers, each a node, that act as a single system to provide users with computing resources. With the rapid development of computer technology, cluster computing, with its high performance-cost ratio, has been widely applied in distributed parallel computing. For the large-scale data of a group enterprise, a heterogeneous data integration model was built in a cluster environment based on cluster computing, XML technology, and ontology theory. This model provides users with unified and transparent access interfaces. Based on cluster computing, the work solves the heterogeneous data integration problem by means of ontology and XML technology, and good application results have been achieved compared with traditional data integration models. It was further shown that the model improves the computing capacity of the system while retaining a high performance-cost ratio. The model is thus expected to support decision making by enterprise managers.
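The integration idea can be illustrated in miniature: heterogeneous XML from different subsidiaries is mapped onto one unified record through an ontology-style term table. The tag names and mapping below are assumptions for illustration; the paper's actual ontology and schemas are not given in the abstract.

```python
# Illustrative sketch of ontology-mediated XML integration: two
# differently tagged records map to the same unified view. Tags and
# mappings are made-up assumptions.
import xml.etree.ElementTree as ET

ONTOLOGY = {  # source tag -> canonical concept
    "custName": "customer", "client": "customer",
    "amt": "amount", "total": "amount",
}

def to_unified(xml_text: str) -> dict:
    record = {}
    for el in ET.fromstring(xml_text):
        concept = ONTOLOGY.get(el.tag)
        if concept:                        # drop terms outside the ontology
            record[concept] = el.text
    return record

a = "<order><custName>ACME</custName><amt>42</amt></order>"
b = "<sale><client>ACME</client><total>42</total></sale>"
print(to_unified(a) == to_unified(b))      # True: one transparent view
```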

14.
Although business firms have improved their environmental performance, a variety of forces are pushing businesses toward adopting environmental management throughout the entire life cycle of their products and processes. In this article we discuss the information systems elements of an environmental management approach we call "life-cycle-oriented environmental management" (LCOEM). This approach requires the firm to manage the effects of its processes from the creation of inputs to the final disposal of outputs, that is, from cradle to grave. We present a framework of the classes of information systems needed, describe their use in an LCOEM setting, and define their interrelationships. We conclude with a discussion of the implications of LCOEM information systems.

15.
Cloud computing is an emerging technology that is being widely considered for resource utilization in various research areas. One of its main advantages is flexibility in computing resource allocation: many computing cycles can be made ready in a very short time and smoothly reallocated between tasks. Because of this, many private companies are entering the new business of reselling their idle computing cycles, and research institutes have started building their own cloud systems for their various research purposes. In this paper, we introduce a framework for a virtual cluster system called vcluster, which is capable of utilizing computing resources from heterogeneous clouds and provides a uniform view of computing resource management. vcluster is an IaaS (Infrastructure as a Service) based cloud resource management system. It distributes batch jobs to multiple clouds depending on the status of the queue and the system pool. The main design philosophy behind vcluster is to be cloud- and batch-system-agnostic, which is achieved through plugins; this feature mitigates the complexity of integrating heterogeneous clouds. In the pilot system development, we use FermiCloud and Amazon EC2, a private and a public cloud system, respectively. We also discuss the features and functionalities that must be considered in virtual cluster systems.
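A plugin architecture of the kind described usually means a common interface behind which each cloud backend hides its own API. The sketch below shows that shape; the class and method names are illustrative assumptions, not vcluster's actual interface.

```python
# Sketch of the plugin idea: one interface lets the scheduler treat
# FermiCloud, EC2, or any other backend uniformly. Names here are
# illustrative, not vcluster's real API.
from abc import ABC, abstractmethod

class CloudPlugin(ABC):
    @abstractmethod
    def idle_capacity(self) -> int: ...
    @abstractmethod
    def launch_node(self) -> str: ...

class Ec2Plugin(CloudPlugin):
    def idle_capacity(self) -> int:
        return 4                      # stub: would query the EC2 API
    def launch_node(self) -> str:
        return "ec2-node-1"           # stub: would start an instance

def provision(plugins: list) -> str:
    # Pick whichever cloud currently has the most idle capacity,
    # mirroring dispatch "depending on the status of queue and pool".
    best = max(plugins, key=lambda p: p.idle_capacity())
    return best.launch_node()

print(provision([Ec2Plugin()]))
```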

16.
The cloud computing environment came about in order to manage and use effectively the enormous amounts of data that have become available with the development of the Internet. Cloud computing services are widely used not only to manage users' IT resources but also to use enterprise IT resources effectively. Various security threats have arisen in the use of cloud computing, and response plans are much needed, since such threats eventually escalate into threats to enterprise information. This research proposes plans to strengthen the security of enterprise information through cloud security. These cloud computing security measures must be supported by governmental policies; publications and guidelines on information protection will raise awareness among users and service providers. A response system must also be created to constantly monitor and promptly react to any security incident. Therefore, technical countermeasures and governmental policy must be pursued at the same time. Cloud computing services are expanding more than ever, so active research on cloud computing security is to be expected.

17.
Digital, information-based, and intelligent management of ecosystems, comprehensively improving the ecological and environmental quality of the Guangdong-Hong Kong-Macao Greater Bay Area, is an inevitable trend in building a world-class bay area. Aiming at intelligent management of urban-agglomeration ecosystems, this study systematically integrates all kinds of ecological and environmental data resources to form a data and decision support system for ecosystem management, and on this basis constructs an intelligent ecological management platform. Taking the management logic of ecosystem elements and functions as the core, an ecosystem management workflow is constructed: (1) precisely analyse ecological and environmental problems, determining the scale and scope at which they occur and classifying and characterising them; (2) establish ecological management goals and formulate suitable management strategies; (3) weigh trade-offs among ecosystem services according to current status and baselines, and improve ecosystem quality through ecological restoration projects; (4) monitor ecosystem change through an environmental Internet of Things, and adjust and improve the ecosystem management plan in time. Given the multi-scale, multi-level, and complex character of urban-agglomeration ecosystems, management decisions should fully balance management goals against ecosystem services, taking the benefits of all types of ecosystem services into account; demonstration ecological projects should be used to verify the feasibility, applicability, and synergy of management schemes; ecological management goals should be continuously optimised and adjusted under a philosophy of continual improvement; and the feedback and experience gained during implementation should be continuously accumulated, distilled, and summarised. To meet the demand for modernising ecological management systems and capabilities, information technologies such as big data, geographic information systems (GIS), and the Web are integrated to build an intelligent ecological management platform for the Greater Bay Area, enabling information sharing among multiple stakeholders and opening the "black box" of management decision making, thereby providing a reliable and feasible scheme for modernising ecological and environmental management. The ecosystem management workflow and strategies constructed here embed knowledge fully into the decision-making process, and can serve the construction of ecological civilisation in the Guangdong-Hong Kong-Macao Greater Bay Area and promote sustainable, high-quality development.

18.
Michael Conrad unveiled many of the fundamental characteristics of biological computing. These characteristics underlie the behavioral variability and adaptability of biological systems, and include the ability of biological information processing to exploit quantum features at the atomic level, the powerful 3-D pattern-recognition capabilities of macromolecules, computational efficiency, and the ability to support biological function. Among many other things, Conrad formalized and explicated the underlying principles of biological adaptability, characterized the differences between biological and digital computing in terms of a fundamental tradeoff between adaptability and programmability of information processing, and discussed the challenges of interfacing digital computers and human society. This paper is about the encounter of biological and digital computing. The focus is on the nature of the biological information processing infrastructure of organizations and how it can be extended effectively with digital computing. To achieve this goal, however, we need to embed digital computing properly into the information processing aspects of human and social behavior and intelligence, which are fundamentally biological. Conrad's legacy provides a firm, strong, and inspiring foundation for this endeavor.

19.
Goal, Scope and Background. Performing a life cycle assessment (LCA) has been a rather resource- and time-consuming business. The method of data collection may be problematic, and the quality of the final results can be influenced by the reliability of the data. It is therefore helpful to use an on-line data gathering system to save time and to improve the reliability of the collected raw data.

Main Features. We have developed an LCA software package for a steel company. The software consists of two major parts: an LCA tool kit and an interface program. The LCA tool kit is a user interface for handling an LCA database server; it has powerful functions for systematic analysis, not only of the amounts of energy and raw materials but also of the volume of pollutants generated by each component. The interface program links the data handling system with an on-line data gathering system, and is connected to three enterprise database systems: enterprise resource planning (ERP), an environmental management system (EMS), and an energy server system (ESS). In this study, we compared three different ways of performing LCA: two on-line methods and one manual method.

Results and Discussion. Among the three methods, the best was on-line LCA linked with ERP, EMS, and ESS. Case studies in steel works have shown that this method is superior to manual data gathering in terms of time and cost (man-month) savings, data reliability, and other applications. Results of the life cycle inventory and life cycle impact assessment for steel products showed monthly fluctuations due to the fuel usage ratio, which had not been detected before with manual data gathering.

Conclusions. An LCA can be performed quickly if one employs the on-line data gathering system we have developed. The system consists of an LCA software package, including the interface program and LCA tool kit, together with the enterprise database systems. Case studies of LCA with the on-line system have shown performance superior to that of manual data entry.

Recommendations and Perspective. This system enables an enterprise to pursue Type III environmental declarations and to benchmark against other companies or sectors within a short time. Combining this tool with an environmental performance evaluation or accounting system can also enable more progressive environmental management.
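The kind of calculation in which the reported fuel-ratio fluctuations become visible is a simple monthly aggregation of source records into inventory totals. The sketch below illustrates that step only; the emission factors and record layout are made-up assumptions, not the paper's data.

```python
# Illustrative sketch only: aggregating on-line source records into a
# monthly life cycle inventory. Emission factors and record layout are
# hypothetical; the paper's actual data model is not given.
from collections import defaultdict

EMISSION_FACTOR = {"coal": 2.4, "gas": 1.9}   # hypothetical kg CO2 per kg fuel

def monthly_inventory(records):
    """records: iterable of (month, fuel, kg_used) pulled from ERP/EMS/ESS."""
    totals = defaultdict(float)
    for month, fuel, kg in records:
        totals[month] += kg * EMISSION_FACTOR[fuel]
    return dict(totals)

records = [("2024-01", "coal", 100.0), ("2024-01", "gas", 50.0),
           ("2024-02", "coal", 140.0), ("2024-02", "gas", 10.0)]
print(monthly_inventory(records))   # month-to-month shift from the fuel mix
```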

20.
This research discusses the selection of management strategies across enterprise life-cycle periods using fuzzy proximity from fuzzy set theory, based on the current development situation of enterprises in China. First, the paper measures the degree of proximity between eight kinds of management innovation strategies and the different enterprise life-cycle periods (the pioneering, growth, maturity, and recession periods) using a fuzzy proximity vector. The eight strategies are management idea innovation, management organization innovation, management method innovation, management culture innovation, management institution innovation, market innovation, business model innovation, and performance management innovation. Second, the paper analyzes management innovation strategies in the different enterprise life-cycle periods and verifies the results using the developmental history of an engine manufacturing enterprise since 1997. Several conclusions can be drawn: (1) the frame model of management innovation strategies over the enterprise life cycle is both reasonable and convincing as a reference for strategy selection as enterprises develop; (2) the fuzzy proximity method can be applied to research in which a management innovation strategy for a particular life-cycle period must be selected. This research therefore extends the application scope of the fuzzy proximity method.
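The abstract does not reproduce the proximity formula used; a common choice in fuzzy set theory is the Hamming-distance-based nearness degree between membership vectors, sketched below with made-up membership values purely for illustration.

```python
# One common definition of fuzzy proximity (Hamming nearness degree);
# whether the paper uses this exact form is an assumption. Membership
# vectors below are hypothetical.
def proximity(a, b):
    # sigma(A, B) = 1 - (1/n) * sum(|mu_A(i) - mu_B(i)|)
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

growth_period = [0.2, 0.9, 0.7, 0.3]        # hypothetical memberships
strategies = {
    "market innovation":      [0.3, 0.8, 0.6, 0.2],
    "performance management": [0.1, 0.3, 0.9, 0.8],
}
best = max(strategies, key=lambda s: proximity(strategies[s], growth_period))
print(best)  # the strategy nearest to the growth-period profile
```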
