Similar Documents
Found 20 similar documents (search took 31 ms)
1.
The goal of LCA is to identify the environmental impacts resulting from a product, process, or activity. While LCA is useful for evaluating environmental attributes, it stops short of providing information that business managers routinely utilize for decision-making — i.e., dollars. Thus, decisions regarding the processes used for manufacturing products and the materials comprising those products can be enhanced by weaving cost and environmental information into the decision-making process. Various approaches have been used during the past decade to supplement environmental information with cost information. One of these tools is environmental accounting, the identification, analysis, reporting, and use of environmental information, including environmental cost data. Environmental cost accounting provides information necessary for identifying the true costs of products and processes and for evaluating opportunities to minimize those costs. As demonstrated through two case studies, many companies are incorporating environmental cost information into their accounting systems to prioritize investments in new technologies and products.

2.
Children now have considerable exposure to new information technologies (IT) such as desktop computers. A reported association between computer use and discomfort in children has prompted concerns about the musculoskeletal stresses associated with computer use. There were no detailed data on children reading and writing, nor any evidence on the variability of postures and muscle activity whilst children use IT.

Twenty-four children (10–12 years old; 12 male) performed a reading and writing task using new IT (computer/keyboard/mouse with high display and mid-height display) and old IT (book/paper/pen). Spinal and upper limb 3D posture and muscle activity were recorded and estimates of mean and variation calculated.

The mean postures for children reading and writing with computers were more neutral than when they read and wrote with old IT. Similarly, mean muscle activity levels were lower during computer use than old IT use. However, new IT use also resulted in less variable, more monotonous postures and muscle activities. Moderate differences in computer display height had little effect on posture and muscle activity variation.

Variation in musculoskeletal stresses is considered an important component of the risk of musculoskeletal disorders. Children should therefore be encouraged to ensure task variety when using new IT to offset the greater posture and muscle activity monotony.

3.
Realizing personalized medicine requires integrating diverse data types with bioinformatics. The most vital data are individuals' genomic profiles, currently generated by advanced next-generation sequencing (NGS) technologies. These technologies continue to advance in terms of both decreasing cost and increasing sequencing speed, with a concomitant increase in the amount and complexity of the data. The prodigious data, together with the requisite computational pipelines for analysis and interpretation, strain both IT infrastructure and the scientists conducting the work. Bioinformatics is increasingly becoming the rate-limiting step, with numerous challenges to overcome in translating NGS data for personalized medicine. We review some key bioinformatics tasks, issues, and challenges in the contexts of IT requirements, data quality, analysis tools and pipelines, and validation of biomarkers.

4.
As business imperatives change and new high-capability information technologies (IT) appear, organizations recognize the need to remain at the forefront of change by reengineering their business processes and implementing responsive, enabling IT infrastructures. However, experience in this context indicates a lack of comprehension of the essential elements, and their mutual relationships, that can contribute to the success of business-process change-implementation efforts. This article proposes a framework for managing IT for effective business-process redesign (BPR) implementation. After establishing BPR principles and components, and the relationship of BPR to several organizational and technological approaches, it presents the role and benefits of IT in BPR. The article then discusses the core elements of the framework in detail. Its theme is that an IT infrastructure covering BPR strategy development, IT strategic alignment, IT infrastructure development, IT sourcing, legacy systems reengineering, IS integration, and IS function competence is essential for effective implementation.

5.
6.
Climate change is impacting species and ecosystems globally. Many existing templates to identify the most important areas to conserve terrestrial biodiversity at the global scale neglect the future impacts of climate change. Unstable climatic conditions are predicted to undermine conservation investments in the future. This paper presents an approach to developing a resource allocation algorithm for conservation investment that incorporates the ecological stability of ecoregions under climate change. We discover that allocating funds in this way changes the optimal schedule of global investments both spatially and temporally. This allocation reduces the biodiversity loss of terrestrial endemic species from protected areas due to climate change by 22% for the period of 2002-2052, when compared to allocations that do not consider climate change. To maximize the resilience of global biodiversity to climate change we recommend that funding be increased in ecoregions located in the tropics and/or mid-elevation habitats, where climatic conditions are predicted to remain relatively stable. Accounting for the ecological stability of ecoregions provides a realistic approach to incorporating climate change into global conservation planning, with potential to save more species from extinction in the long term.
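The allocation idea can be illustrated with a minimal sketch (not the authors' actual algorithm, whose details are not given in the abstract): each ecoregion's endemic-species benefit is discounted by its predicted climatic stability, and a budget is spent greedily on the regions with the best stability-weighted benefit per unit cost. All names and numbers below are hypothetical.

```python
# Hedged sketch: greedy conservation budget allocation in which each
# ecoregion's endemic-species count is weighted by predicted climatic
# stability (0..1). Ecoregion names, counts, and costs are hypothetical.

def allocate(ecoregions, budget):
    """Fund ecoregions in order of stability-weighted benefit per dollar."""
    ranked = sorted(ecoregions,
                    key=lambda e: e["endemics"] * e["stability"] / e["cost"],
                    reverse=True)
    funded = []
    for e in ranked:
        if e["cost"] <= budget:
            budget -= e["cost"]
            funded.append(e["name"])
    return funded

ecoregions = [
    {"name": "tropical-montane", "endemics": 120, "stability": 0.9, "cost": 40},
    {"name": "boreal-lowland",   "endemics": 150, "stability": 0.3, "cost": 40},
    {"name": "mid-elevation",    "endemics": 80,  "stability": 0.8, "cost": 30},
]

# With a hypothetical budget of 75, the climatically stable regions win
# even though the unstable one has the most endemic species.
print(allocate(ecoregions, 75))
```

Note how the stability weight reverses the ranking that a pure species-count criterion would give, mirroring the paper's finding that climate-aware allocation changes the optimal investment schedule.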

7.

Background

Evidence-based quality improvement models for depression have not been fully implemented in routine primary care settings. To date, few studies have examined the organizational factors associated with depression management in real-world primary care practice. To successfully implement quality improvement models for depression, there must be a better understanding of the relevant organizational structure and processes of the primary care setting. The objective of this study is to describe these organizational features of routine primary care practice, and the organization of depression care, using survey questions derived from an evidence-based framework.

Methods

We used this framework to implement a survey of 27 practices comprising 49 unique offices within a large primary care practice network in western Pennsylvania. Survey questions addressed practice structure (e.g., human resources, leadership, information technology (IT) infrastructure, and external incentives) and process features (e.g., staff performance, degree of integrated depression care, and IT performance).

Results

Our survey demonstrated substantial variation across the practice network in organizational factors pertinent to the implementation of evidence-based depression management. Notably, quality improvement capability and IT infrastructure were widespread, but their specific application to depression care differed between practices, as did the coordination and communication tasks surrounding depression treatment.

Conclusions

The primary care practices in the network that we surveyed are at differing stages in their organization and implementation of evidence-based depression management. Practical surveys such as this may serve to better direct implementation of these quality improvement strategies for depression by improving understanding of the organizational barriers and facilitators that exist within both practices and practice networks. In addition, survey information can inform efforts of individual primary care practices in customizing intervention strategies to improve depression management.

8.

Background

Primary care staffing decisions are often made unsystematically, potentially leading to increased costs, dissatisfaction, turnover, and reduced quality of care. This article aims to (1) catalogue the domain of primary care tasks, (2) explore the complexity associated with these tasks, and (3) examine how tasks performed by different job titles differ in function and complexity, using Functional Job Analysis to develop a new tool for making evidence-based staffing decisions.

Methods

Seventy-seven primary care personnel from six US Department of Veterans Affairs (VA) Medical Centers, representing six job titles, participated in two-day focus groups to generate 243 unique task statements describing the content of VA primary care. Certified job analysts rated tasks on ten dimensions representing task complexity, skills, autonomy, and error consequence. Two hundred and twenty-four primary care personnel from the same clinics then completed a survey indicating whether they performed each task. Tasks were catalogued using an adaptation of an existing classification scheme; complexity differences were tested via analysis of variance.

Results

Objective one: Task statements were categorized into four functions: service delivery (65%), administrative duties (15%), logistic support (9%), and workforce management (11%). Objective two: Consistent with expectations, 80% of tasks received ratings at or below the mid-scale value on all ten scales. Objective three: Service delivery and workforce management tasks received higher ratings on eight of ten scales (multiple functional complexity dimensions, autonomy, human error consequence) than administrative and logistic support tasks. Similarly, tasks performed by more highly trained job titles received higher ratings on six of ten scales than tasks performed by less highly trained job titles. Contrary to expectations, the distribution of tasks across functions did not significantly vary by job title.

Conclusion

Primary care personnel are not being utilized to the extent of their training; most personnel perform many tasks that could reasonably be performed by personnel with less training. Primary care clinics should use evidence-based information to optimize job-person fit, adjusting clinic staff mix and allocation of work across staff to enhance efficiency and effectiveness.

9.
A daily diary study and two experience sampling studies were carried out to investigate curvilinearity of the within-person relationship between state neuroticism and task performance, as well as the moderating effects of within-person variation in momentary job demands (i.e., work pressure and task complexity). In the first study, the state neuroticism–task performance relationship under high work pressure was best described by an exponentially decreasing curve, whereas an inverted U-shaped curve was found for tasks low in work pressure; in the second study, a similar trend was visible for task complexity. In the final study, the state neuroticism–momentary task performance relationship was linear, and this relationship was moderated by momentary task complexity. Together, the three studies show that it is important to take the moderating effects of momentary job demands into account, because within-person variation in job demands affects how state neuroticism relates to momentary task performance. Specifically, we found that experiencing low levels of state neuroticism may be most beneficial in highly demanding tasks, whereas more moderate levels of state neuroticism are optimal under low momentary job demands.

10.
Protected area managers need reliable information to detect spatial and temporal trends of the species they intend to protect. This information is crucial for population monitoring, understanding ecological processes, and evaluating the effectiveness of management and conservation policies. In under-funded protected areas, managers often prioritize ungulates and carnivores for monitoring, given their socio-economic value and sensitivity to human disturbance. Aircraft-based surveys are typically utilized for monitoring ungulates because they can cover large areas regardless of the terrain, but such work is expensive and subject to bias. Recently, unmanned aerial vehicles have shown great promise for ungulate monitoring, but these technologies are not yet widely available and are subject to many of the same analytical challenges associated with traditional aircraft-based surveys. Here, we explore the use of inexpensive and robust distance sampling methods in Kafue National Park (KNP; 22,400 km²), carried out by government-employed game scouts. Ground-based surveys spanning 101 five-km transects resulted in 369 ungulate group detections from 20 species. Using generalized linear models and distance sampling, we determined the environmental and anthropogenic variables influencing ungulate species richness, density, and distribution. Species richness was positively associated with permanent water and percent cover of closed woodland vegetation. Distance to permanent water had the strongest overall effect on ungulate densities, but the magnitude and direction of this effect varied by species. This ground-based approach provided a more cost-effective, unbiased, and repeatable method than aerial surveys in KNP, and could be widely implemented by local personnel across under-funded protected areas in Africa.
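Density estimates in a survey like this typically come from conventional line-transect distance sampling. A minimal sketch follows; the paper's fitted detection model is not given in the abstract, so the half-normal scale σ and the truncation distance w below are assumed values, and only the transect effort (101 transects of 5 km) and detection count (369 groups) come from the text.

```python
import math

# Hedged sketch of a line-transect distance-sampling density estimate with
# a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)).
# sigma and the truncation distance w (both in km) are assumed, not the
# paper's fitted values.

def effective_strip_width(sigma, w, steps=10000):
    """Midpoint-rule integral of g(x) on [0, w]: the effective half-width mu."""
    dx = w / steps
    return sum(math.exp(-((i + 0.5) * dx) ** 2 / (2 * sigma ** 2)) * dx
               for i in range(steps))

def density(n_groups, total_km, sigma, w):
    """Groups per km^2: D = n / (2 * L * mu), counting both transect sides."""
    mu = effective_strip_width(sigma, w)
    return n_groups / (2 * total_km * mu)

# 369 group detections over 101 transects of 5 km (from the abstract).
D = density(369, 101 * 5, sigma=0.08, w=0.25)
print(round(D, 2))  # estimated group density, groups per square km
```

The same estimator underlies standard distance-sampling software; the point of the sketch is only the structure D = n / (2Lμ), where μ shrinks as detectability falls off with distance.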

11.
The delivery of scalable, rich multimedia applications and services on the Internet requires sophisticated technologies for transcoding, distributing, and streaming content. Cloud computing provides an infrastructure for such technologies, but specific challenges remain in the areas of task management, load balancing, and fault tolerance. To address these issues, we propose a cloud-based distributed multimedia streaming service (CloudDMSS), which is designed to run on all major cloud computing services. CloudDMSS is highly adapted to the structure and policies of Hadoop, and thus has additional capacities for transcoding, task distribution, load balancing, and content replication and distribution. To satisfy the design requirements of our service architecture, we propose four important algorithms: content replication, system recovery for Hadoop distributed multimedia streaming, cloud multimedia management, and streaming resource-based connection (SRC) for streaming job distribution. To evaluate the proposed system, we conducted several performance tests on a local testbed: transcoding, streaming job distribution using SRC, streaming service deployment, and robustness to data node and task failures. In addition, we performed three tests in an actual cloud computing environment, Cloudit 2.0: transcoding, streaming job distribution using SRC, and streaming service deployment.

12.
Video data volumes have become enormous: in a single city, thousands of cameras each collect 24–48 GB of high-definition video every day, and the total is growing rapidly. The data arrive in a variety of formats, spanning multimedia, images, and other unstructured data; the valuable information is contained in only a few frames, called key frames, of the massive video stream; and a further problem is how to improve the speed at which computers process large amounts of raw video, so as to make crime prediction and detection more effective for police and other users. In this paper, we propose a novel architecture for a next-generation public security system. A "front + back" pattern is adopted to address the redundant construction of current public security information systems: it consolidates multiple IT resources and provides a unified computing and storage environment for more complex data analysis and applications such as data mining and semantic reasoning. Under this architecture, we introduce cloud computing technologies, such as distributed storage and computing and retrieval of huge, heterogeneous data, and provide multiple optimized strategies to enhance resource utilization and task efficiency. This paper also presents a novel strategy to generate a super-resolution image via multi-stage dictionaries trained by a cascade training process. Extensive experiments on image super-resolution validate that the proposed solution achieves much better results than several state-of-the-art methods.

13.
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows® operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh®, or Linux® operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

14.
IT outsourcing allows a business to reduce the cost of IT service delivery and improve the quality of IT service by taking advantage of the service provider’s economies of scale and technical expertise. However, the successful outsourcing of IT services is hampered by a lack of guidance on how to design incentive contracts that encourage performance by the service provider, especially in the presence of information asymmetry and incentive divergence. In this article, we identify and characterize two asymmetric information factors: asymmetric effort information and asymmetric capability information. Depending on whether the service provider’s effort and capability information is symmetric or not, we consider three information scenarios and characterize the optimal incentive contract for each. We also introduce the concept of information value to quantify the adverse effects of the two asymmetric information factors. The results provide theoretical support for designing incentive contracts that mitigate the adverse effects of asymmetric information, and offer practical guidance for reducing the degree of information asymmetry.
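The asymmetric-effort setting is commonly formalized as a moral-hazard problem. As a hedged illustration, the standard linear-exponential-normal textbook model (not necessarily the article's formulation) reads:

```latex
q = e + \varepsilon, \quad \varepsilon \sim N(0,\sigma^2), \qquad
w = \alpha + \beta q, \qquad
e^{*} = \frac{\beta}{c}, \qquad
\beta^{*} = \frac{1}{1 + r c \sigma^{2}}
```

Here q is observed service output, e the provider's unobservable effort with quadratic cost ce²/2, w a linear wage contract, and r the provider's risk aversion. The optimal incentive intensity β* falls as output noise σ² or risk aversion r rises, which is one way to see why reducing information asymmetry (better effort observability, i.e., smaller σ²) improves contract efficiency.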

15.
The lack of empirical support for the positive economic impact of information technology (IT) has been called the IT productivity paradox. Even though output measurement problems have often been held responsible for the paradox, we conjecture that modeling limitations in production-economics-based studies and input measurement also might have contributed to the paucity of systematic evidence regarding the impact of IT. We take the position that output measurement is slightly less problematic in manufacturing than in the service sector and that there is sound a priori rationale to expect substantial productivity gains from IT investments in manufacturing and production management. We revisit the IT productivity paradox to highlight some potential limitations of earlier research and obtain empirical support for these conjectures. We apply a theoretical framework involving explicit modeling of a strategic business unit's (SBU) input choices to a secondary data set in the manufacturing sector. A widely cited study by Loveman (1994) with the same dataset showed that the marginal contribution of IT to productivity was negative. However, our analysis reveals a significant positive impact of IT investment on SBU output. We show that Loveman's negative results can be attributed to the deflator used for IT capital. Further, modeling issues such as a firm's choice of inputs like IT, non-IT, and labor lead to major differences in the IT productivity estimates. The question of whether firms actually achieved economic benefits from IT investments in the past decade has been raised in the literature, and our results provide evidence of sizable productivity gains by large successful corporations in the manufacturing sector during the same time period.
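The production-economics approach behind such studies can be illustrated with a one-input log-linear Cobb-Douglas regression on synthetic data. This is a sketch of the general method only, not the authors' or Loveman's actual specification, dataset, or deflators; the simulated elasticity of 0.2 is hypothetical.

```python
import math, random

# Hedged sketch: estimate the output elasticity b of IT capital from
# ln Q = a + b ln K_IT + noise, via ordinary least squares on synthetic
# data generated with a known elasticity of 0.2.

random.seed(1)
true_a, true_b = 1.0, 0.2
k = [random.uniform(1, 100) for _ in range(200)]          # IT capital
q = [math.exp(true_a + true_b * math.log(ki) + random.gauss(0, 0.05))
     for ki in k]                                          # output

x = [math.log(ki) for ki in k]
y = [math.log(qi) for qi in q]
mx, my = sum(x) / len(x), sum(y) / len(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)                        # OLS slope
a = my - b * mx                                            # OLS intercept
print(round(b, 2))  # recovered elasticity, close to the simulated 0.2
```

In the real studies the regression includes non-IT capital and labor as well, and, as the abstract notes, the choice of deflator for the IT capital series can flip the sign of the estimated contribution.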

16.
Increasing power consumption of IT infrastructures and growing electricity prices have led to the development of several energy-saving techniques in the last couple of years. Virtualization and consolidation of services is one of the key technologies in data centers to reduce overprovisioning and therefore increase energy savings. This paper shows that the energy-optimal allocation of virtualized services in a heterogeneous server infrastructure is NP-hard and can be modeled as a variant of the multidimensional vector packing problem. Furthermore, it proposes a model to predict the performance degradation of a service when it is consolidated with other services. The model allows considering the tradeoff between power consumption and service performance during service allocation. Finally, the paper presents two heuristics that approximate the energy-optimal and performance-aware resource allocation problem and shows that the allocations determined by the proposed heuristics are more energy-efficient than the widely applied maximum-density consolidation.
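The vector-packing view of consolidation can be sketched with the classic first-fit-decreasing heuristic: a simpler stand-in for the paper's own heuristics, without its performance-degradation model. Demands and capacities below are hypothetical, expressed as integer percentages of server capacity; fewer active servers means more machines can be powered down.

```python
# Hedged sketch: first-fit-decreasing packing of services, each with
# (cpu, mem) demands in percent of capacity, onto identical servers.
# This illustrates 2-dimensional vector packing, not the paper's
# performance-aware heuristics.

def consolidate(services, cpu_cap=100, mem_cap=100):
    """Return per-server (cpu, mem) loads after first-fit-decreasing."""
    servers = []  # each entry is [cpu_used, mem_used]
    # Sort by the dominant resource demand, largest first.
    for cpu, mem in sorted(services, key=lambda s: max(s), reverse=True):
        for srv in servers:
            if srv[0] + cpu <= cpu_cap and srv[1] + mem <= mem_cap:
                srv[0] += cpu
                srv[1] += mem
                break
        else:  # no existing server fits: power on a new one
            servers.append([cpu, mem])
    return servers

services = [(50, 30), (40, 60), (30, 30), (20, 50), (60, 20)]
print(len(consolidate(services)))  # number of active servers
```

A performance-aware variant would additionally reject placements whose predicted degradation violates service-level targets, trading a few extra active servers for acceptable performance.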

17.
The biomedical sciences are a rapidly changing discipline that has adapted to innovative technological advances. Despite these many advances, we face two major challenges: (a) the number of experts in the field is vastly outnumbered by the number of students, many of whom are separated geographically or temporally, and (b) the teaching methods used to instruct students and learners have not changed. Today's students have adapted to technology: they use the web as a source of information and communicate via email and chat rooms. Teaching in the biomedical sciences should adopt these new information technologies (IT), but has thus far failed to capitalize on the technological opportunity. Creating a "digital textbook" of the traditional learning material is not sufficient for dynamic processes such as cellular physiology. This paper describes innovative teaching techniques that incorporate familiar IT and high-quality interactive learning content with user-centric instructional design models. The Virtual Labs Project at Stanford University has created effective interactive online teaching modules in physiology (simPHYSIO) and delivered them over broadband networks to its undergraduate and medical students. Evaluation results for the modules are given as a measure of the success of this innovative teaching method. This learning media strategically merges IT innovations with pedagogy to produce user-driven animations of processes and engaging interactive simulations.

18.
This paper presents an autonomic system in which two managers with different responsibilities collaborate to achieve an overall objective within a cluster of server computers. The first, a node group manager, uses modeling and optimization algorithms to allocate server processes and individual requests among a set of server machines grouped into node groups, and also estimates its ability to fulfill its service-level objectives as a function of the number of server machines available in each node group. The second, a provisioning manager, consumes these estimates from one or more node group managers, and uses them to allocate machines to node groups over a longer timescale. We describe the operation of both managers and the information that flows between them, and present the results of some experiments demonstrating the effectiveness of our technique. Furthermore, we relate our architecture to a general autonomic computing architecture based on self-managing resources and patterns of inter-resource collaboration, and to emerging standards in the area of distributed manageability. We also discuss some of the issues involved in incorporating our implementation into existing products in the short term, and describe a number of further directions for this research.
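The division of labour between the two managers can be sketched as follows (a hypothetical illustration, not the described implementation): each node-group manager reports estimated service-level utility as a function of machine count, and the provisioning manager assigns machines one at a time to the group with the highest marginal utility. The utility tables below are made-up concave curves.

```python
# Hedged sketch: a provisioning manager that greedily allocates machines
# to node groups using utility estimates reported by node-group managers.
# Group names and utility values are hypothetical.

def provision(utility_tables, machines):
    """utility_tables: {group: [u(0 machines), u(1), ...]}; returns allocation."""
    alloc = {g: 0 for g in utility_tables}
    for _ in range(machines):
        def marginal(g):
            t, n = utility_tables[g], alloc[g]
            return t[n + 1] - t[n] if n + 1 < len(t) else 0.0
        best = max(alloc, key=marginal)   # highest marginal utility wins
        alloc[best] += 1
    return alloc

tables = {
    "web":   [0.0, 0.6, 0.9, 1.0, 1.0],   # diminishing returns
    "batch": [0.0, 0.3, 0.5, 0.6, 0.65],
}
print(provision(tables, 4))
```

Because each step picks the currently best marginal gain, the greedy allocation is optimal when every group's utility curve is concave, which is the usual assumption behind such service-level estimates.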

19.
The probability that protected areas will deliver their potential for maintaining or enhancing biodiversity is likely to be maximised if they are appropriately and effectively managed. As a result, governments and conservation agencies are devoting much attention to the management of protected areas. In the U.K., the demand for performance accountability has resulted in Public Service Agreements (PSA) that set out targets for government departments to deliver results in return for the investments being made. One such target for England is to ensure that all nationally important wildlife sites are in favourable condition by 2010. Here, we tested the hypothesis, of potential strategic importance, that the ecological condition of these sites is predictable from relationships with a range of physical, environmental, and demographic variables. We used binary logistic regression to investigate these relationships, using the results of English Nature’s 1997–2003 condition assessment exercise. Generally, sites in unfavourable condition tend to be larger in area and located at higher elevations, but also have higher human population densities and are more spatially isolated from units of the same habitat. However, despite the range of parameters included in our models, the extent to which the condition of any given site could be predicted was low. Our results have implications for the delivery of PSA targets, funding allocation, and the location of new protected areas.
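The modelling approach can be sketched with a plain gradient-descent binary logistic regression on synthetic data. This is illustrative only: the predictors, coefficients, and data below are invented and do not reproduce English Nature's assessment results.

```python
import math, random

# Hedged sketch of binary logistic regression: probability that a site is
# in unfavourable condition as a function of two synthetic predictors
# (log-area and human population density, both standardized).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(xs, ys, lr=0.1, epochs=500):
    """Stochastic gradient descent on the logistic log-likelihood."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
            g = p - y                       # gradient of the per-sample loss
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return b, w

random.seed(7)
xs, ys = [], []
for _ in range(300):
    log_area, pop = random.gauss(0, 1), random.gauss(0, 1)
    p_true = sigmoid(0.8 * log_area + 0.5 * pop)   # larger, busier sites worse
    xs.append([log_area, pop])
    ys.append(1 if random.random() < p_true else 0)

bias, w = fit(xs, ys)
print([round(v, 1) for v in w])  # signs recover the simulated effects
```

Even on clean synthetic data the fitted coefficients carry sampling noise, which hints at the paper's finding: significant predictors need not translate into high predictive power for any individual site.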

20.
The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号