Similar documents
A total of 20 similar documents were found (search time: 15 ms)
1.
We propose a new technique for measuring user similarity in collaborative filtering using electric circuit analysis. Electric circuit analysis is used to measure the potential differences between nodes of an electric circuit. In this paper, by applying this method to transaction networks comprising users and items, i.e., the user-item matrix, and by using the full information about the relationship structure of users from the perspective of item adoption, we overcome the limitations of one-to-one similarity calculation approaches such as the Pearson correlation, Tanimoto coefficient, and Hamming distance in collaborative filtering. We found that electric circuit analysis can be successfully incorporated into recommender systems and has the potential to significantly enhance predictability, especially when combined with user-based collaborative filtering. We also propose four types of hybrid algorithms that combine the Pearson correlation method and electric circuit analysis. One of the algorithms outperforms traditional collaborative filtering by up to 37.5%. This work opens new opportunities for interdisciplinary research between physics and computer science and for the development of new recommendation systems.
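The abstract does not spell out the circuit formulation. Below is a minimal sketch of one common way to realize the idea, assuming the user-item network is treated as a resistor network and the effective resistance between user nodes (computed from the pseudoinverse of the graph Laplacian) serves as an inverse similarity; the function and toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def effective_resistance_similarity(R):
    """Treat the user-item matrix R (users x items, binary adoption) as a
    resistor network in which every user-item edge is a unit resistor.
    The effective resistance between two user nodes is then used as an
    inverse similarity (smaller resistance -> more similar users)."""
    n_users, n_items = R.shape
    n = n_users + n_items
    # Adjacency of the bipartite graph (user nodes first, then item nodes).
    A = np.zeros((n, n))
    A[:n_users, n_users:] = R
    A[n_users:, :n_users] = R.T
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    L_pinv = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse
    # Effective resistance: r_ij = L+_ii + L+_jj - 2 L+_ij
    d = np.diag(L_pinv)
    resistance = d[:n_users, None] + d[None, :n_users] - 2 * L_pinv[:n_users, :n_users]
    sim = 1.0 / (resistance + np.eye(n_users))  # avoid dividing by zero on the diagonal
    np.fill_diagonal(sim, 0.0)
    return sim

# Toy adoption data: 4 users, 5 items.
R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)
print(effective_resistance_similarity(R))
```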

2.
Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide application in e-commerce. One of the challenges in this approach is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increases the global diversification of the networks in the long term.
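A minimal sketch of the pipeline described above, assuming item-item distances are taken as cosine distances between item adoption profiles before the MDS embedding (the paper's exact distance construction is not given here); it uses scikit-learn's MDS.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_distances

# Binary user-item adoption matrix (users x items).
R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)

# Pairwise distances between items, measured from their user-adoption profiles.
item_dist = cosine_distances(R.T)

# Embed the items in a low-dimensional space; the embedding smooths noise in
# the raw distances while preserving the essential structure.
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(item_dist)

# Item-item similarity from the embedded coordinates (closer -> more similar).
embedded_dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
item_sim = 1.0 / (1.0 + embedded_dist)

# Score unseen items for user 0 with standard item-based collaborative filtering.
user = R[0]
scores = item_sim @ user
scores[user > 0] = -np.inf          # do not re-recommend adopted items
print(np.argsort(scores)[::-1])
```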

3.
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often reported enjoying the use of their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or when requiring them of authors.

4.
The rapid expansion of the Internet brings us an overwhelming amount of online information, far more than any individual can go through. Recommender systems were therefore created to help people dig through this abundance of information. In networks composed of users and objects, recommendation algorithms based on diffusion have proven to be among the best-performing methods. Previous works considered the diffusion processes from user to object and from object to user to be equivalent. We show in this work that this is not the case, and we improve the quality of the recommendation by taking into account the asymmetric nature of this process. We apply this idea to modify state-of-the-art recommendation methods. The simulation results show that the new methods can outperform the existing ones in both recommendation accuracy and diversity. Finally, we verify that this modification also improves recommendation in a realistic case.
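A minimal sketch of two-step diffusion (mass diffusion) on the user-item bipartite network with different normalization exponents on the item-to-user and user-to-item steps, which is one simple way to make the process asymmetric; the exponents and toy data are illustrative and not the paper's exact weighting.

```python
import numpy as np

def asymmetric_diffusion(R, user, theta_item=1.0, theta_user=0.8):
    """Two-step resource diffusion for one target user on the user-item
    bipartite network.  Classic mass diffusion uses theta_item = theta_user = 1;
    letting the two exponents differ makes the item->user and user->item
    steps asymmetric (illustrative parameterization)."""
    k_item = np.maximum(R.sum(axis=0), 1)       # item degrees
    k_user = np.maximum(R.sum(axis=1), 1)       # user degrees

    f0 = R[user].astype(float)                  # initial resource on collected items
    # Step 1 (item -> user): each item shares its resource with its collectors.
    resource_on_users = R @ (f0 / k_item ** theta_item)
    # Step 2 (user -> item): each user redistributes to the items they collected.
    scores = R.T @ (resource_on_users / k_user ** theta_user)

    scores[R[user] > 0] = -np.inf               # exclude already-collected items
    return scores

R = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]])
print(np.argsort(asymmetric_diffusion(R, user=0))[::-1])
```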

5.
6.
This paper provides a methodology for comparing global land cover maps that allows differences in legend definitions between products to be taken into account. The legends of the two maps are first reconciled by creating a legend lookup table that shows how the legends map onto one another. Where there is overlap, the specific definitions for each legend class are used to calculate the degree of overlap between legend classes. In this way, one-to-many mappings are accounted for, unlike in most methods, where the legend definitions are often forced into place. Another advantage over previous map comparison methods is that application-specific requirements are captured using expert input, whereby the user rates the importance of disagreement between different legend classes based on the needs of the application. This user-defined matrix, in conjunction with the degree of overlap between legend classes, is applied on a pixel-by-pixel basis to create maps of spatial disagreement and uncertainty. The user can then highlight the areas of highest thematic uncertainty and disagreement between the different land cover maps, allowing areas that require further detailed examination to be readily identified. It would also be possible for several users to input their knowledge into the process, leading to a potentially more robust comparison of land cover products. The methodology of map comparison is illustrated using different land cover products, including Global Land Cover 2000 (GLC-2000) and the MODIS land cover data set. Two diverse applications are provided: the estimation of global forest cover and the monitoring of agricultural land. In the case of global forest cover, an example is provided for Colombia, which shows that the MODIS land cover map overestimates forest cover in comparison with the GLC-2000. The agricultural example, on the other hand, serves to illustrate that for Sudan, MODIS tends to underestimate crop areas while GLC-2000 overestimates them.
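A minimal sketch of the pixel-by-pixel step, assuming a legend-overlap matrix and a user-defined importance matrix are already available; the class codes and matrix values below are hypothetical and are not taken from GLC-2000 or MODIS.

```python
import numpy as np

# Hypothetical 3-class legends for two maps (class codes 0..2 in each map).
# overlap[i, j]   : degree of definitional overlap between class i of map A
#                   and class j of map B (1 = same definition, 0 = disjoint).
# importance[i, j]: user-supplied weight saying how much a disagreement
#                   between class i and class j matters for the application.
overlap = np.array([[1.0, 0.6, 0.0],
                    [0.6, 1.0, 0.2],
                    [0.0, 0.2, 1.0]])
importance = np.array([[0.0, 0.5, 1.0],
                       [0.5, 0.0, 0.8],
                       [1.0, 0.8, 0.0]])

# Two co-registered land cover maps on the same grid (class code per pixel).
map_a = np.array([[0, 0, 1],
                  [2, 1, 1],
                  [2, 2, 0]])
map_b = np.array([[0, 1, 1],
                  [2, 2, 1],
                  [1, 2, 0]])

# Pixel-by-pixel disagreement: weighted by how little the two legend classes
# overlap and by how important that particular confusion is to the user.
disagreement = importance[map_a, map_b] * (1.0 - overlap[map_a, map_b])
print(disagreement)
```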

7.
This study proposes a method for developing an intelligent recommendation system for automotive parts assembly. When a user selects an automotive part that he or she wants to learn about or be guided through, the proposed system uses an ontology over the parts database to display detailed information and the list of components that make up the relevant part. The study designs a task ontology based on a hierarchical taxonomy to achieve productivity enhancement, cost reduction, and improved outcomes through intelligent, personalized recommendations that depend on the worker's current situation or task context during the assembly of automotive parts. To this end, the components of an engine and their upper/lower relationships were expressed as a hierarchical taxonomy. The system presents intelligent part recommendations to users by automatically determining the recommendation order among parts using weights. The principles of the recommendation system and the method of setting the weights were tested in two scenarios.

8.
Support vector machine classification on the web
The support vector machine (SVM) learning algorithm has been widely applied in bioinformatics. We have developed a simple web interface to our implementation of the SVM algorithm, called Gist. This interface allows novice or occasional users to apply a sophisticated machine learning algorithm easily to their data. More advanced users can download the software and source code for local installation. The availability of these tools will permit more widespread application of this powerful learning algorithm in bioinformatics.
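For readers without access to such a web interface, here is a minimal scikit-learn sketch of the same kind of SVM classification workflow; this is not the Gist tool itself, and the synthetic data stand in for a real bioinformatics dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a bioinformatics dataset: 200 samples, 20 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM, roughly the kind of classifier a Gist-style interface trains.
clf = SVC(kernel='rbf', C=1.0, gamma='scale')
clf.fit(X_train, y_train)
print('test accuracy:', accuracy_score(y_test, clf.predict(X_test)))
```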

9.
The rapid growth of published cloud services on the Internet makes service selection and recommendation a challenging task for both users and service providers. In cloud environments, software services collaborate with other complementary services to provide complete solutions to end users. Service selection is performed based on the QoS requirements submitted by end users. Software providers alone cannot guarantee users' QoS requirements: these requirements must be end-to-end, representing all collaborating services in a cloud solution. In this paper, we propose a prediction model to compute end-to-end QoS values for vertically composed services built from three types of cloud services: software (SaaS), infrastructure (IaaS), and data (DaaS) services. These values can be used during the service selection and recommendation process. Our model exploits historical QoS values together with cloud service and user information to predict unknown end-to-end QoS values of composite services. The experiments demonstrate that our proposed model outperforms other prediction models in terms of prediction accuracy. We also study the impact of different parameters on the prediction results. In the experiments, we used real cloud service QoS data collected with our own QoS monitoring and collection system.
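A minimal sketch of what "end-to-end" means for a vertical SaaS/IaaS/DaaS composition, assuming a simple sequential aggregation rule (response times add, availabilities multiply); the per-layer values are hypothetical, and the paper predicts unknown values from historical data rather than aggregating known ones.

```python
# Hypothetical per-layer QoS observations for one composite (SaaS + IaaS + DaaS) service.
services = {
    'SaaS': {'response_time_ms': 120.0, 'availability': 0.999},
    'IaaS': {'response_time_ms': 35.0,  'availability': 0.9995},
    'DaaS': {'response_time_ms': 80.0,  'availability': 0.998},
}

def end_to_end_qos(layers):
    """Aggregate per-layer QoS into end-to-end values for a sequential
    (vertical) composition: response times add up, availabilities multiply.
    This common aggregation rule is used here only to illustrate what an
    end-to-end value represents."""
    total_rt = sum(s['response_time_ms'] for s in layers.values())
    total_avail = 1.0
    for s in layers.values():
        total_avail *= s['availability']
    return {'response_time_ms': total_rt, 'availability': total_avail}

print(end_to_end_qos(services))
```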

10.
Due to the exponential growth of information, recommender systems have been widely exploited to mitigate information overload. Collaborative filtering (CF) is the most successful and extensively employed recommendation approach. However, current CF methods recommend suitable items for users mainly through the user-item matrix, which records each user's individual preference for the items in a collection, so these methods suffer from problems such as data sparsity and low prediction accuracy. To address these issues, borrowing the idea of cognition degree from cognitive psychology and employing regularized matrix factorization (RMF) as the basic model, we propose a novel drifting cognition degree-based RMF collaborative filtering method named CogTime_RMF that incorporates both the user-item matrix and users' cognition degree drifting with time. Moreover, we conduct experiments on the real datasets MovieLens 1M and MovieLens 100K, and the method is compared with three similarity-based methods and three recent matrix factorization-based methods. Empirical results demonstrate that our proposal yields better recommendation accuracy than the other methods. In addition, results show that CogTime_RMF can alleviate data sparsity, particularly when few ratings are observed.
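A minimal sketch of the basic RMF model the method builds on, trained with stochastic gradient descent on observed ratings; the time-drifting cognition-degree terms that distinguish CogTime_RMF are omitted, and the hyperparameters and toy ratings are illustrative.

```python
import numpy as np

def rmf(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """Plain regularized matrix factorization trained with SGD on observed
    (user, item, rating) triples.  CogTime_RMF adds time-drifting
    cognition-degree terms on top of this basic model; they are omitted here."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy data: (user, item, rating) on a 1-5 scale.
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 2), (2, 1, 4), (2, 2, 5)]
P, Q = rmf(ratings, n_users=3, n_items=3)
print('predicted rating of item 2 for user 0:', P[0] @ Q[2])
```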

11.
OSCEs (Objective Structured Clinical Examinations) are widely used in the health professions to assess clinical skills competence. Raters use standardized binary checklists (CL) or multi-dimensional global rating scales (GRS) to score candidates performing specific tasks. This study assessed the reliability of CL and GRS scores in the assessment of veterinary students, and is the first study to demonstrate the reliability of the GRS within veterinary medical education. Twelve raters from two different schools (6 from the University of Calgary [UCVM] and 6 from the Royal (Dick) School of Veterinary Studies [R(D)SVS]) were asked to score 12 students (6 from each school). All raters assessed all students (video recordings) during 4 OSCE stations (bovine haltering, gowning and gloving, equine bandaging, and skin suturing). Raters scored students using the CL, followed by the GRS. Novice raters (6 R(D)SVS) were assessed independently of expert raters (6 UCVM). Generalizability theory (G theory), analysis of variance (ANOVA), and t-tests were used to determine the reliability of rater scores, assess any between-school differences (by student, by rater), and determine whether there were differences between CL and GRS scores. There was no significant difference in rater performance with use of the CL or the GRS. Scores from the CL were significantly higher than scores from the GRS. The reliability of the checklist scores was .42 and .76 for novice and expert raters, respectively. The reliability of the global rating scale scores was .70 and .86 for novice and expert raters, respectively. A decision study (D-study) showed that, once raters are trained using the CL, the GRS can be used to reliably score clinical skills in veterinary medicine with both novice and experienced raters.

12.
Leaders in social networks, the Delicious case
Lü L, Zhang YC, Yeung CH, Zhou T. PLoS ONE, 2011, 6(6): e21202
Finding pertinent information is not limited to search engines. Online communities can amplify the influence of a small number of power users for the benefit of all other users. Users' information foraging, in both depth and breadth, can be greatly enhanced by choosing suitable leaders. For instance, on delicious.com, users subscribe to leaders' collections, which leads to a deeper and wider reach not achievable with search engines alone. To consolidate such collective search, it is essential to utilize the leadership topology and identify influential users. Google's PageRank, a successful search algorithm on the World Wide Web, turns out to be less effective in networks of people. We thus devise an adaptive and parameter-free algorithm, LeaderRank, to quantify user influence. We show that LeaderRank outperforms PageRank in terms of ranking effectiveness, as well as robustness against manipulation and noisy data. These results suggest that leaders who are aware of their clout may reinforce the development of social networks, and thus the power of collective search.
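A minimal sketch of LeaderRank as it is usually described: add a ground node linked bidirectionally to every user, run a simple random walk to convergence, then redistribute the ground node's score evenly. The toy network and the convention that an edge points from a follower to the user being followed are illustrative.

```python
import numpy as np

def leaderrank(A, tol=1e-8, max_iter=1000):
    """LeaderRank on a directed network with adjacency matrix A
    (A[i, j] = 1 means i points to j, e.g. i follows leader j).
    A ground node linked bidirectionally to every node makes the walk
    parameter-free; after convergence the ground node's score is
    redistributed evenly among the ordinary nodes."""
    n = A.shape[0]
    # Augment the network with the ground node (index n).
    G = np.zeros((n + 1, n + 1))
    G[:n, :n] = A
    G[:n, n] = 1.0          # every node links to the ground node
    G[n, :n] = 1.0          # and the ground node links back
    out_deg = G.sum(axis=1)

    s = np.ones(n + 1)
    s[n] = 0.0              # ground node starts with zero score
    for _ in range(max_iter):
        s_new = G.T @ (s / out_deg)   # each node shares its score with its out-neighbours
        if np.abs(s_new - s).sum() < tol:
            s = s_new
            break
        s = s_new
    return s[:n] + s[n] / n           # redistribute the ground node's score

# Toy leadership network: edge i -> j means user i follows user j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
print(leaderrank(A))
```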

13.
14.

Non-orthogonal multiple access (NOMA) and cognitive radio (CR) have recently been put forward as potential solutions to meet the extraordinary demands of fifth-generation (5G) and beyond (B5G) networks and to support Internet of Things (IoT) applications. In NOMA, multiple users can be served within the same orthogonal domain via power-domain multiplexing, whilst CR allows secondary users (SUs) to access the licensed spectrum. This work investigates the possibility of combining orthogonal frequency division multiple access (OFDMA), NOMA, and CR, referred to as a hybrid OFDMA-NOMA CR network. With this hybrid technology, the licensed spectrum is divided into several channels such that a group of SUs is served in each channel using NOMA. In particular, a rate-maximization framework is developed in which user pairing on each channel, power allocation for each user, and secondary user activity are jointly considered to maximize the sum rate of the hybrid OFDMA-NOMA CR network while maintaining a set of relevant NOMA and CR constraints. The resulting sum-rate maximization problem is NP-hard and cannot be solved through classical approaches. Accordingly, we propose a two-stage approach: in the first stage, we propose a novel user pairing algorithm; in the second stage, an iterative algorithm based on sequential convex approximation is used to solve the non-convex rate-maximization problem. Results show that the proposed algorithm outperforms existing schemes and that the CR network features play a major role in determining the overall network performance.
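A minimal sketch of the power-domain NOMA rate expressions for a single two-user pair on one channel, with a brute-force search over the power split standing in for the optimization; the channel gains, noise level, and minimum-rate constraint are hypothetical, and the paper's actual method (user pairing plus sequential convex approximation across channels) is far more involved.

```python
import numpy as np

def noma_pair_rates(g_strong, g_weak, p_strong, p_weak, noise=1e-3):
    """Achievable rates (bits/s/Hz) of a two-user power-domain NOMA pair on one
    channel.  The weak user decodes its signal treating the strong user's
    signal as interference; the strong user removes the weak user's signal via
    successive interference cancellation before decoding its own."""
    r_weak = np.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    r_strong = np.log2(1 + p_strong * g_strong / noise)
    return r_strong, r_weak

# Toy power allocation: grid-search the split of a unit power budget that
# maximizes the pair sum rate while guaranteeing the weak user a minimum rate.
g_strong, g_weak, p_total, r_min = 0.9, 0.2, 1.0, 0.5
best = None
for alpha in np.linspace(0.01, 0.99, 99):        # fraction of power given to the strong user
    rs, rw = noma_pair_rates(g_strong, g_weak, alpha * p_total, (1 - alpha) * p_total)
    if rw >= r_min and (best is None or rs + rw > best[0]):
        best = (rs + rw, alpha)
print('best sum rate %.2f at alpha=%.2f' % best)
```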


15.
Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and in the timing of reaction events between genetically identical cells. Mathematical models are indispensable for the study of phenotypic stochasticity in cellular decision-making and cell survival. There is a demand for versatile stochastic modeling environments with extensive, preprogrammed statistics functions and plotting capabilities that hide the mathematics from novice users and offer low-level programming access to experienced users. Here we present StochPy (Stochastic modeling in Python), a flexible software tool for stochastic simulation in cell biology. It provides various stochastic simulation algorithms, SBML support, analyses of the probability distributions of molecule copy numbers and event waiting times, analyses of stochastic time series, and a range of additional statistical functions and plotting facilities for stochastic simulations. We illustrate the functionality of StochPy with stochastic models of gene expression, cell division, and single-molecule enzyme kinetics. StochPy has been successfully tested against the SBML stochastic test suite, passing all tests. StochPy is a comprehensive software package for stochastic simulation of the molecular control networks of living cells. It allows novice and experienced users to study stochastic phenomena in cell biology. The integration with other Python software makes StochPy both a user-friendly and an easily extensible simulation tool.
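To show the kind of simulation StochPy automates, here is a minimal from-scratch Gillespie direct-method run of a birth-death (immigration-death) gene expression model; this does not use the StochPy API, and the rate constants are illustrative.

```python
import numpy as np

def gillespie_birth_death(k_syn=10.0, k_deg=0.2, x0=0, t_end=50.0, seed=0):
    """Gillespie direct-method simulation of a birth-death process: constant
    synthesis at rate k_syn and first-order degradation at rate k_deg, the
    classic minimal model of stochastic gene expression."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, copies = [t], [x]
    while t < t_end:
        a_syn, a_deg = k_syn, k_deg * x          # reaction propensities
        a_total = a_syn + a_deg
        t += rng.exponential(1.0 / a_total)      # waiting time to the next event
        if rng.random() < a_syn / a_total:       # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        copies.append(x)
    return np.array(times), np.array(copies)

times, copies = gillespie_birth_death()
print('mean copy number after burn-in:', copies[times > 10].mean(),
      '(stationary mean k_syn/k_deg = 50)')
```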

16.
Twitter has the potential to be a timely and cost-effective source of data for syndromic surveillance. When speaking of an illness, Twitter users often report a combination of symptoms, rather than a suspected or final diagnosis, using naïve, everyday language. We developed a minimally trained algorithm that exploits the abundance of health-related web pages to identify all jargon expressions related to a specific technical term. We then translated an influenza case definition into a Boolean query, with each symptom described by a technical term and all related jargon expressions identified by the algorithm. Subsequently, we monitored all tweets reporting a combination of symptoms that satisfied the case-definition query. In order to geolocate messages, we defined three localization strategies based on codes associated with each tweet. We found a high correlation between the trend of our influenza-positive tweets and the influenza-like illness (ILI) trends identified by traditional US surveillance systems.
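A minimal sketch of how a case definition such as "fever AND (cough OR sore throat)" can be turned into a Boolean matcher over jargon expansions; the jargon lists and the example case definition are hypothetical, not the ones produced by the paper's algorithm.

```python
import re

# Hypothetical jargon expansions for each technical term of the case definition.
# In the paper these lists are produced automatically from health-related web pages.
jargon = {
    'fever':       ['fever', 'temperature', 'burning up'],
    'cough':       ['cough', 'coughing', 'hacking'],
    'sore throat': ['sore throat', 'throat hurts', 'scratchy throat'],
}

def has_symptom(text, term):
    """True if the tweet mentions the technical term or any related jargon."""
    return any(re.search(r'\b' + re.escape(expr) + r'\b', text.lower())
               for expr in jargon[term])

def matches_case_definition(text):
    # Illustrative ILI-style Boolean query: fever AND (cough OR sore throat).
    return has_symptom(text, 'fever') and (
        has_symptom(text, 'cough') or has_symptom(text, 'sore throat'))

tweets = [
    "burning up all night and my throat hurts so bad",
    "coughing nonstop, probably just allergies",
]
print([matches_case_definition(t) for t in tweets])
```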

17.
In this paper, based on coupled social networks (CSN), we propose a hybrid algorithm that nonlinearly integrates both the social and the behavioral information of online users. The filtering algorithm, based on the coupled social networks, considers the effects of both social similarity and personalized preference. Experimental results on two real datasets, Epinions and Friendfeed, show that the hybrid pattern not only provides more accurate recommendations but also enlarges the recommendation coverage under a global metric. Further empirical analyses demonstrate that mutual reinforcement and the rich-club phenomenon can also be found in coupled social networks, where the same individuals occupy the core positions of the online system. This work may shed some light on the in-depth understanding of the structure and function of coupled social networks.

18.
Typical data visualizations result from linear pipelines that start by characterizing data using a model or algorithm to reduce the dimension and summarize structure, and end by displaying the data in a reduced-dimensional form. Sensemaking may take place at the end of the pipeline, when users have an opportunity to observe, digest, and internalize any information displayed. However, some visualizations mask meaningful data structures when model or algorithm constraints (e.g., parameter specifications) contradict information in the data. Yet, due to the linearity of the pipeline, users do not have a natural means to adjust the displays. In this paper, we present a framework for creating dynamic data displays that rely on both mechanistic data summaries and expert judgement. The key is that we develop both the theory and methods of a new form of human-data interaction, which we refer to as “Visual to Parametric Interaction” (V2PI). With V2PI, the pipeline becomes bi-directional in that users are embedded in it; users learn from visualizations and the visualizations adjust to expert judgement. We demonstrate the utility of V2PI and a bi-directional pipeline with two examples.

19.
At the UN in New York, the Open Working Group created by the UN General Assembly proposed a set of global Sustainable Development Goals (SDGs) comprising 17 goals and 169 targets. Further to that, a preliminary set of 330 indicators was introduced in March 2015. Some SDGs build on preceding Millennium Development Goals while others incorporate new ideas. A critical review has revealed that indicators of varied quality (in terms of the fulfilment of certain criteria) have been proposed to assess sustainable development. Despite the fact that there is plenty of theoretical work on quality standards for indicators, in practice users often cannot be sure how adequately the indicators measure the monitored phenomena. Therefore we stress the need to operationalise the Sustainable Development Goals’ targets and to evaluate the indicators’ relevance, the most important of the indicators’ quality traits. The current format of the proposed SDGs and their targets has laid out a policy framework; however, without thorough expert and scientific follow-up on their operationalisation, the indicators may remain ambiguous. We therefore argue for a conceptual framework for selecting appropriate indicators for targets from existing sets or for formulating new ones. Experts should focus on the “indicator-indicated fact” relation to ensure the indicators’ relevance, so that clear, unambiguous messages are conveyed to users (decision- and policy-makers as well as the lay public). Finally, we offer some recommendations for indicator providers in order to contribute to the tremendous amount of conceptual work needed to lay a strong foundation for the development of the final indicator framework.

20.
Most traditional scheduling strategies consider only users’ quality of service (QoS) time or cost requirements, lack an effective analysis of users’ real service demands, and cannot guarantee scheduling security. To address these restrictions, this paper adds trust to the workflow QoS targets and proposes a novel customizable cloud workflow scheduling model. In order to better analyze different users’ service requirements and provide customizable services, the new model divides workflow scheduling into two stages: macro-level multi-workflow scheduling, performed per cloud user, and micro-level single-workflow scheduling. A trust mechanism is introduced at the multi-workflow scheduling level. At the single-workflow scheduling level, workflows are classified into three types, time-sensitive, cost-sensitive, and balanced, according to each workflow’s QoS demand parameters using a fuzzy clustering method, and different service strategies are then customized for each type. Simulation experiments show that the new scheme has advantages in shortening a workflow’s final completion time and achieves a relatively high execution success rate and user satisfaction compared with other kindred solutions.
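A minimal sketch of the classification step, assuming each workflow is described by normalized time-importance and cost-importance weights and grouped with a small from-scratch fuzzy c-means; the feature choice and toy data are illustrative, not the paper's exact QoS demand parameters.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U (rows = samples, columns = clusters, each row sums to 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# Each workflow described by two normalized QoS preference weights:
# (importance of completion time, importance of cost).
workflows = np.array([[0.90, 0.10],   # clearly time-sensitive
                      [0.80, 0.20],
                      [0.10, 0.90],   # clearly cost-sensitive
                      [0.20, 0.80],
                      [0.50, 0.50],   # balanced
                      [0.55, 0.45]])
centers, U = fuzzy_cmeans(workflows, c=3)
print('cluster centers:\n', centers)
print('hard labels (time/cost/balanced clusters):', U.argmax(axis=1))
```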

