Similar documents
20 similar documents found (search time: 15 ms)
1.
This paper analyzes the results of different strategies for selecting test products in category appraisals. The three strategies are random selection of products from the marketplace, selection on the basis of consumer sensory data, and selection on the basis of expert panel data. All three methods generate stable results for category appraisals. The stability of the results (e.g., in terms of 'drivers of liking') increases very quickly and then levels off, suggesting that the researcher does not have to work with a particularly large number of products in a category appraisal to understand the sensory-liking dynamics.
It appears, in our case, that 18 products were an appropriate number to closely reproduce the results obtained with 50 products. Researchers need to be sensitive to the variation in each category: some categories may require more or fewer products than this to cover all the sensory ranges within the specific category.
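The stability claim above can be illustrated with a small bootstrap sketch: draw product subsets of increasing size from a hypothetical 50-product category and watch the error against the full-category result shrink quickly, then level off. All values here are synthetic, invented for illustration only.

```python
import random

random.seed(1)

# Hypothetical full category: 50 products, each with a consumer liking
# score (synthetic values; not from the study).
full_liking = [random.gauss(6.0, 1.0) for _ in range(50)]
full_mean = sum(full_liking) / len(full_liking)

def subset_error(n_products, n_boot=200):
    """Mean absolute error of the subset mean vs. the 50-product mean."""
    errs = []
    for _ in range(n_boot):
        sub = random.sample(full_liking, n_products)
        errs.append(abs(sum(sub) / n_products - full_mean))
    return sum(errs) / n_boot

# Error shrinks quickly with subset size, then levels off -- mirroring
# the finding that roughly 18 products can stand in for 50.
errors = {n: subset_error(n) for n in (5, 10, 18, 30)}
```

The rapid flattening of `errors` as `n` grows is the "increases very quickly and then levels off" behavior described in the abstract.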

2.
There are two approaches to modeling key relations among variables when one tests products. S-R, or stimulus-response, modeling assumes that the researcher controls the antecedent physical variables (such as ingredients or processing) and that these physical variables are the primary cause of product-to-product differences. R-R, or response-response, modeling assumes that the researcher can measure co-varying physical measures of a food but may or may not have control over (or even knowledge of) the antecedent physical variables that generate product differences. S-R modeling allows for true optimization, in terms of defining the operations needed to maximize an attribute (e.g., acceptance). R-R modeling allows only a guess as to which combination of physical measures would correspond to a maximum level of the attribute. S-R and R-R modeling and optimization are often confused with each other, leading to incorrect conclusions.
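A minimal sketch of the S-R case: when the researcher controls an ingredient level directly, acceptance can be modeled as a function of that level and the optimum follows from the model itself. The quadratic form and its coefficients below are hypothetical, chosen only to show why control of the stimulus permits true optimization.

```python
# S-R sketch: acceptance modeled as a quadratic in one controlled
# ingredient level x. Coefficients are invented for illustration;
# b2 < 0 guarantees a maximum exists.
b0, b1, b2 = 3.0, 4.0, -2.0

def acceptance(x):
    return b0 + b1 * x + b2 * x ** 2

# True optimization is possible because x is under the researcher's
# control: the maximizing level is the vertex of the parabola.
x_opt = -b1 / (2 * b2)
```

Under R-R modeling no such `x_opt` can be computed, because the measured variables co-vary without being independently controllable; only a guess at a favorable combination is possible.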

3.
Environmental factors control species distributions and abundances, but the effectiveness of land use and disturbance variables for modeling species, compared with climate, soil, and topography variables, is generally unknown. Therefore, I used predictor variables from categories of 1) land use and disturbance, 2) climate, and 3) soil, topography, and wind speed to model the relative abundances (i.e., percentage of all trees) of 65 common tree species in the eastern United States, with a contrast to presence-absence models of species distributions. First, I modeled variables within each category to identify the five most important variables. Then, I combined variables from each category to isolate the most important variables, based on five model combinations of input variables from each category, ranging from one (i.e., three total) to five (i.e., 15 total) variables. From the five models of combined categories for each tree species, I identified the model with the greatest R2 value. Overall, climate variables were most important for tree species models with one and two input variables from each category, but land use and disturbance variables were most important for models with three to five input variables from each category. Although a range of R2 values occurred by species and number of input model variables, 32 species had best models with greatest R2 values of 0.50 to 0.81. For all best species models, the most important variables were temperature of the warmest quarter, historical fire return interval for all fires, agricultural area during years 1850 to 1997, and precipitation of the driest month. Current land cover classes, which are accessible and the most commonly modeled land use variables, were not important for modeling tree species abundances or distributions. Climate variables were most important for modeling species distributions.
Results support the concept that while climate sets soft boundaries on distributions, relative abundances within distributions are affected by other filters. Future modeling may establish other important land use and disturbance variables, or refinements within the important variables of historical fire return interval and agricultural area over time, advancing the integration of both land use and climate variables into studies.
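The model-selection step described above (keep the combined model with the greatest R2) reduces to a simple comparison. The sketch below uses synthetic observations and candidate predictions, not the study's data; the candidate names are invented labels for variable-category combinations.

```python
# Select, among candidate models, the one with the greatest R^2.
# Observations and predictions are synthetic, for illustration only.
obs = [10.0, 12.0, 9.0, 15.0, 11.0]

def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot

candidates = {
    "climate_only": [10.5, 11.0, 9.5, 14.0, 11.5],
    "landuse_only": [11.0, 12.5, 8.0, 13.0, 12.0],
    "combined_3x5": [10.1, 12.2, 9.2, 14.8, 10.9],
}
best = max(candidates, key=lambda m: r_squared(obs, candidates[m]))
```

In the abstract, this comparison is run per species across the five combined-category models, yielding the reported best-model R2 range of 0.50 to 0.81.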

4.
The Sleeping Beauty (SB) transposon system provides the first random insertional mutagen available for germline genetic screens in mice. In preparation for a large-scale project to create, map and manage up to 5000 SB insertions, we have developed the Mouse Transposon Insertion Database (MTID; http://mouse.ccgb.umn.edu/transposon/). Each insertion's genomic position, as well as the distance between the insertion and the nearest annotated gene, is determined by a sequence analysis pipeline. Users can search the database using a specified nucleotide or genetic map position to identify the nearest insertion. Mouse reports describe the insertions carried, strain, genotype and dates of birth and death. Insertion reports describe chromosome, nucleotide and genetic map positions, as well as nearest-gene data from Ensembl, NCBI and Celera. The flanking sequence used to map the insertion is also provided. Researchers will be able to identify insertions of interest and request mice or frozen sperm that carry the insertion.
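The nearest-insertion lookup the database offers can be sketched as a small distance query. The insertion names and positions below are invented placeholders, not MTID records.

```python
# Sketch of a nearest-insertion lookup: given a query nucleotide
# position on a chromosome, return the closest mapped insertion.
# All records are hypothetical.
insertions = {
    "ins_001": ("chr5", 1_200_000),
    "ins_002": ("chr5", 3_500_000),
    "ins_003": ("chr11", 900_000),
}

def nearest_insertion(chrom, position):
    # Distances to every insertion on the same chromosome.
    same_chrom = {name: abs(pos - position)
                  for name, (c, pos) in insertions.items() if c == chrom}
    return min(same_chrom, key=same_chrom.get) if same_chrom else None

hit = nearest_insertion("chr5", 3_000_000)
```

A real query against MTID would of course run against the full set of mapped insertions rather than an in-memory dictionary.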

5.
Researchers trained 24 black-capped (Poecile atricapillus) and 12 mountain (P. gambeli) chickadees in an operant conditioning task to determine whether they use open-ended categorization to classify "chick-a-dee" calls, and whether black-capped chickadees that had experience with mountain chick-a-dee calls (sympatric group) would perform this task differently than inexperienced black-capped chickadees (allopatric group). All experimental birds learned to discriminate between the two species' call categories faster than within a single category (Experiment 1), and subsequently classified novel and original between-category chick-a-dee calls in Experiments 2 and 3 following a change in the category contingency. These results suggest that, regardless of previous experience, black-capped and mountain chickadees classify their own and the other species' calls into two distinct, yet open-ended, species-level categories.

6.
Researchers use the 13-lined ground squirrel for studies of hibernation biochemistry and physiology, as well as for modeling a variety of potential biomedical applications of hibernation physiology. It is currently necessary to capture research specimens from the wild; this presents a host of unknown variables, not least of which is the stress of captivity. Moreover, many investigators are unfamiliar with the husbandry of this species. The authors describe practical methods for capturing these animals, their year-round care (including hibernation), captive mating, and rearing of the young. These practices will allow researchers to better standardize their populations of research animals, optimizing the use of this interesting model organism.

7.
Several individual miRNAs (miRs) have been implicated as potent regulators of important processes during normal and malignant hematopoiesis. In addition, many miRs have been shown to fine-tune intricate molecular networks, in concert with other regulatory elements. In order to study hematopoietic networks as a whole, we first created a map of global miR expression during early murine hematopoiesis. Next, we determined the copy number per cell for each miR in each of the examined stem and progenitor cell types. As emerging data indicate that miRs function robustly mainly when they are expressed above a certain threshold (∼100 copies per cell), our database provides a resource for determining which miRs are expressed at a potentially functional level in each cell type. Finally, we combine our miR expression map with matched mRNA expression data and external prediction algorithms, using a Bayesian modeling approach to create a global landscape of predicted miR-mRNA interactions within each of these hematopoietic stem and progenitor cell subsets. This approach implicates several interaction networks comprising a "stemness" signature in the most primitive hematopoietic stem cell (HSC) populations, as well as "myeloid" patterns associated with two branches of myeloid development.
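The thresholding step the abstract describes, treating a miR as potentially functional only above roughly 100 copies per cell, is a simple filter. The miR names and copy numbers below are invented stand-ins, not values from the database; only the ~100-copy threshold comes from the abstract.

```python
# Filter miRs to those expressed at a potentially functional level
# (> ~100 copies per cell) in a given cell type. Data are hypothetical.
COPIES_THRESHOLD = 100

mir_copies = {
    "miR-150": {"HSC": 250, "progenitor": 40},
    "miR-223": {"HSC": 30, "progenitor": 800},
}

def functional_in(cell_type):
    """miRs above the copy-number threshold in one cell type."""
    return sorted(m for m, counts in mir_copies.items()
                  if counts[cell_type] > COPIES_THRESHOLD)
```

Applied per cell type across the expression map, this yields the set of miRs worth carrying into the downstream miR-mRNA interaction modeling.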

8.
We present a detailed approach for creating realistic silica pores for computer simulations, especially molecular dynamics (MD) simulations. Such pores are essential for many kinds of simulations of liquids in silica confinement. Despite the wide use of silica pores in simulations, detailed documentation of how to create them has been lacking. This matters because, with the help of this paper, every researcher can build silica pores with the desired geometry rather than being restricted to already existing pores. We discuss problems that might occur during the process and how to solve them. So far, more than three different silica pores have been created with this method and used successfully as confinement material in MD simulations.

9.
Sonia Stephens. Evolution 2012, 5(4): 603-618
Diagrams can be important tools for communicating about evolution. One of the most common visual metaphors uniting the variety of diagrams that describe macroevolution is a tree. Tree-based diagrams are designed to provide a phylogenetic framework for thinking about evolutionary pattern. As with any other metaphor, however, misunderstandings about evolution may either arise from or be perpetuated by how we depict the tree of life. Researchers have tried various approaches to create tree-based diagrams that communicate evolution more accurately. This paper addresses the conceptual limitations of the tree as a visual metaphor for evolution and explores the ways we can use digital tools to extend our visual metaphors for evolution communication. The theory of distributed cognition provides a framework to aid in the analysis of the conceptual affordances and constraints of tree-based diagrams and to develop new ways to visualize evolution. By combining a new map-based visual metaphor for macroevolution with the interactive properties of digital technology, a new method of visualizing evolution called the dynamic evolutionary map is proposed. This paper concludes by comparing the metaphoric affordances and constraints of tree diagrams and the dynamic evolutionary map, and discussing the potential applications of the latter as an educational tool.

10.
Cancer is recognized to be a family of gene-based diseases whose causes are to be found in disruptions of basic biologic processes. An increasingly deep catalogue of canonical networks details the specific molecular interactions of genes and their products. However, mapping of disease phenotypes to alterations of these networks of interactions is accomplished indirectly and non-systematically. Here we objectively identify pathways associated with malignancy, staging, and outcome in cancer through application of an analytic approach that systematically evaluates differences in the activity and consistency of interactions within canonical biologic processes. Using large collections of publicly accessible genome-wide gene expression data, we identify small, common sets of pathways - TrkA Receptor, Apoptosis response to DNA Damage, Ceramide, Telomerase, CD40L and Calcineurin - whose differences robustly distinguish diverse tumor types from corresponding normal samples, predict tumor grade, and distinguish phenotypes such as estrogen receptor status and p53 mutation state. Pathways identified through this analysis perform as well as or better than phenotypes used in the original studies in predicting cancer outcome. This approach provides a means to use genome-wide characterizations to map key biological processes to important clinical features in disease.

11.
The hermeneutics of ecological simulation
Computer simulation has become important in ecological modeling, but there have been few assessments of how complex simulation models differ from more traditional analytic models. In Part I of this paper, I review the challenges faced in complex ecological modeling and how models have been used to gain theoretical purchase for understanding natural systems. I compare the use of traditional analytic models and computer simulation models and point out that the two methods require different kinds of practical engagement. I examine a case study of three models from the insect resistance literature on transgenic crops to illustrate and explore differences between analytic and computer simulation models. I argue that the analysis of simulation models has often been inappropriately managed with expectations derived from handling analytic models. In Part II, I look at simulation as a hermeneutic practice. I argue that simulation modeling is a practice, or techné. I then explore five aspects of philosophical hermeneutics that may be useful in complex ecological simulation: (1) an openness to multiple perspectives, allowing multiple levels of scientific pluralism; (2) the hermeneutic circle, a back and forth in active communication among both modelers and ecologists; (3) the recognition of human factors and the nature of human practices as such, including the role of judgments and choices in the modeling enterprise; (4) the importance of play in modeling; and (5) the non-closed nature of hermeneutic engagement: continued dialogue and recognition of the situatedness, incompleteness, and tentative nature of simulation models.
Steven L. Peck

12.
This is an interdisciplinary lesson designed for middle school students studying landforms and geological processes. Students create a two-dimensional topographic map from a three-dimensional landform that they build from clay. Students then use other groups' topographic maps to re-create landforms. Following this, students explore some basic ideas about how landforms take shape and how they can change over time. As students work through three distinct learning-cycle phases of concept exploration, introduction, and application, they use art, language arts, and mathematical skills to strengthen or form new science and social studies concepts.

13.
High-level specification of how the brain represents and categorizes the causes of its sensory input makes it possible to link "what is to be done" (the perceptual task) with "how to do it" (the neural network calculation). In this article, we describe how the variational framework, which has met with considerable success in modeling computer vision tasks, has some interesting relationships, at a mesoscopic scale, with computational neuroscience. We focus on cortical map computations such that "what is to be done" can be represented as a variational approach, i.e., an optimization problem defined over a continuous functional space. In particular, generalizing some existing results, we show how a general variational approach can be solved by an analog neural network with a given architecture, and conversely. Numerical experiments are provided as an illustration of this general framework, which is promising for modeling macro-behaviors in computational neuroscience.

14.
Aim, Scope and Background  Given the communication limitations of a damage-oriented approach, the question addressed in this paper is how normalisation can be developed instead. Normalisation of product service systems without value choices is, in accordance with ISO 14042, suitable for external communication. Recent normalisation approaches use a geographically defined baseline year of emissions, optionally combined with politically established target emissions (Guinée 2002, Stranddorf et al. 2001). In contrast to these approaches, this paper aims to draw up the general structure of an alternative normalisation procedure. The normalisation procedure suggested here is based on environmental quality objectives (EQO), in order to streamline the result to as few output parameters as possible without compromising the scientific robustness of the method. Main Features  This article describes a normalisation procedure based on environmental quality objectives. A comparison between this approach and a damage-oriented approach is conducted. The relevant working area concerning dose and effect is evaluated. A discussion then focuses on the trade-offs necessary to achieve an integrated category indicator, covering the following issues: model reliability, user applicability and the unambiguity of the result. Results  A damage-oriented approach will have to take into account all the defined consequences from all impact categories that affect the safeguards in parallel. In other words, each impact category indicator and its potential effects on all safeguards must be evaluated and accounted for. In the case where a single category indicator cannot be found without utilising value choices, a number of category indicators will then have to constitute an intermediate category indicator result, where weighting must be applied in order to streamline the result.
In contrast to the above approach, the suggested normalisation procedure utilises the precautionary principle with respect to the essential EQO in order to achieve a category indicator result, called a critical load category indicator result. In practice, this means that the number of figures in an LCIA profile based on critical load will always equal the number of impact categories. Conclusions  The suggested EQO normalisation procedure forms a set of critical loads per impact category, each defined by a critical load function that is linear between a zero load and the critical load. This procedure affects the temporal resolution and the field of application of the LCIA method. The positive aspect is that the suggested normalisation procedure makes the method applicable to long-lived products such as buildings or other infrastructure. This is gained by reducing the damage-oriented resolution. Consequently, long-lived products whose main environmental loads will appear in the future are hard to assess with a damage-oriented LCIA method (if all boundary conditions are not assumed to be fixed); the EQO normalisation method will, in this respect, improve the overall reliability of the outcome of an LCA when long-lived products are assessed. For short-lived products, adequate boundary conditions can be achieved, and for this reason a damage-oriented approach can address current consequences. Nevertheless, the working area of a damage-oriented approach does not extend below thresholds, unlike the EQO normalisation procedure. The most effective decision support for short-lived products is therefore achieved when both approaches are applied. Outlook  A complementary paper will exemplify the described normalisation procedure in a case study, with special interest in the assessment of chemical substances.
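The critical load function described above, linear between a zero load and the critical load, can be sketched directly: the normalised indicator is the load divided by the critical load, giving exactly one figure per impact category. The impact categories and critical load figures below are invented placeholders, not values from the paper.

```python
# Sketch of a critical load category indicator: linear normalisation
# between zero load and the critical load, so a value of 1.0 means the
# environmental quality objective is exactly met, and values above 1.0
# exceed it. Critical loads here are hypothetical figures.
critical_loads = {"acidification": 500.0, "eutrophication": 120.0}

def critical_load_indicator(category, load):
    return load / critical_loads[category]

# One figure per impact category, as the abstract notes for an
# LCIA profile based on critical load. Loads are synthetic.
inventory_loads = {"acidification": 250.0, "eutrophication": 180.0}
profile = {cat: critical_load_indicator(cat, load)
           for cat, load in inventory_loads.items()}
```

The length of `profile` always equals the number of impact categories, which is the streamlining property the procedure is designed to deliver.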

15.
Protein expression profiling is increasingly being used to discover, validate and characterize biomarkers that can potentially be used for diagnostic purposes and to aid in pharmaceutical development. Correct analysis of data obtained from these experiments requires an understanding of the underlying analytic procedures used to obtain the data, statistical principles underlying high-dimensional data and clinical statistical tools used to determine the utility of the interpreted data. This review summarizes each of these steps, with the goal of providing the nonstatistician proteomics researcher with a working understanding of the various approaches that may be used by statisticians. Emphasis is placed on the process of mining high-dimensional data to identify a specific set of biomarkers that may be used in a diagnostic or other assay setting.  相似文献   

17.
Mapping global cropland and field size
Steffen Fritz  Linda See  Ian McCallum  Liangzhi You  Andriy Bun  Elena Moltchanova  Martina Duerauer  Fransizka Albrecht  Christian Schill  Christoph Perger  Petr Havlik  Aline Mosnier  Philip Thornton  Ulrike Wood‐Sichra  Mario Herrero  Inbal Becker‐Reshef  Chris Justice  Matthew Hansen  Peng Gong  Sheta Abdel Aziz  Anna Cipriani  Renato Cumani  Giuliano Cecchi  Giulia Conchedda  Stefanus Ferreira  Adriana Gomez  Myriam Haffani  Francois Kayitakire  Jaiteh Malanding  Rick Mueller  Terence Newby  Andre Nonguierma  Adeaga Olusegun  Simone Ortner  D. Ram Rajak  Jansle Rocha  Dmitry Schepaschenko  Maria Schepaschenko  Alexey Terekhov  Alex Tiangwa  Christelle Vancutsem  Elodie Vintrou  Wu Wenbin  Marijn van der Velde  Antonia Dunwoody  Florian Kraxner  Michael Obersteiner 《Global Change Biology》2015,21(5):1980-1992
A new 1 km global IIASA‐IFPRI cropland percentage map for the baseline year 2005 has been developed that integrates a number of individual cropland maps at global, regional, and national scales. The individual map products include existing global land cover maps such as GlobCover 2005 and MODIS v.5, regional maps such as AFRICOVER, and national maps from mapping agencies and other organizations. The different products are ranked at the national level using crowdsourced data from Geo‐Wiki to create a map that reflects the likelihood of cropland. Calibration with national and subnational crop statistics was then undertaken to distribute the cropland within each country and subnational unit. The new IIASA‐IFPRI cropland product has been validated using very high‐resolution satellite imagery via Geo‐Wiki and has an overall accuracy of 82.4%. It has also been compared with the EarthStat cropland product and shows a lower root mean square error on an independent data set collected from Geo‐Wiki. The first ever global field size map was produced at the same resolution as the IIASA‐IFPRI cropland map, based on interpolation of field size data collected via a Geo‐Wiki crowdsourcing campaign. A validation exercise of the global field size map revealed satisfactory agreement with control data, particularly given the relatively modest size of the field size data set used to create the map. Both are critical inputs to global agricultural monitoring in the frame of GEOGLAM and will serve the global land modelling and integrated assessment community, in particular for improving land use models that require baseline cropland information. These products are freely available for downloading from the http://cropland.geo-wiki.org website.
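The two validation quantities cited above, overall accuracy against reference points and root mean square error against an independent data set, are standard arithmetic and can be sketched as follows. All reference and map values here are synthetic stand-ins, not Geo-Wiki data.

```python
import math

def overall_accuracy(reference, predicted):
    """Fraction of reference points the map classifies correctly."""
    hits = sum(r == p for r, p in zip(reference, predicted))
    return hits / len(reference)

def rmse(reference, predicted):
    """Root mean square error of cropland percentages."""
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted))
                     / len(reference))

# Hypothetical validation points (1 = cropland, 0 = non-cropland).
ref_class = [1, 1, 0, 1, 0]
map_class = [1, 0, 0, 1, 0]
acc = overall_accuracy(ref_class, map_class)

# Hypothetical cropland percentages at independent control points:
# the product with the lower RMSE wins the comparison.
ref_pct = [80.0, 10.0, 55.0]
map_a_pct = [75.0, 20.0, 50.0]
map_b_pct = [60.0, 40.0, 30.0]
better = rmse(ref_pct, map_a_pct) < rmse(ref_pct, map_b_pct)
```

The reported 82.4% accuracy and the IIASA-IFPRI vs. EarthStat RMSE comparison are instances of exactly these two computations, run over the full Geo-Wiki validation sets.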

18.

Purpose

The purpose of this work is to identify and select safeguard subjects and state indicators that are suitable for sustainability assessment in product and production development, using an interpretation of the Brundtland definition of sustainable development. The purpose is also to investigate how indicators selected in this way differ from other selections in the literature.

Methods

We use a top-down approach, which starts with reviewing the Brundtland definition of sustainability and identifying the corresponding human basic needs to be satisfied. For each basic need, we identify relevant satisfiers, and for each satisfier, a number of safeguard subjects. The safeguard subjects represent critical resources for making satisfiers available. For each safeguard subject, a number of state indicators (=endpoint category indicators) are selected that are relevant for describing impacts from product life cycles on the safeguard subject.

Results and discussion

Ecosystem services, access to water, and abiotic resources are identified as environmental safeguard subjects. Technology for transport, environment, textiles, housing, food, information, and energy, together with income, are identified as economic safeguard subjects. Human health, land availability, peace, social security, continuity, knowledge, jobs/occupation, and culture are identified as social safeguard subjects. In comparison with other selections of safeguard subjects in the literature, our safeguard subjects are structured differently and delimited in scope, but there are also many similarities. The best agreement is on environmental issues, although we classify human health as a social issue. For social issues, we identify fewer safeguard subjects and state indicators than the UNEP/SETAC recommendations. For economic issues, we diverge from current LCC and approach UNECE measures of sustainability.

Conclusions

Identification and selection of safeguard subjects and state indicators benefit from a clear definition of sustainability, the needs to be satisfied, and the satisfiers. The interpretation of the sustainability concept has a large influence on which safeguard subjects are included and which indicators are needed to describe their state. Capacity building is an important sustainability indicator, which should be developed further for use in life cycle sustainability assessment. The top-down approach offers a good arena for further research and discussion on how to structure and focus LCSA. Our results should be seen as one example of the safeguard subjects that may be identified with the top-down approach presented here.

19.
Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher who wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."
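The prototypicality index described above, proximity to the other items in the MDS space, can be sketched as a mean pairwise distance. The 2-D coordinates below are invented, not taken from the database.

```python
import math

# Hypothetical MDS coordinates for three exemplars of one category;
# the outlying exemplar should score as least prototypical.
coords = {
    "exemplar_a": (0.1, 0.0),
    "exemplar_b": (0.0, 0.2),
    "exemplar_c": (3.0, 3.0),  # atypical, far from the others
}

def mean_distance(item):
    """Average Euclidean distance from one exemplar to all others."""
    others = [p for name, p in coords.items() if name != item]
    return sum(math.dist(coords[item], p) for p in others) / len(others)

# Smaller mean distance = closer to the other items = more prototypical.
most_prototypical = min(coords, key=mean_distance)
```

In the database this index is computed per picture within each category's fitted MDS solution, rather than over toy coordinates.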

20.
Kosloff M, Kolodny R. Proteins 2008, 71(2): 891-902
It is often assumed that in the Protein Data Bank (PDB), two proteins with similar sequences will also have similar structures. Accordingly, it has proved useful to develop subsets of the PDB from which "redundant" structures have been removed, based on a sequence-based criterion for similarity. Similarly, when predicting protein structure by homology modeling, selecting a template structure for a target sequence by sequence alone implicitly assumes that all sequence-similar templates are equivalent. Here, we show that this assumption is often not correct and that standard approaches to creating subsets of the PDB can lead to the loss of structurally and functionally important information. We have carried out sequence-based structural superpositions and geometry-based structural alignments of a large number of protein pairs to determine the extent to which sequence similarity ensures structural similarity. We find many examples where two proteins that are similar in sequence have structures that differ significantly from one another. The source of the structural differences usually has a functional basis. The number of such protein pairs that are identified and the magnitude of the dissimilarity depend on the approach used to calculate the differences; in particular, sequence-based structural superpositioning identifies a larger number of structurally dissimilar pairs than geometry-based structural alignment. When two sequences can be aligned in a statistically meaningful way, sequence-based structural superpositioning provides a meaningful measure of structural differences. This approach and geometry-based structural alignments reveal somewhat different information, and one or the other might be preferable in a given application. Our results suggest that in some cases, notably homology modeling, the common use of nonredundant datasets culled from the PDB based on sequence may mask important structural and functional information.
We have established a database of sequence-similar, structurally dissimilar protein pairs that will help address this problem (http://luna.bioc.columbia.edu/rachel/seqsimstrdiff.htm).
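The quantity at the heart of this comparison, structural difference over residues paired by a sequence alignment, is commonly expressed as a coordinate RMSD; a large RMSD despite high sequence identity flags one of the "sequence-similar, structurally dissimilar" pairs. The sketch below computes RMSD over toy 3-D coordinates (not PDB data, and with superposition already assumed done).

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation over paired (already superposed) atoms."""
    n = len(coords_a)
    return math.sqrt(sum(sum((x - y) ** 2 for x, y in zip(a, b))
                         for a, b in zip(coords_a, coords_b)) / n)

# Hypothetical C-alpha traces for one protein and two comparison partners.
protein_a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
similar = [(0.1, 0.0, 0.0), (1.4, 0.1, 0.0), (3.0, 0.2, 0.0)]
dissimilar = [(0.0, 0.0, 0.0), (1.5, 4.0, 0.0), (3.0, 8.0, 0.0)]

# A pair whose RMSD stays large even though the sequences align well is
# the kind of case the database catalogues.
divergent = rmsd(protein_a, dissimilar) > rmsd(protein_a, similar)
```

In the study itself the residue pairing comes either from the sequence alignment (sequence-based superposition) or from the geometry-based structural alignment, which is why the two approaches flag different numbers of dissimilar pairs.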


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号