Similar Articles
20 similar articles found (search time: 15 ms)
1.
Every field of biology has its assumptions, but when they grow to be dogma, they can become constraining. This essay presents data-based challenges to several prominent assumptions of developmental physiologists. The ubiquity of allometry is such an assumption, yet animal development is characterized by rate changes that are counter to allometric predictions. Physiological complexity is assumed to increase with development, but examples are provided showing that complexity can be greatest at intermediate developmental stages. It is assumed that organs have functional equivalency in embryos and adults, yet embryonic structures can have quite different functions than inferred from adults. Another assumption challenged is the duality of neural control (typically sympathetic and parasympathetic), since one of these two regulatory mechanisms typically appears considerably earlier in development than the other. A final assumption challenged is the notion that divergent phylogeny creates divergent physiologies in embryos just as in adults, when in fact early in development disparate vertebrate taxa show great quantitative as well as qualitative similarity. Collectively, the inappropriateness of these prominent assumptions based on adult studies suggests that investigation of embryos, larvae and fetuses be conducted with appreciation for their potentially unique physiologies.

2.
There has been a strong tendency in structural and symbolic anthropology to assume that sex and aggression are of no concern to cultural symbol systems. Even when cultural beliefs, myths, or rituals are explicitly and preponderantly sexual or aggressive in content, they are typically interpreted as metaphors for social structural themes. This thesis is illustrated with respect to aggression by an analysis of Lévi-Strauss' interpretation of a Bororo myth, after which the assumptions that structural theory makes concerning the place of aggression in cultural symbol systems are contrasted with the opposing assumptions of psychoanalytic theory. [structuralism, psychoanalysis, cultural symbol systems, myth]

3.
This article identifies a set of assumptions that underlie culturalist approaches to ethnic nationalism and assesses them from a particular instrumentalist point of view: collective-choice theory. It is argued that cultural approaches are structuralist, leaving little room for intentional explanations, and that when agent-centred explanations are used, they are typically embedded within a moral economic theory of groups. In contrast, collective-choice theory is intentionalist and political-economic in orientation. From the perspective of these different approaches, the article examines a common dilemma of mobilization in nationalist movements: how popular support can be mobilized by activists who, for entrepreneurial or ideological reasons, have formed a nationalist organization. Empirical illustrations are drawn from interwar Brittany and contemporary Quebec.

4.
In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
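The motivation for Dirichlet process mixtures noted above — avoiding a fixed, finite number of latent classes — can be illustrated with the stick-breaking construction of the DP's mixture weights. A minimal Python sketch; the concentration parameter and truncation level are arbitrary illustrative choices, not values from the article:

```python
import random

def stick_breaking_weights(alpha, truncation, rng):
    """Truncated stick-breaking construction of Dirichlet process weights.

    Each break removes a Beta(1, alpha) fraction of the remaining stick;
    small alpha concentrates mass on a few components, large alpha
    spreads it over many.
    """
    weights = []
    remaining = 1.0
    for _ in range(truncation):
        frac = rng.betavariate(1.0, alpha)
        weights.append(remaining * frac)
        remaining *= 1.0 - frac
    return weights

rng = random.Random(0)
w = stick_breaking_weights(alpha=1.0, truncation=50, rng=rng)
```

Because the weights decay stochastically, the number of effectively occupied classes adapts to the data rather than being fixed in advance, which is exactly the generalization over finite mixtures the abstract describes.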

5.
Synthetic biology is often understood in terms of the pursuit of well-characterized biological parts to create synthetic wholes. Accordingly, it has typically been conceived of as an engineering-dominated and application-oriented field. We argue that the relationship of synthetic biology to engineering is far more nuanced than that and involves a sophisticated epistemic dimension, as shown by the recent practice of synthetic modeling. Synthetic models are engineered genetic networks that are implanted in a natural cell environment. Their construction is typically combined with experiments on model organisms as well as mathematical modeling and simulation. What is especially interesting about this combinational modeling practice is that, apart from greater integration between these different epistemic activities, it has also led to the questioning of some central assumptions and notions on which synthetic biology is based. As a result, synthetic biology is in the process of becoming more “biology inspired.”

6.

Background

Mathematical models have been used to study the dynamics of infectious disease outbreaks and predict the effectiveness of potential mass vaccination campaigns. However, models depend on simplifying assumptions to be tractable, and the consequences of making such assumptions need to be studied. Two assumptions usually incorporated by mathematical models of vector-borne disease transmission are homogeneous mixing among the hosts and vectors and a homogeneous distribution of the vectors.

Methodology/Principal Findings

We explored the effects of mosquito movement and distribution in an individual-based model of dengue transmission in which humans and mosquitoes are explicitly represented in a spatial environment. We found that the limited flight range of the vector in the model greatly reduced its ability to transmit dengue among humans. A model that does not assume a limited flight range could yield similar attack rates when transmissibility of dengue was reduced by 39%. A model in which mosquitoes are distributed uniformly across locations behaves similarly to one in which the number of mosquitoes per location is drawn from an exponential distribution with a slightly higher mean number of mosquitoes per location. When the models with different assumptions were calibrated to have similar human infection attack rates, mass vaccination had nearly identical effects.
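The two vector-distribution assumptions compared in the model can be sketched as follows. This is a hypothetical illustration: the location count and the mean numbers of mosquitoes per location are placeholders, not the calibrated values from the study:

```python
import random

def mosquito_counts(n_locations, mean, distribution, rng):
    """Per-location mosquito counts under the two model assumptions compared
    in the study: identical counts everywhere vs. exponentially distributed
    counts across locations."""
    if distribution == "uniform":
        return [float(mean)] * n_locations
    if distribution == "exponential":
        # expovariate takes the rate parameter, i.e. 1 / mean
        return [rng.expovariate(1.0 / mean) for _ in range(n_locations)]
    raise ValueError(f"unknown distribution: {distribution}")

rng = random.Random(42)
# Hypothetical numbers: the exponential model gets a slightly higher mean,
# echoing the finding that the two models behave similarly under that shift
uniform = mosquito_counts(1000, mean=20, distribution="uniform", rng=rng)
hetero = mosquito_counts(1000, mean=22, distribution="exponential", rng=rng)
```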

Conclusions/Significance

Small changes in assumptions in a mathematical model of dengue transmission can greatly change its behavior, but estimates of the effectiveness of mass dengue vaccination are robust to some simplifying assumptions typically made in mathematical models of vector-borne disease.

7.
Neuroprosthetic devices such as a computer cursor can be controlled by the activity of cortical neurons when an appropriate algorithm is used to decode motor intention. Algorithms which have been proposed for this purpose range from the simple population vector algorithm (PVA) and optimal linear estimator (OLE) to various versions of Bayesian decoders. Although Bayesian decoders typically provide the most accurate off-line reconstructions, it is not known which model assumptions in these algorithms are critical for improving decoding performance. Furthermore, it is not necessarily true that improvements (or deficits) in off-line reconstruction will translate into improvements (or deficits) in on-line control, as the subject might compensate for the specifics of the decoder in use at the time. Here, by comparing the performance of nine decoders, we show that assumptions about uniformly distributed preferred directions and about the way cursor trajectories are smoothed have the most impact on decoder performance in off-line reconstruction, while assumptions about tuning-curve linearity and spike-count variance play relatively minor roles. In on-line control, subjects compensate for directional biases caused by non-uniformly distributed preferred directions, leaving cursor smoothing differences as the largest single algorithmic difference driving decoder performance.
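Of the decoders mentioned, the population vector algorithm is simple enough to sketch: the decoded direction is the firing-rate-weighted vector sum of the neurons' preferred directions. A toy example with a hypothetical eight-neuron, cosine-tuned population (not data from the study):

```python
import math

def population_vector(rates, preferred_dirs):
    """Population vector algorithm (PVA): decode movement direction as the
    firing-rate-weighted vector sum of each neuron's preferred direction."""
    x = sum(r * math.cos(pd) for r, pd in zip(rates, preferred_dirs))
    y = sum(r * math.sin(pd) for r, pd in zip(rates, preferred_dirs))
    return math.atan2(y, x)

# Hypothetical population: 8 neurons with evenly spaced preferred directions
prefs = [2.0 * math.pi * i / 8 for i in range(8)]
true_dir = math.pi / 4
# Rectified cosine tuning around the true movement direction
rates = [max(0.0, math.cos(pd - true_dir)) for pd in prefs]
decoded = population_vector(rates, prefs)
```

With evenly spaced preferred directions the estimate is unbiased; the directional biases the abstract discusses arise precisely when this uniformity assumption fails.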

8.
Correlations between the amount of energy received by an assemblage and the number of species that it contains are very general, and at the macro-scale such species-energy relationships typically follow a monotonically increasing curve. Whilst the ecological literature contains frequent reports of such relationships, debate on their causal mechanisms is limited and typically focuses on the role of energy availability in controlling the number of individuals in an assemblage. Assemblages from high-energy areas may contain more individuals enabling species to maintain larger, more viable populations, whose lower extinction risk elevates species richness. Other mechanisms have, however, also been suggested. Here we identify and clarify nine principal mechanisms that may generate positive species-energy relationships at the macro-scale. We critically assess their assumptions and applicability over a range of spatial scales, derive predictions for each and assess the evidence that supports or refutes them. Our synthesis demonstrates that all mechanisms share at least one of their predictions with an alternative mechanism. Some previous studies of species-energy relationships appear not to have recognised the extent of shared predictions, and this may detract from their contribution to the debate on causal mechanisms. The combination of predictions and assumptions made by each mechanism is, however, unique, suggesting that, in principle, conclusive tests are possible. Sufficient testing of all mechanisms has yet to be conducted, and no single mechanism currently has unequivocal support. Each may contribute to species-energy relationships in some circumstances, but some mechanisms are unlikely to act simultaneously. Moreover, a limited number appear particularly likely to contribute frequently to species-energy relationships at the macro-scale. The increased population size, niche position and diversification rate mechanisms are particularly noteworthy in this context.  

9.
Preparing the Supplement to the Surgeon General's Report on Mental Health proved controversial because the assignment, by its nature, challenged several forms of consensus that typically remain unexamined. These included disciplinary assumptions about theory and methods, sociopolitical assumptions about the relevance of history to the contemporary circumstances of ethnic minority groups in America, views on the rigor and usefulness of cultural formulation, and the question of whether the burden of proof rested with those who took for granted that sociocultural differences exist in theories of behavior, or with those who took for granted the existence of universals. Preparation of the Supplement illustrates the uncertainty and tension that arise when unexamined boundaries and perspectives lose their capacity to serve as guides to scientific judgment and discourse.

10.
False positive peptide identifications are a major concern in the field of peptide-centric, mass spectrometry-driven gel-free proteomics. They occur in regions where the score distributions of true positives and true negatives overlap. Removal of these false positive identifications necessarily involves a trade-off between sensitivity and specificity. Existing postprocessing tools typically rely on a fixed or semifixed set of assumptions in their attempts to optimize both the sensitivity and the specificity of peptide and protein identification using MS/MS spectra. Because of the expanding diversity in available proteomics technologies, however, these postprocessing tools often struggle to adapt to emerging, technology-specific peculiarities. Here we present a novel tool named Peptizer that solves this adaptability issue by making use of pluggable assumptions. This research-oriented postprocessing tool also includes a graphical user interface to perform efficient manual validation of suspect identifications for optimal sensitivity recovery. Peptizer is open source software under the Apache2 license and is written in Java.

11.
Gail MH, Kessler L, Midthune D, Scoppa S. Biometrics. 1999;55(4):1137-1144.
Two approaches are described for estimating the prevalence of a disease that may have developed in a previous restricted age interval among persons of a given age at a particular calendar time. The prevalence for all those who ever developed disease is treated as a special case. The counting method (CM) obtains estimates of prevalence by dividing the estimated number of diseased persons by the total population size, taking loss to follow-up into account. The transition rate method (TRM) uses estimates of transition rates and competing risk calculations to estimate prevalence. Variance calculations are described for CM and TRM as well as for a variant of CM, called counting method times 10 (CM10), that is designed to yield more precise estimates than CM. We compare these three estimators in terms of precision and in terms of the underlying assumptions required to justify the methods. CM makes fewer assumptions but is typically less precise than TRM or CM10. For common diseases such as breast cancer, CM may be preferred because its precision is excellent even though not as high as for TRM or CM10. For less common diseases, such as brain cancer, however, TRM or CM10 and other methods that make stabilizing assumptions may be preferred to CM.
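The counting method's core ratio can be sketched as below. This is a deliberately naive illustration with invented numbers; the actual method adjusts for loss to follow-up through proper weighting rather than the simple exclusion shown here:

```python
def counting_method_prevalence(n_diseased, n_total, n_lost):
    """Naive counting-method sketch: estimated number of diseased persons
    divided by the population size, here crudely handling loss to
    follow-up by shrinking the denominator (illustration only)."""
    at_risk = n_total - n_lost
    if at_risk <= 0:
        raise ValueError("no population remaining after loss to follow-up")
    return n_diseased / at_risk

# Invented counts, purely to show the arithmetic
p = counting_method_prevalence(n_diseased=120, n_total=10_000, n_lost=400)
```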

12.
MOTIVATION: Numerical output of spotted microarrays displays censoring of pixel intensities at some software-dependent threshold. This reduces the quality of gene expression data, because it seriously violates the linearity of expression with respect to signal intensity. Statistical methods based on typically available spot summaries, together with some parametric assumptions, can suggest ways to correct for this defect.
RESULTS: A maximum likelihood approach is suggested, together with a sensible approximation to the joint density of the mean, median and variance, which are typically available to the biological end-user. The method 'corrects' the gene expression values for pixel censoring. A by-product of our approach is a comparison between several two-parameter models for pixel intensity values. It suggests that pixels separated by one or two other pixels can be considered independent draws from a Lognormal or a Gamma distribution.
AVAILABILITY: The R/S-Plus code is available at http://www.stats.gla.ac.uk/~microarray/software.
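The kind of censored-likelihood correction described can be sketched with a censored Lognormal model: clipped pixel values contribute the upper-tail probability mass rather than a density term. The data-generating parameters, threshold and grid search below are hypothetical illustrations, not the paper's actual method or software:

```python
import math
import random

def censored_lognormal_loglik(mu, sigma, data, threshold):
    """Log-likelihood of Lognormal intensities clipped at a software threshold.

    Uncensored pixels contribute the Lognormal density; clipped pixels
    contribute the upper-tail mass P(X >= threshold)."""
    ll = 0.0
    log_t = math.log(threshold)
    for x in data:
        if x >= threshold:
            zc = (log_t - mu) / sigma
            ll += math.log(max(1e-300, 0.5 * math.erfc(zc / math.sqrt(2.0))))
        else:
            z = (math.log(x) - mu) / sigma
            ll += -math.log(x * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
    return ll

# Simulate clipped pixel intensities (hypothetical parameters)
rng = random.Random(1)
true_mu, sigma, threshold = 2.0, 0.5, math.exp(2.5)
data = [min(rng.lognormvariate(true_mu, sigma), threshold) for _ in range(5000)]

# Recover mu by a coarse grid search over the censored likelihood
grid = [1.5 + 0.01 * i for i in range(101)]
best_mu = max(grid, key=lambda m: censored_lognormal_loglik(m, sigma, data, threshold))
```

Ignoring the censoring term and averaging the clipped data directly would bias the estimate downward; accounting for the clipped tail is what "corrects" the expression values.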

13.
14.
Recent increases in reported outbreaks of tick-borne diseases have led to increased interest in understanding and controlling epidemics involving these transmission vectors. Mathematical disease models typically assume constant population size and spatial homogeneity. For tick-borne diseases, these assumptions are not always valid. The disease model presented here incorporates non-constant population sizes and spatial heterogeneity, utilizing a system of differential equations that may be applied to a variety of spatial patches. We present analytical results for the one-patch version and find parameter restrictions under which the populations and infected densities reach equilibrium. We then numerically explore disease dynamics when parameters are allowed to vary spatially and temporally and consider the effectiveness of various tick-control strategies.
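A single-patch host-vector model of the general kind described above can be sketched with forward-Euler integration. The equations and parameter values here are hypothetical, in the spirit of the abstract rather than the paper's actual system:

```python
def step(state, params, dt):
    """One forward-Euler step of a hypothetical single-patch host-tick SI model.

    Hosts (Sh, Ih) are infected by bites from infected vectors (Iv);
    vectors (Sv, Iv) are infected by feeding on infected hosts (Ih).
    Birth and death terms make the population sizes model variables
    rather than fixed constants."""
    Sh, Ih, Sv, Iv = state
    b_h, b_v, beta_hv, beta_vh, d_h, d_v = params
    Nh = Sh + Ih
    dSh = b_h * Nh - beta_hv * Sh * Iv / Nh - d_h * Sh
    dIh = beta_hv * Sh * Iv / Nh - d_h * Ih
    dSv = b_v * (Sv + Iv) - beta_vh * Sv * Ih / Nh - d_v * Sv
    dIv = beta_vh * Sv * Ih / Nh - d_v * Iv
    return (Sh + dt * dSh, Ih + dt * dIh, Sv + dt * dSv, Iv + dt * dIv)

# Illustrative parameters: births balance deaths for hosts and for vectors
state = (1000.0, 1.0, 5000.0, 0.0)
params = (0.01, 0.02, 0.3, 0.2, 0.01, 0.02)
for _ in range(1000):  # integrate 100 time units with dt = 0.1
    state = step(state, params, dt=0.1)
```

A multi-patch version would couple several such systems through movement terms, which is where the spatial heterogeneity the abstract emphasizes enters.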

15.

Background  

Research involving expressed sequence tags (ESTs) is intricately coupled to the existence of large, well-annotated sequence repositories. Comparatively complete and satisfactorily annotated public sequence libraries are, however, available only for a limited range of organisms, rendering the absence of sequences and gene structure information a tangible problem for those working with taxa lacking an EST or genome sequencing project. Paralogous genes belonging to the same gene family but distinguished by derived characteristics are particularly prone to misidentification and erroneous annotation; high but incomplete levels of sequence similarity are typically difficult to interpret and have formed the basis of many unsubstantiated assumptions of orthology.

16.
1. The disparity of the spatial domains used by predators and prey is a common feature of many terrestrial avian and mammalian predatory interactions, as predators are typically more mobile and have larger home ranges than their prey. 2. Incorporating these realistic behavioural features requires formulating spatial predator-prey models having local prey mortality due to predation and its spatial aggregation, in order to generate a numerical response at timescales longer than the local prey consumption. Coupling the population dynamics occurring at different spatial scales is far from intuitive, and involves making important behavioural and demographic assumptions. Previous spatial predator-prey models resorted to intuition to derive local functional responses from non-spatial equivalents, and often involve unrealistic biological assumptions that restrict their validity. 3. We propose a hierarchical framework for deriving generic models of spatial predator-prey interactions that explicitly considers the behavioural and demographic processes occurring at different spatial and temporal scales. 4. The proposed framework highlights the circumstances wherein static spatial patterns emerge and can be a stabilizing mechanism of consumer-resource interactions.

17.
When molecules and morphology produce incongruent hypotheses of primate interrelationships, the data are typically viewed as incompatible, and molecular hypotheses are often considered to be better indicators of phylogenetic history. However, it has been demonstrated that the choice of which taxa to include in cladistic analysis as well as assumptions about character weighting, character state transformation order, and outgroup choice all influence hypotheses of relationships and may positively influence tree topology, so that relationships between extant taxa are consistent with those found using molecular data. Thus, the source of incongruence between morphological and molecular trees may lie not in the morphological data themselves but in assumptions surrounding the ways characters evolve and their impact on cladistic analysis. In this study, we investigate the role that assumptions about character polarity and transformation order play in creating incongruence between primate phylogenies based on morphological data and those supported by multiple lines of molecular data. By releasing constraints imposed on published morphological analyses of primates from disparate clades and subjecting those data to parsimony analysis, we test the hypothesis that incongruence between morphology and molecules results from inherent flaws in morphological data. To quantify the difference between incongruent trees, we introduce a new method called branch slide distance (BSD). BSD mitigates many of the limitations attributed to other tree comparison methods, thus allowing for a more accurate measure of topological similarity. We find that releasing a priori constraints on character behavior often produces trees that are consistent with molecular trees. Case studies are presented that illustrate how congruence between molecules and unconstrained morphological data may provide insight into issues of polarity, transformation order, homology, and homoplasy.

18.
Stable-isotope analysis (SIA) can act as a powerful ecological tracer with which to examine diet, trophic position and movement, as well as more complex questions pertaining to community dynamics and feeding strategies or behaviour among aquatic organisms. With major advances in the understanding of the methodological approaches and assumptions of SIA through dedicated experimental work in the broader literature coupled with the inherent difficulty of studying typically large, highly mobile marine predators, SIA is increasingly being used to investigate the ecology of elasmobranchs (sharks, skates and rays). Here, the current state of SIA in elasmobranchs is reviewed, focusing on available tissues for analysis, methodological issues relating to the effects of lipid extraction and urea, the experimental dynamics of isotopic incorporation, diet-tissue discrimination factors, estimating trophic position, diet and mixing models and individual specialization and niche-width analyses. These areas are discussed in terms of assumptions made when applying SIA to the study of elasmobranch ecology and the requirement that investigators standardize analytical approaches. Recommendations are made for future SIA experimental work that would improve understanding of stable-isotope dynamics and advance their application in the study of sharks, skates and rays.

19.
The performance of blood-processing devices largely depends on the associated fluid dynamics, which hence represents a key aspect in their design and optimization. To this aim, two approaches are currently adopted: computational fluid dynamics, which yields highly resolved three-dimensional data but relies on simplifying assumptions, and in vitro experiments, which typically involve the direct video-acquisition of the flow field and provide 2D data only. We propose a novel method that exploits space- and time-resolved magnetic resonance imaging (4D-flow) to quantify the complex 3D flow field in blood-processing devices and to overcome these limitations. We tested our method on a real device that integrates an oxygenator and a heat exchanger. A dedicated mock loop was implemented, and novel 4D-flow sequences with sub-millimetric spatial resolution and region-dependent velocity encodings were defined. Automated in-house software was developed to quantify the complex 3D flow field within the different regions of the device: region-dependent flow rates, pressure drops, paths of the working fluid and wall shear stresses were computed. Our analysis highlighted the effects of fine geometrical features of the device on the local fluid dynamics, which would likely go unobserved with current in vitro approaches. Also, the effects of non-idealities on the flow field distribution were captured, thanks to the absence of the simplifying assumptions that typically characterize numerical models. To the best of our knowledge, our approach is the first of its kind and could be extended to the analysis of a broad range of clinically relevant devices.

20.
We present a novel semiparametric method for quantitative trait loci (QTL) mapping in experimental crosses. Conventional genetic mapping methods typically assume parametric models with Gaussian errors and obtain parameter estimates through maximum-likelihood estimation. In contrast with univariate regression and interval-mapping methods, our model requires fewer assumptions and also accommodates various machine-learning algorithms. Estimation is performed with targeted maximum-likelihood learning methods. We demonstrate our semiparametric targeted learning approach in a simulation study and a well-studied barley data set.
