Similar Literature
20 similar documents found (search took 15 ms)
1.
Chemical concentrations and distributions in an aquatic food web were studied to quantify the relative importance of chemical properties versus food web processes in determining exposure dynamics of organic contaminants in aquatic ecosystems. Five organochlorines were measured (pentachlorobenzene QCB, hexachlorobenzene HCB, octachlorostyrene OCS, dichlorodiphenyldichloroethylene DDE and polychlorinated biphenyls PCBs) in the food web of Lake St. Clair. Levels of QCB in aquatic organisms ranged from 1.0 to 25 µg kg⁻¹ lipid, and levels of HCB ranged from 10 to 410 µg kg⁻¹ lipid. Higher concentrations of OCS (13 to 392 µg kg⁻¹ lipid), DDE (162 to 11 986 µg kg⁻¹ lipid) and PCB (650 to 64 900 µg kg⁻¹ lipid) were observed. Organism-water equilibrium ratios were calculated for all species sampled to quantify the importance of food web processes in regulating contaminant exposure dynamics. Correlations of organism-water equilibrium ratios with body size were not significant for QCB, HCB and OCS (P>0.1), but were significant for DDE and PCB (P<0.01). Results support the conclusion that both chemical properties and food web dynamics regulate the distribution and concentration of organochlorines in aquatic ecosystems. Food web processes are important, however, for chemicals that are not metabolized and have octanol-water partition coefficients (log Kow) greater than 5.5.
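The abstract does not define the organism-water equilibrium ratio. One common lipid-normalized formulation (an assumption here, not taken from the text) divides the observed lipid-based concentration by the concentration expected from simple octanol-water partitioning:

```latex
R_{\mathrm{eq}} = \frac{C_{\mathrm{lipid}}}{C_{\mathrm{water}}\, K_{\mathrm{ow}}}
```

On this reading, a ratio near 1 indicates passive equilibrium partitioning with the water, while ratios that increase with body size (as reported for DDE and PCB) point to food web biomagnification.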

2.
Access to public data sets is important to the scientific community as a resource to develop new experiments or validate new data. Projects such as the PeptideAtlas, Ensembl and The Cancer Genome Atlas (TCGA) offer both access to public data and a repository to share their own data. Access to these data sets is often provided through a web page form and a web service API. Access technologies based on web protocols (e.g. HTTP) have been in use for over a decade and are widely adopted across the industry for a variety of functions (e.g. search, commercial transactions, and social media). Each architecture adapts these technologies to provide users with tools to access and share data. Both commonly used web service technologies (e.g. REST and SOAP) and custom-built solutions over HTTP are utilized in providing access to research data. Providing multiple access points ensures that the community can access the data in the simplest and most effective manner for their particular needs. This article examines three common access mechanisms for web-accessible data: BioMart, caBIG, and Google Data Sources. These are illustrated by implementing each over the PeptideAtlas repository and reviewed for their suitability based on specific usages common to research. BioMart, Google Data Sources, and caBIG are each suitable for certain uses. The tradeoffs made in the development of each technology depend on the uses it was designed for (e.g. security versus speed), so an understanding of specific requirements and tradeoffs is necessary before selecting an access technology.
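A minimal sketch of the kind of REST-style query such repositories expose; the endpoint and parameter names here are hypothetical, not the actual PeptideAtlas API:

```python
from urllib.parse import urlencode

def build_query_url(base, dataset, fields, fmt="json"):
    """Assemble a REST-style query URL; parameter names are illustrative."""
    params = {"dataset": dataset, "fields": ",".join(fields), "format": fmt}
    return f"{base}?{urlencode(params)}"

# A hypothetical peptide query against an imaginary endpoint:
url = build_query_url("https://example.org/api/query", "peptideatlas",
                      ["sequence", "mass"])
```

An equivalent request in BioMart or Google Data Sources would encode the same dataset, field, and format choices in that system's own query syntax; the tradeoff the article discusses is in how much of this is standardized versus custom-built.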

3.
Over the years, we have seen a significant number of integration techniques for data warehouses to support web integrated data. However, existing work focuses largely on design concepts. In this paper, we focus on the performance of a web database application, namely an integrated web data warehouse that uses a well-defined and uniform structure to handle web information sources, including semi-structured data such as XML and documents such as HTML. Using a case study, we implemented a prototype that applies a web manipulation concept to both incoming sources and result outputs. Thus, the system can not only be operated through the web but can also integrate web data sources with structured data sources. Our main contribution is the performance evaluation of an integrated web data warehouse application, which comprises two tasks. Task one verifies the correctness of the integrated data: result sets retrieved from the integrated web data warehouse using complex and OLAP queries are checked against result sets retrieved from the existing independent data source systems. Task two measures the performance of OLAP and complex queries by investigating the source operation functions these queries use to retrieve data; this information is obtained for each query using the TKPROF utility.

4.
Individuals of the orb-weaving spider Nephila clavipes build complex webs with a region used for prey capture, the orb, and tangle webs opposite either face, the barrier webs. Barrier webs have been hypothesized to serve a variety of functions, including predator defense, and the primary function of the barrier web should be reflected in the relative size of the barrier to the orb under varying conditions of foraging success and predation risk. To investigate the effects of predation pressure and foraging success on barrier web structure, I conducted a comparative study in three disjunct populations that differed in predation risk and foraging success. Although both the orb web and the barrier webs are silk, there was no indication of a foraging-defense trade-off. Barrier web structure did not change during seasonal shifts in orb web size related to changes in prey-capture rate, and barrier web silk density and orb radius were positively correlated. The hypothesis that the construction of barrier webs is in part a response to predation pressure was supported. Barrier webs do deflect attacks by some predators, and barrier webs built by small spiders, which suffer frequent predation attempts, had a higher silk density than barrier webs built by larger individuals. Additionally, barrier web complexity decreased at a later age in areas with higher predation risk.

5.
Field experiments carried out on the nocturnal orb weaver spider, Neoscona crucifera (Araneae: Araneidae), found in deciduous hardwood forests suggest that lighted areas, where prey densities are elevated, provide cues used by the spiders to rank optimal foraging sites. Specifically, experiments were conducted to test whether spiders exhibited preferences for lighted areas where prey densities are high, maximizing their energy intake per unit of foraging time and minimizing energy expended on web building. Incandescent light bulbs of 4–60 W were used to influence prey densities, and the results indicate that when given a choice of brighter versus darker foraging areas, spiders seek lighted areas where prey densities are high. In addition, the results support the hypothesis that the size and time of web construction are drastically reduced in brighter situations.

6.
Summary: The spatial and temporal relationships between cytoplasmic filaments and the morphogenesis of the intestinal brush border were examined by transmission electron microscopy of normally developing tissue and of tissue exposed to a variety of experimental conditions in organ culture. Distinct stages in the development of the brush border were identified: (1) irregular projections of the apical plasma membrane that contain a network of microfilaments are converted to uniform projections filled with a core bundle of straight microfilaments (7–11 d of incubation); (2) rootlets form by an elongation or aggregation of filaments (11–15 d); (3) the terminal web forms first as a network of short filaments just below the apical plasma membrane, then secondarily stratifies into two layers (19 d of incubation to 3 d posthatching); (4) core filaments elongate as microvilli achieve their maturity (21 d of incubation to 5 d posthatching). Microvillus formation was not perturbed by culturing 9 d tissue in high concentrations of Ca²⁺ or Mg²⁺, either with or without the ionophore A23187. Rootlet formation was stimulated by high Mg²⁺, with or without A23187, and, for reasons unknown, by ethanol. Terminal web formation was not stimulated by Mg²⁺ or Ca²⁺, but the integrity of the terminal web was lost when 21 d embryonic tissue was cultured with EGTA or cytochalasin B. After stratification, the terminal web could not be disrupted by EGTA, but instead was aggregated to the center of the apical end of the cell.

7.
Software DSMs can be categorized into homeless and home-based systems; both have strengths and weaknesses when compared to each other. This paper introduces optimization methods that exploit the advantages and offset the disadvantages of the home-based protocol in the home-based software DSM JIAJIA. The first optimization reduces the overhead of writes to home pages through a lazy home page write detection scheme. The normal write detection scheme write-protects shared pages at the beginning of a synchronization interval, while lazy home page write detection delays write-protecting a home page until the page is first fetched in the interval, so that home pages not cached by remote processors need not be write-protected. The second optimization avoids fetching the whole page on a page fault by dividing a page into blocks and fetching only those blocks that are dirty with respect to the faulting processor. A write vector table is maintained for each shared page at its home to record, for each processor, which blocks have been modified since that processor last fetched the page. The third optimization adaptively migrates the home of a page to the processor that writes to the page most frequently, reducing twin and diff overhead. Migration information is piggybacked on barrier messages, and no additional communication is required for the migration. Performance evaluation with well-accepted benchmarks and real applications shows that these optimizations can reduce page faults, message volume, and diffs dramatically and consequently improve performance significantly.
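The write vector table behind the second optimization can be sketched as follows; this is an illustrative simulation of the bookkeeping, not JIAJIA's actual implementation, and the class and method names are hypothetical:

```python
class WriteVectorTable:
    """Per-page record, kept at the home node, of which blocks each
    remote processor has not yet seen (sketch of JIAJIA's block-level
    fetch optimization; granularity and API are illustrative)."""

    def __init__(self, n_blocks, processors):
        # Initially every block is dirty for every processor:
        # no processor has fetched the page yet.
        self.dirty = {p: set(range(n_blocks)) for p in processors}

    def record_write(self, block):
        # A write at the home makes `block` stale for every remote processor.
        for blocks in self.dirty.values():
            blocks.add(block)

    def fetch(self, processor):
        # On a page fault, ship only the blocks dirty with respect to
        # this processor, then mark them clean for it.
        blocks = sorted(self.dirty[processor])
        self.dirty[processor].clear()
        return blocks
```

The payoff is visible in the fetch sizes: after the first whole-page fetch, a processor re-fetching the page receives only the blocks written since its last fetch rather than the entire page.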

8.
Arndt, Hartmut. Hydrobiologia (1993) 255(1): 231-246
Recent investigations have shown that processes within the planktonic microbial web are of great significance for the functioning of limnetic ecosystems. However, the general importance of protozoans and bacteria as food sources for rotifers, a major component of planktonic habitats, has seldom been evaluated. Results of feeding experiments and the analysis of the food size spectrum of rotifers suggest that larger bacteria, heterotrophic flagellates and small ciliates should be a common part of the food of most rotifer species. About 10–40 per cent of rotifers' food can consist of heterotrophic organisms of the microbial web. Field experiments have indicated that rotifer grazing should generally play a minor role in bacteria consumption compared to feeding by coexisting protozoans. However, according to recent experiments regarding food selection, rotifers should be efficient predators on protozoans. Laboratory experiments have revealed that even nanophagous rotifers can feed on ciliates. Preliminary microcosm and chemostat experiments have indicated that rotifers, due to their relatively low community grazing rates compared to the growth rates of bacteria and protozoans, should generally not be able (in contrast to some cladocerans) to suppress the microbial web via grazing, though they may structure it. Filter-feeding nanophagous rotifers (e.g. brachionids) seem to be significant feeders on the smaller organisms of the microbial web (bacteria, flagellates, small ciliates), whereas grasping species (e.g. synchaetids and asplanchnids) seem to be efficient predators on larger organisms (esp. ciliates). Another important role of rotifers is their feedback effect on the microbial web. Rotifers provide degraded algae, bacteria and protozoans to the microbial web and may promote microbial activity. Additional experimental work is necessary for a better understanding of the function of rotifers in aquatic ecosystems.

9.
The transfer of processes for biotherapeutic products into final manufacturing facilities was frequently problematic during the 1980s and early 1990s, resulting in costly delays to licensure (Pisano 1997). While plant startups for this class of products can become chaotic affairs, this is not an inherent or intrinsic feature. Major classes of process startup problems have been identified and mechanisms have been developed to reduce their likelihood of occurrence. These classes of process startup problems and resolution mechanisms are the major topic of this article. With proper planning and sufficient staffing, the probability of a smooth process startup for a biopharmaceutical product can be very high: successful process performance will often be achieved within the first two full-scale process lots in the plant. The primary focus of this article is the role of the Process Development Group in helping to assure this high probability of success.

10.
In this work we focus on reducing the response time and bandwidth requirements of a high-performance web server. Much research has been done to improve web server performance by modifying the web server architecture. In contrast to these approaches, we take a different point of view, considering web server performance from an OS perspective rather than from the server architecture itself. To this end we explore two approaches. The first is running the web server within the OS kernel. We use kHTTPd as the basis for our implementation, but it has several drawbacks, such as redundant data copying, synchronous writes, and support for static content only; we propose techniques to remedy these flaws. The second approach is caching dynamic data. Dynamic data can seriously reduce the performance of web servers, and it has been considered difficult to cache because it changes far more frequently than static pages and because the web server needs to access a database to serve it. We therefore propose a solution for higher-performance web service that caches dynamic data by separating content into static and dynamic portions. Benchmark results using WebStone show that our architecture can improve server performance by up to 18 percent and can reduce user-perceived latency significantly.
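The content-separation idea can be sketched in a few lines: cache the static skeleton of a page once and re-render only the dynamic slots on each request. This is an illustrative sketch, not the paper's implementation, and the class and method names are hypothetical:

```python
class SeparatedPageCache:
    """Cache the static template of a page and substitute only the
    dynamic portions per request (sketch of content separation between
    static and dynamic data; the API is illustrative)."""

    def __init__(self):
        self.templates = {}  # url -> static template with {slot} holes

    def render(self, url, build_template, dynamic_values):
        # The expensive step (building the static skeleton, e.g. from a
        # database-backed page generator) runs once per URL; later hits
        # only fill in the cheap dynamic slots.
        if url not in self.templates:
            self.templates[url] = build_template()
        return self.templates[url].format(**dynamic_values)
```

The design choice mirrors the abstract's claim: the frequently-changing portion no longer invalidates the whole cached page, so the database is consulted only for the dynamic values.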

11.
Advances in virtualization technology have focused mainly on strengthening the isolation barrier between virtual machines (VMs) that are co-resident within a single physical machine. At the same time, a large category of communication intensive distributed applications and software components exist, such as web services, high performance grid applications, transaction processing, and graphics rendering, that often wish to communicate across this isolation barrier with other endpoints on co-resident VMs. State of the art inter-VM communication mechanisms do not adequately address the requirements of such applications. TCP/UDP based network communication tends to perform poorly when used between co-resident VMs, but has the advantage of being transparent to user applications. Other solutions exploit inter-domain shared memory mechanisms to improve communication latency and bandwidth, but require applications or user libraries to be rewritten against customized APIs—something not practical for a large majority of distributed applications. In this paper, we present the design and implementation of a fully transparent and high performance inter-VM network loopback channel, called XenLoop, in the Xen virtual machine environment. XenLoop does not sacrifice user-level transparency and yet achieves high communication performance between co-resident guest VMs. XenLoop intercepts outgoing network packets beneath the network layer and shepherds the packets destined to co-resident VMs through a high-speed inter-VM shared memory channel that bypasses the virtualized network interface. Guest VMs using XenLoop can migrate transparently across machines without disrupting ongoing network communications, and seamlessly switch between the standard network path and the XenLoop channel. In our evaluation using a number of unmodified benchmarks, we observe that XenLoop can reduce the inter-VM round trip latency by up to a factor of 5 and increase bandwidth by up to a factor of 6.

12.
Marriage Transactions: Labor, Property, Status
Marriage transactions—bridewealth, dowry, indirect dowry, and so on—and the absence of transactions have been shown to have a patterned distribution worldwide. This article attempts to account for these patterns by looking at marriage transactions as mechanisms by which households provide for labor needs, distribute property, and maintain or enhance status. A major factor in determining type of marriage transaction is the presence and type of property controlled by the household. Bridewealth circulates property and women, while dowry and indirect dowry concentrate them. The former is found where property is limited, in tribal societies and among the landless poorer classes in traditional states, whereas the latter is found in property-owning classes of landed or commercial pastoral peoples. This article pays particular attention to dowry and indirect dowry, using ethnographic and historical data to explain their functions.

13.
Biodiversity decline causes a loss of functional diversity, which threatens ecosystems through a dangerous feedback loop: This loss may hamper ecosystems’ ability to buffer environmental changes, leading to further biodiversity losses. In this context, the increasing frequency of human‐induced excessive loading of nutrients causes major problems in aquatic systems. Previous studies investigating how functional diversity influences the response of food webs to disturbances have mainly considered systems with at most two functionally diverse trophic levels. We investigated the effects of functional diversity on the robustness, that is, resistance, resilience, and elasticity, using a tritrophic—and thus more realistic—plankton food web model. We compared a non‐adaptive food chain with no diversity within the individual trophic levels to a more diverse food web with three adaptive trophic levels. The species fitness differences were balanced through trade‐offs between defense/growth rate for prey and selectivity/half‐saturation constant for predators. We showed that the resistance, resilience, and elasticity of tritrophic food webs decreased with larger perturbation sizes and depended on the state of the system when the perturbation occurred. Importantly, we found that a more diverse food web was generally more resistant and resilient but its elasticity was context‐dependent. Particularly, functional diversity reduced the probability of a regime shift toward a non‐desirable alternative state. The basal‐intermediate interaction consistently determined the robustness against a nutrient pulse despite the complex influence of the shape and type of the dynamical attractors. This relationship was strongly influenced by the diversity present and the third trophic level. 
Overall, using a food web model of realistic complexity, this study confirms the destructive potential of the positive feedback loop between biodiversity loss and robustness, by uncovering mechanisms leading to a decrease in resistance, resilience, and potentially elasticity as functional diversity declines.

14.
Frequent itemset mining is widely used as a fundamental data mining technique. Recently, a number of MapReduce-based frequent itemset mining methods have been proposed to overcome the limits on data size and mining speed of sequential methods. However, the existing MapReduce-based methods still scale poorly due to high workload skewness, large intermediate data, and large network communication overhead. In this paper, we propose BIGMiner, a fast and scalable MapReduce-based frequent itemset mining method. BIGMiner generates equal-sized sub-databases called transaction chunks and performs support counting based only on transaction chunks and bitwise operations, without generating or shuffling intermediate data. As a result, BIGMiner achieves very high scalability due to no workload skewness, no intermediate data, and small network communication overhead. Through extensive experiments using large-scale datasets of up to 6.5 billion transactions, we show that BIGMiner consistently and significantly outperforms the state-of-the-art methods without any memory problems.
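Bitwise support counting over a transaction chunk can be sketched as follows: build one bitmap per item (bit i set if transaction i contains the item), then AND the bitmaps of a candidate itemset and count the set bits. This is an illustrative sketch of the technique, not BIGMiner's actual data layout:

```python
def support_counts(chunk, candidates):
    """Count support of candidate itemsets in one transaction chunk
    using per-item bitmaps and bitwise AND (illustrative sketch)."""
    n = len(chunk)
    # One integer bitmap per item: bit i is set iff transaction i
    # contains the item.
    bitmap = {}
    for idx, transaction in enumerate(chunk):
        for item in transaction:
            bitmap[item] = bitmap.get(item, 0) | (1 << idx)
    counts = {}
    for cand in candidates:
        bits = (1 << n) - 1          # start with all transactions
        for item in cand:
            bits &= bitmap.get(item, 0)
        counts[cand] = bin(bits).count("1")  # popcount = support
    return counts
```

Because each chunk's counting needs only its own bitmaps, chunks can be processed independently by mappers with nothing to shuffle, which is the scalability argument the abstract makes.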

15.
We have developed an open software platform called Neurokernel for collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and their execution and testing on multiple Graphics Processing Units (GPUs). Neurokernel provides a programming model that capitalizes upon the structural organization of the fly brain into a fixed number of functional modules to distinguish between these modules’ local information processing capabilities and the connectivity patterns that link them. By defining mandatory communication interfaces that specify how data is transmitted between models of each of these modules regardless of their internal design, Neurokernel explicitly enables multiple researchers to collaboratively model the fruit fly’s entire brain by integration of their independently developed models of its constituent processing units. We demonstrate the power of Neurokernel’s model integration by combining independently developed models of the retina and lamina neuropils in the fly’s visual system and by demonstrating their neuroinformation processing capability. We also illustrate Neurokernel’s ability to take advantage of direct GPU-to-GPU data transfers with benchmarks that demonstrate how Neurokernel’s communication performance scales both with the number of interface ports exposed by an emulation’s constituent modules and with the total number of modules the emulation comprises.

16.
Zero-lag synchronization between distant cortical areas has been observed in a diversity of experimental data sets and between many different regions of the brain. Several computational mechanisms have been proposed to account for such isochronous synchronization in the presence of long conduction delays: Of these, the phenomenon of “dynamical relaying” – a mechanism that relies on a specific network motif – has proven to be the most robust with respect to parameter mismatch and system noise. Surprisingly, despite a contrary belief in the community, the common driving motif is an unreliable means of establishing zero-lag synchrony. Although dynamical relaying has been validated in empirical and computational studies, the deeper dynamical mechanisms and comparison to dynamics on other motifs is lacking. By systematically comparing synchronization on a variety of small motifs, we establish that the presence of a single reciprocally connected pair – a “resonance pair” – plays a crucial role in disambiguating those motifs that foster zero-lag synchrony in the presence of conduction delays (such as dynamical relaying) from those that do not (such as the common driving triad). Remarkably, minor structural changes to the common driving motif that incorporate a reciprocal pair recover robust zero-lag synchrony. The findings are observed in computational models of spiking neurons, populations of spiking neurons and neural mass models, and arise whether the oscillatory systems are periodic, chaotic, noise-free or driven by stochastic inputs. The influence of the resonance pair is also robust to parameter mismatch and asymmetrical time delays amongst the elements of the motif. We call this manner of facilitating zero-lag synchrony resonance-induced synchronization, outline the conditions for its occurrence, and propose that it may be a general mechanism to promote zero-lag synchrony in the brain.

17.
Effects of creatine supplementation on performance and training adaptations
Creatine has become a popular nutritional supplement among athletes. Recent research has also suggested that there may be a number of potential therapeutic uses of creatine. This paper reviews the available research that has examined the potential ergogenic value of creatine supplementation on exercise performance and training adaptations. Review of the literature indicates that over 500 research studies have evaluated the effects of creatine supplementation on muscle physiology and/or exercise capacity in healthy, trained, and various diseased populations. Short-term creatine supplementation (e.g. 20 g/day for 5–7 days) has typically been reported to increase total creatine content by 10–30% and phosphocreatine stores by 10–40%. Of the approximately 300 studies that have evaluated the potential ergogenic value of creatine supplementation, about 70% report statistically significant results while the remaining studies generally report non-significant gains in performance. No study reports a statistically significant ergolytic effect. For example, short-term creatine supplementation has been reported to improve maximal power/strength (5–15%), work performed during sets of maximal effort muscle contractions (5–15%), single-effort sprint performance (1–5%), and work performed during repetitive sprint performance (5–15%). Moreover, creatine supplementation during training has been reported to promote significantly greater gains in strength, fat free mass, and performance primarily of high intensity exercise tasks. Although not all studies report significant results, the preponderance of scientific evidence indicates that creatine supplementation appears to be a generally effective nutritional ergogenic aid for a variety of exercise tasks in a number of athletic and clinical populations.

18.
With ever-growing web traffic, cluster-based web servers are becoming increasingly important to the Internet's infrastructure. Making the best use of all the available resources in the cluster to achieve high performance is thus a significant research issue. In this paper, we introduce Cyclone, a cluster-based web server that can achieve nearly optimal throughput. Cyclone makes use of a novel network support mechanism called Socket Cloning (SC), together with hot object replication, to obtain high performance. SC allows an opened socket to be moved efficiently between cluster nodes. With SC, the processing of an HTTP request can be migrated to the node that has a cached copy of the requested document, obviating the need for any cache transfer between cluster nodes. To achieve better load balancing, frequently accessed documents (hot objects) are replicated to other cluster nodes. Trace-driven benchmark tests using http_load show that Cyclone outperforms existing approaches and can achieve a throughput of 14575 requests/s (89.5 MBytes/s), 98% of the available network bandwidth, with eight web server nodes.
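The content-aware routing idea underlying Socket Cloning (move the request to the node that already caches the document, rather than moving the cached data) can be sketched as a simple dispatcher. This is an illustrative sketch of the routing decision only, not Cyclone's socket-migration machinery, and the names are hypothetical:

```python
import itertools

class ContentAwareDispatcher:
    """Route each request to the cluster node that already caches the
    requested document, falling back to round-robin for cold paths
    (sketch of the idea behind Socket Cloning; API is illustrative)."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.cache_map = {}            # path -> node holding a cached copy
        self._rr = itertools.cycle(nodes)

    def dispatch(self, path):
        if path in self.cache_map:
            # Migrate the request, not the cached document.
            return self.cache_map[path]
        node = next(self._rr)
        self.cache_map[path] = node    # node caches it after serving
        return node
```

Hot object replication would extend `cache_map` to hold several candidate nodes per popular path so that requests for a hot document can be spread rather than pinned to one node.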

19.
Summary: A literature search has been conducted to see to what extent steady-state kinetics studies in the period 1965–1976 have revealed deviations from Michaelis-Menten kinetics. It was found that over 800 enzymes have been reported as giving complex curves for a variety of reasons, and a group-by-group classification of all these enzymes has been carried out, listing all the types of variations reported and the authors' explanations. In addition, for highly complex curves, we have determined the minimum degree of the rate equation. There were very few determined attempts to demonstrate adherence to the Michaelis-Menten equation over a wide variety of experimental conditions and substrate concentrations, and almost invariably detailed experimental work revealed unsuspected complexities. For these reasons, it is concluded that the assumption that most enzymes follow the Michaelis-Menten equation cannot be supported by an appeal to the literature.
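For reference, the Michaelis-Menten rate law whose adherence the survey examines is:

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]}
```

This is a 1:1 rational function of the substrate concentration [S]; the "minimum degree of the rate equation" determined for complex curves is the lowest degree of such a rational function in [S] consistent with the observed kinetics, with degrees above 1:1 constituting a deviation.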

20.
microRNAs (miRNAs) are a class of non-protein-coding functional RNAs that are thought to regulate expression of target genes by direct interaction with mRNAs. miRNAs have been identified through both experimental and computational methods in a variety of eukaryotic organisms. Though these approaches have been partially successful, there is a need to develop more tools for the detection of these RNAs, as they are also thought to be present in abundance in many genomes. In this report we describe a tool and a web server, named CID-miRNA, for identification of miRNA precursors in a given DNA sequence, utilising secondary structure-based filtering systems and an algorithm based on a stochastic context-free grammar trained on human miRNAs. CID-miRNA analyses a given sequence through a web interface for the presence of putative miRNA precursors, and the generated output lists all the potential regions that can form miRNA-like structures. In its stand-alone form it can also scan large genomic sequences for potential miRNA precursors. The web server can be accessed at http://mirna.jnu.ac.in/cidmirna/.
