Similar Documents
20 similar documents found.
1.
Next-generation optical networks will soon give users the ability to request and obtain end-to-end all-optical 10 Gbps channels on demand. Individual users will use these channels to exchange large amounts of data and to support applications for scientific collaborative work. These new applications, which expect steady transfer rates on the order of Gbps, will very likely use either TCP or a new transport-layer protocol as the end-to-end communication protocol. In this paper, we investigate the performance of TCP and newer TCP versions over High Bandwidth-Delay Product Channels (HBDPC), such as the on-demand optical channels described above. In addition, we investigate the performance of these new TCP versions over wireless networks and with respect to long-standing issues such as fairness, which is particularly important for adoption decisions. Using simulations, we show that (1) the window-based mechanism of current TCP implementations is not suitable for achieving high link utilization and (2) congestion control mechanisms, such as those used by TCP Vegas and TCP Westwood, are more appropriate and provide better performance. We also show that the new TCP proposals, although they perform better than current TCP versions, still perform worse than TCP Vegas. In addition, we found that even though these newer versions improve TCP's performance over their original counterparts in HBDPC, they still have performance problems in wireless networks and exhibit worse fairness than their older counterparts. We conclude that all these versions are still based on TCP's AIMD strategy or variants of it and therefore remain largely blind in the way they increase and decrease their transmission rates. TCP will not be able to utilize the foreseen optical infrastructure adequately and support future applications unless it is redesigned to scale.
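The abstract's closing criticism concerns TCP's additive-increase/multiplicative-decrease (AIMD) strategy. Below is a minimal sketch of the classic AIMD window update, not the paper's simulation code; the path parameters in the comment are assumed values chosen to show why AIMD scales poorly on high bandwidth-delay product channels.

```python
def aimd_update(cwnd: float, loss: bool, alpha: float = 1.0, beta: float = 0.5) -> float:
    """Classic TCP AIMD congestion-window update, applied once per RTT:
    additive increase of `alpha` segments when no loss is seen,
    multiplicative decrease by `beta` on loss."""
    if loss:
        return max(1.0, cwnd * beta)  # multiplicative decrease
    return cwnd + alpha               # additive increase

# Illustrative arithmetic (assumed values): on a 10 Gbps path with a
# 100 ms RTT and 1500-byte segments, the bandwidth-delay product is
# roughly 83,000 segments. After one halving, regaining ~41,500 segments
# at one segment per RTT takes ~41,500 RTTs, about 70 minutes of
# underutilization: the scaling problem the abstract describes.
```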

2.
Cluster Computing - HTTP adaptive streaming of video content has become an integral part of the Internet and dominates other streaming protocols and solutions. The duration of creating video content...

3.
4.
Despite the increasing number of studies dealing with interaction networks in the last few years, there is still a lack of knowledge about how their structural organization is affected by changes in binary or weighted data. To fill this gap, we collected ants foraging on plants with extrafloral nectaries at 10 sites within the Brazilian Amazon to evaluate whether the generality, vulnerability, nestedness, and modularity observed in these ant-plant networks could be affected by changes in data categories. Specifically, we used three matrices built from different data categories: (i) binary data (i.e., the presence or absence of an interaction between a plant and an ant species); (ii) frequency data (i.e., the number of times a plant species interacted with an ant species); and (iii) abundance data (i.e., the number of workers of an ant species recorded foraging on a plant species). In general, when analyzing the different matrix categories, we observed changes in the structural organization of the studied ant-plant interaction networks. Surprisingly, however, at the species level, both categories of weighted data (i.e., frequency and abundance data) seem to be equally appropriate for describing the role of ant species. Our results highlight the need to expand the discussion about data categories in ecological interaction studies to understand how different data categories may lead to different ecological interpretations.
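Since the contrast among the three data categories is central here, a small sketch may help show how the same field records yield binary, frequency, and abundance matrices. The records below are invented for illustration; the genus names and worker counts are assumptions, not data from the study.

```python
import numpy as np

# Hypothetical survey records: (plant species, ant species, workers observed).
records = [("Inga", "Crematogaster", 12), ("Inga", "Camponotus", 3),
           ("Passiflora", "Crematogaster", 5), ("Inga", "Crematogaster", 7)]

plants = sorted({p for p, _, _ in records})
ants = sorted({a for _, a, _ in records})
abundance = np.zeros((len(plants), len(ants)))  # (iii) summed worker counts
frequency = np.zeros_like(abundance)            # (ii) number of interaction events

for plant, ant, workers in records:
    i, j = plants.index(plant), ants.index(ant)
    abundance[i, j] += workers
    frequency[i, j] += 1

binary = (frequency > 0).astype(int)            # (i) presence/absence
```

Network metrics such as nestedness and modularity computed on `binary` can differ from those computed on `frequency` or `abundance`, which is exactly the dependence on data category that the study tests.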

5.
The Cavender-Felsenstein edge-length invariants for binary characters on 4-trees provide the starting point for the development of "customized" invariants for evaluating and comparing phylogenetic hypotheses. The binary character invariants may be generalized to k-valued characters without losing the quadratic nature of the invariants as functions of the theoretical frequencies f(UVXY) of observable character configurations (U at organism 1, V at 2, etc.). The key to the approach is that certain sets of these configurations constitute events which are probabilistically independent from other such sets, under the symmetric Markov change models studied. By introducing more complex sets of configurations, we find the quadratic invariants for 5-trees in the binary model and for individual edges in 6-trees or, indeed, in any size tree. The same technique allows us to formulate invariants for entire trees, but these are cubic functions for 6-trees and are higher-degree polynomials for larger trees. With k-valued characters and, especially, with large trees, the types of configuration sets (events) used in the simpler examples are too rare (i.e., their predicted frequencies are too low) to be useful, and the construction of meaningful pairs of independent events becomes an important and nontrivial task in designing invariants suited to testing specific hypotheses. In a very natural way, this approach fits in with well-known statistical methodology for contingency tables. We explore use of events such as "only transitions occur for character i (i.e., position i in a nucleic acid sequence) in subtree a" in analyzing a set of data on ribosomal RNA in the context of the controversy over the origins of archaebacteria, eubacteria, and eukaryotes.
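The f(UVXY) notation can be made concrete with a short sketch that tabulates configuration frequencies from aligned binary characters for four taxa. The alignment below is invented for illustration; the invariants themselves are polynomial functions of these frequencies and are not reproduced here.

```python
from collections import Counter

# Hypothetical binary characters (columns) for four taxa (rows).
alignment = {"taxon1": "0010110", "taxon2": "0011110",
             "taxon3": "0110100", "taxon4": "0110101"}

taxa = ("taxon1", "taxon2", "taxon3", "taxon4")
n_chars = len(alignment[taxa[0]])
counts = Counter(tuple(alignment[t][i] for t in taxa) for i in range(n_chars))

# f(UVXY): observed frequency of the configuration with state U at
# organism 1, V at organism 2, and so on.
f = {config: c / n_chars for config, c in counts.items()}
```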

6.
HiperLAN/2 (HIgh PErformance Radio Local Area Network) is a new standard from ETSI (European Telecommunications Standards Institute) for high-speed wireless LANs, interconnecting portable devices to each other and to broadband core networks based on different networking technologies such as IP, ATM, IEEE 1394, and others. This paper introduces the basic features of the HiperLAN/2 MAC protocol. It presents performance evaluation results, specifically related to the mechanisms HiperLAN/2 provides for managing bandwidth resource requests and grants. These results are assessed in terms of their flexibility and efficiency in supporting delay-sensitive traffic, such as voice and Web data traffic, which broadband wireless LANs are expected to transport.

7.
Widespread multifactor interactions present a significant challenge in determining the risk factors of complex diseases. Several combinatorial approaches, such as the multifactor dimensionality reduction (MDR) method, have emerged as promising tools for better detecting gene-gene (G×G) and gene-environment (G×E) interactions. We recently developed a general combinatorial approach, the generalized multifactor dimensionality reduction (GMDR) method, which can entertain both qualitative and quantitative phenotypes and allows for both discrete and continuous covariates to detect G×G and G×E interactions in a sample of unrelated individuals. In this article, we report the development of an algorithm that can be used to study G×G and G×E interactions in family-based designs, called pedigree-based GMDR (PGMDR). Compared to the available method, our proposed method has several major improvements, including allowing for covariate adjustment and being applicable to arbitrary phenotypes, arbitrary pedigree structures, and arbitrary patterns of missing marker genotypes. Our Monte Carlo simulations provide evidence that the PGMDR method is superior to the MDR pedigree disequilibrium test (PDT) in identifying epistatic loci. Finally, we applied our proposed approach to a genetic data set on tobacco dependence and found a significant interaction between two taste receptor genes (i.e., TAS2R16 and TAS2R38) affecting nicotine dependence.
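For readers unfamiliar with the MDR family of methods that GMDR and PGMDR generalize, the sketch below shows the core MDR step: pooling multilocus genotypes into cells and labeling each cell high- or low-risk by its case/control ratio. This is a simplified illustration of the classic MDR idea, not the PGMDR algorithm; the handling of cells with no controls is an assumption.

```python
import numpy as np

def mdr_risk_labels(genotypes, case, threshold=1.0):
    """Pool samples into multilocus genotype cells and label each cell
    'high' risk when its case/control ratio meets the threshold,
    collapsing a multifactor combination to one dimension."""
    cells = {}
    for g, y in zip(map(tuple, genotypes), case):
        cases, controls = cells.get(g, (0, 0))
        cells[g] = (cases + y, controls + (1 - y))
    return {g: "high" if controls == 0 or cases / controls >= threshold else "low"
            for g, (cases, controls) in cells.items()}

# genotypes: an (n_samples, 2) array of genotype codes for two loci,
# e.g., a TAS2R16 x TAS2R38 pair; case: 1 = affected, 0 = unaffected.
genotypes = np.array([[0, 1], [0, 1], [2, 1], [2, 1], [0, 0]])
case = np.array([1, 1, 0, 1, 0])
print(mdr_risk_labels(genotypes, case))
```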

8.
Yang XL, Robinson H, Gao YG, Wang AH. Biochemistry 2000, 39(36): 10950-10957
The binding of a macrocyclic bisacridine and the antitumor intercalator ametantrone to DNA has been studied. We carried out X-ray diffraction analyses of the complexes between both intercalators and CGTACG. We determined the crystal structure, by the multiple-wavelength anomalous diffraction (MAD) method, of the bisacridine complexed with CGTA[br(5)C]G at 1.8 Å resolution. The refined native crystal structure at 1.1 Å resolution (space group C222, a = 29.58 Å, b = 54.04 Å, c = 40.22 Å, R-factor = 0.163) revealed that only one acridine of the bisacridine drug binds at the C5pG6 step of the DNA, with the other acridine plus both linkers completely disordered. Surprisingly, both terminal G·C base pairs are unraveled. The C1 nucleotide is disordered, and the G2 base is bridged to its own phosphate P2 through a hydrated Co²⁺ ion. G12 is swung toward the minor groove with its base stacked over the backbone. The C7 nucleotide is flipped away from the duplex part and base-paired to a 2-fold symmetry-related G6. The central four base pairs adopt the B-DNA conformation. An unusual intercalation platform is formed by bringing four complexes together (involving the 222 symmetry) such that the intercalator cavity is flanked by two sets of G·C base pairs (i.e., C5·G8 and G6·C7) on each side, joined together by G6·G8 tertiary base-pairing interactions. In the bisacridine-CGTACG complex, the intercalation platform is intercalated with two acridines, whereas in the ametantrone-CGTACG complex, only one ametantrone is bound. NMR titration of the bisacridine into AACGATCGTT suggests that the bisacridine prefers to bridge more than one DNA duplex by intercalating each acridine into a different duplex. The results may be relevant to understanding the binding of certain intercalators to DNA structures associated with the quadruplex helix and the Holliday junction.

9.
In wireless network research, simulation is the principal technique for investigating and validating network behavior. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. Mobility is an integral part of wireless communications, and the key role of a mobility model is to mimic real-life traveling patterns. The performance of routing protocols and mobility management strategies, e.g., paging, registration, and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model that exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause-time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and acts as a mini simulator that exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, as users can extract meaningful information with a single mouse click. It is noteworthy that capturing dynamic mobility patterns through CPN was the most challenging part of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state-space methods allow us to algorithmically derive the system behavior and rectify the errors of our proposed model.
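As a point of comparison for the issues SHER addresses, here is a minimal sketch of the classic random-waypoint model, whose sudden stops, memoryless movements, and pause-time dependency are among the six problems listed above. This is the textbook baseline, not the CPN-based SHER model itself; the area, speed, and pause ranges are assumed parameters.

```python
import random

def random_waypoint(steps, area=(1000.0, 1000.0), speed=(1.0, 20.0), pause=(0.0, 60.0)):
    """Generate one random-waypoint trace, one point per time unit:
    pick a uniform destination and speed, move in a straight line,
    pause, and repeat; each leg is independent of the last (memoryless)."""
    x, y = random.uniform(0, area[0]), random.uniform(0, area[1])
    trace = []
    while len(trace) < steps:
        dx, dy = random.uniform(0, area[0]), random.uniform(0, area[1])
        v = random.uniform(*speed)
        dist = ((dx - x) ** 2 + (dy - y) ** 2) ** 0.5
        n = max(1, int(dist / v))
        for k in range(1, n + 1):  # straight-line leg toward the waypoint
            trace.append((x + (dx - x) * k / n, y + (dy - y) * k / n))
        x, y = dx, dy
        trace.extend([(x, y)] * int(random.uniform(*pause)))  # sudden stop
    return trace[:steps]
```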

10.
Clusters of Personal Computers (CoPs) offer excellent compute performance at a low price. Workstations with Gigabit to the Desktop can give workers access to a new class of multimedia applications. Networking PCs with their modest memory-subsystem performance requires either extensive hardware acceleration for protocol processing or, alternatively, a highly optimized software system to reach the full Gigabit/sec speeds in applications. So far this could not be achieved, since correctly defragmenting packets of the various communication protocols in hardware remains an extremely complex task, which has prevented a clean zero-copy solution in software. We propose and implement a defragmenting driver based on the same speculation techniques that are commonly used to improve processor performance through instruction-level parallelism. With a speculative implementation we are able to eliminate the last copy of a TCP/IP stack even on simple, existing Ethernet NIC hardware. We integrated our network interface driver into the Linux TCP/IP protocol stack and added the well-known page remapping and fast buffer strategies to reach an overall zero-copy solution. An evaluation with measurement data indicates three trends: (1) for Gigabit Ethernet, the CPU load of communication processing can be reduced significantly, (2) speculation will succeed in most cases, and (3) the performance for burst transfers can be improved by a factor of 1.5-2 over the standard communication software in Linux 2.2. Finally, we suggest simple hardware improvements to increase the speculation success rates based on our implementation.

11.
One of the principal characteristics of large-scale wireless sensor networks is their distributed, multi-hop nature. Due to this characteristic, applications such as query propagation rely regularly on network-wide flooding for information dissemination. If the transmission radius is not set optimally, the flooded packet may hold the transmission medium for longer periods than necessary, reducing overall network throughput. We analyze the impact of the transmission radius on the average settling time, the time at which all nodes in the network finish transmitting the flooded packet. Our analytical model takes into account the behavior of the underlying contention-based MAC protocol, as well as edge effects and the size of the network. We show that for large wireless networks there exists an intermediate transmission radius which minimizes the settling time, corresponding to an optimal tradeoff between reception and contention times. We also explain how physical propagation models affect small wireless networks and why no intermediate optimal transmission radius is observed in these cases. The mathematical analysis is supported and validated through extensive simulations.

Marco Zuniga is currently a PhD student in the Department of Electrical Engineering at the University of Southern California. He received his Bachelor's degree in Electrical Engineering from the Pontificia Universidad Catolica del Peru in 1998, and his Master's degree in Electrical Engineering from the University of Southern California in 2002. His interests are in the area of wireless sensor networks in general, and more specifically in studying the interaction among different layers to improve the performance of these networks. He is a member of IEEE and the Phi Kappa Phi honor society.

Bhaskar Krishnamachari is an Assistant Professor in the Department of Electrical Engineering at the University of Southern California (USC), where he also holds a joint appointment in the Department of Computer Science. He received his Bachelor's degree in Electrical Engineering with a four-year full-tuition scholarship from The Cooper Union for the Advancement of Science and Art in 1998. He received his Master's degree and his Ph.D. in Electrical Engineering from Cornell University in 1999 and 2002, under a four-year university graduate fellowship. Dr. Krishnamachari's previous research has included work on critical density thresholds in wireless networks, data-centric routing in sensor networks, mobility management in cellular telephone systems, multicast flow control, heuristic global optimization, and constraint satisfaction. His current research is focused on the discovery of fundamental principles and the analysis and design of protocols for next-generation wireless sensor networks. He is a member of IEEE, ACM, and the Tau Beta Pi and Eta Kappa Nu engineering honor societies.
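The reception/contention tradeoff behind the intermediate optimal radius in the abstract above can be illustrated with a toy model (not the paper's analytical model): hop count shrinks roughly as 1/r while per-hop MAC contention grows with the neighbor count, which scales with r². All constants below are assumed values chosen only to exhibit the interior minimum.

```python
import numpy as np

def settling_time(r, side=1000.0, density=1e-4, t_rx=1.0, t_mac=0.05):
    """Toy settling-time model: (hops) x (reception + contention per hop).
    Hops fall as side/r; contention grows with ~density * pi * r^2 neighbors."""
    hops = side / r
    neighbors = density * np.pi * r ** 2
    return hops * (t_rx + t_mac * neighbors)

radii = np.linspace(50.0, 500.0, 200)
best = radii[np.argmin(settling_time(radii))]  # interior optimum, ~250 here
```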

12.
This study compared physiological responses to 2 high-speed resistance training (RT) protocols in untrained adults. Both RT protocols included 12 repetitions of the same 6 exercises, differing only in continuous (1 × 12) or discontinuous (2 × 6) mode. In the discontinuous mode, there was a 15-second rest interval between sets. We hypothesized that the 2 × 6 protocol was less physiologically demanding than the 1 × 12 protocol. Fifteen untrained adults randomly performed the protocols on 2 different days while heart rate (HR), blood lactate (BL), rating of perceived exertion (RPE), and concentric-phase mean power (CPMP) were measured. Significantly lower values (mean ± SE) were seen with the discontinuous protocol for exercise HR (119 ± 5 vs. 124 ± 5 beats·min⁻¹), BL (5.7 ± 0.5 vs. 6.7 ± 0.3 mmol/L), and RPE (5.4 ± 0.3 vs. 5.8 ± 0.4) (p < 0.05). CPMP tended to be higher in the discontinuous protocol, especially for the last 2 repetitions. The discontinuous protocol was significantly less physiologically demanding, although similar or higher CPMP values were obtained. These findings may help foster long-term adherence to RT in untrained individuals. However, future studies are needed to compare the physiological adaptations induced by these 2 RT protocols.

13.
Particle tracking in living systems requires low light exposure and short exposure times to avoid phototoxicity and photobleaching and to fully capture particle motion with high-speed imaging. Low-excitation light comes at the expense of tracking accuracy. Image restoration methods based on deep learning dramatically improve the signal-to-noise ratio in low-exposure data sets, qualitatively improving the images. However, it is not clear whether images generated by these methods yield accurate quantitative measurements such as diffusion parameters in (single) particle tracking experiments. Here, we evaluate the performance of two popular deep learning denoising software packages for particle tracking, using synthetic data sets and movies of diffusing chromatin as biological examples. With synthetic data, both supervised and unsupervised deep learning restored particle motions with high accuracy in two-dimensional data sets, whereas artifacts were introduced by the denoisers in three-dimensional data sets. Experimentally, we found that, while both supervised and unsupervised approaches improved tracking results compared with the original noisy images, supervised learning generally outperformed the unsupervised approach. We find that nicer-looking image sequences are not synonymous with more precise tracking results and highlight that deep learning algorithms can produce deceiving artifacts with extremely noisy images. Finally, we address the challenge of selecting parameters to train convolutional neural networks by implementing a frugal Bayesian optimizer that rapidly explores multidimensional parameter spaces, identifying networks yielding optimal particle tracking accuracy. Our study provides quantitative outcome measures of image restoration using deep learning. We anticipate broad application of this approach to critically evaluate artificial intelligence solutions for quantitative microscopy.
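A typical quantitative outcome measure in this setting is the diffusion coefficient recovered from tracks before and after denoising. The sketch below estimates D from a single 2-D track via the mean squared displacement, using the standard relation MSD(t) = 4Dt for free diffusion in two dimensions; the lag range used for the fit is an assumption.

```python
import numpy as np

def diffusion_coefficient(track, dt, max_lag=10):
    """Estimate D from an (n, 2) track via the mean squared displacement:
    for free 2-D diffusion, MSD(t) = 4*D*t, so D is the fitted slope / 4."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                    for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]
    return slope / 4.0

# Comparing D estimated from noisy, denoised, and ground-truth synthetic
# tracks quantifies whether a denoiser preserves the motion statistics.
```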

14.
15.
The purpose of this study was to evaluate the early-phase muscular performance adaptations to 5 weeks of traditional (TRAD) and eccentric-enhanced (ECC+) progressive resistance training and to compare the acute postexercise total testosterone (TT), bioavailable testosterone (BT), growth hormone (GH), and lactate responses in TRAD- and ECC+-trained individuals. Twenty-two previously untrained men (22.1 ± 0.8 years) completed 1 familiarization and 2 baseline bouts, 15 exercise bouts (i.e., 3 times per week for 5 weeks), and 2 postintervention testing bouts. Anthropometric and 1 repetition maximum (1RM) measurements (i.e., bench press and squat) were assessed during both baseline and postintervention testing. Following baseline testing, participants were randomized into TRAD (4 sets of 6 repetitions at 52.5% 1RM) or ECC+ (3 sets of 6 repetitions at 40% 1RM concentric and 100% 1RM eccentric) groups and completed the 5-week progressive resistance training protocols. During the final exercise bout, blood samples acquired at rest and following exercise were assessed for serum TT, BT, GH, and blood lactate. Both groups experienced similar increases in bench press (approximately 10%) and squat (approximately 22%) strength during the exercise intervention. At the conclusion of training, postexercise TT and BT concentrations increased (approximately 13% and 21%, respectively, p < 0.05) and GH concentrations increased (approximately 750-1200%, p < 0.05) acutely following exercise in both protocols. Postexercise lactate accumulation was similar between the TRAD (5.4 ± 0.4) and ECC+ (5.6 ± 0.4) groups; however, the ECC+ group's lactate concentrations were significantly lower than those of the TRAD group 30 to 60 minutes into recovery. In conclusion, TRAD training and ECC+ training appear to result in similar muscular strength adaptations and neuroendocrine responses, while postexercise lactate clearance is enhanced following ECC+ training.

16.
Fluorescence in situ hybridization (FISH) is a widely used method to detect environmental microorganisms. The standard protocol is typically conducted at a temperature of 46 °C and a hybridization time of 2 or 3 h, using the fluorescence signal intensity as the sole parameter to evaluate the performance of FISH. This paper reports our results for optimizing the conditions of FISH using rRNA-targeted oligonucleotide probes and flow cytometry and the application of these protocols to the detection of Escherichia coli in seawater spiked with E. coli culture. We obtained two types of optimized protocols for FISH, which showed rapid results with a hybridization time of less than 30 min, with performance equivalent to or better than the standard protocol in terms of the fluorescence signal intensity and the FISH hybridization efficiency (i.e., the percentage of hybridized cells giving satisfactory fluorescence intensity): (i) one-step FISH (hybridization is conducted at 60 to 75 °C for 30 min) and (ii) two-step FISH (pretreatment in a 90 °C water bath for 5 min and a hybridizing step at 50 to 55 °C for 15 to 20 min). We also found that satisfactory fluorescence signal intensity does not necessarily guarantee satisfactory hybridization efficiency and the tightness of the targeted population when analyzed with a flow cytometer. We subsequently successfully applied the optimized protocols to E. coli-spiked seawater samples, i.e., obtained flow cytometric signatures where the E. coli population was well separated from other particles carrying fluorescence from nonspecific binding to probes or from autofluorescence, and had a good recovery rate of the spiked E. coli cells (90%).

17.
Community genetics examines how genotypic variation within a species influences the associated ecological community. The inclusion of additional environmental and genotypic factors is a natural extension of the current community genetics framework. However, the extent to which the presence of and genetic variation in associated species influences interspecific interactions (i.e., genotype × genotype × environment [G×G×E] interactions) has been largely ignored. We used a community genetics approach to study the interaction of barley and aphids in the absence and presence of rhizosphere bacteria. We designed a matrix of aphid genotype and barley genotype combinations and found a significant G×G×E interaction, indicating that the barley-aphid interaction is dependent on the genotypes of the interacting species as well as the biotic environment. We discuss the consequences of the strong G×G×E interaction found in our study in relation to its impact on the study of species interactions in a community context.

18.
In this paper, we address the multiple peak alignment problem in sequential data analysis with an approach based on Gaussian scale-space theory. We assume that multiple sets of detected peaks are the observed samples of a set of common peaks. We also assume that the locations of the observed peaks follow unimodal distributions (e.g., a normal distribution) with their means equal to the corresponding locations of the common peaks and variances reflecting the extent of their variation. Under these assumptions, we convert the problem of estimating the locations of an unknown number of common peaks from multiple sets of detected peaks into the much simpler problem of searching for local maxima in the scale-space representation. The optimization of the scale parameter is achieved using an energy minimization approach. We compare our approach with a hierarchical clustering method using both simulated data and real mass spectrometry data. We also demonstrate the merit of extending the binary peak detection method (i.e., a candidate is considered either a peak or a nonpeak) with a quantitative, score-based approach (i.e., we assign to each candidate a probability of being a peak).
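The core idea, converting alignment into a search for local maxima in scale space, can be sketched as follows: pool all detected peak locations, smooth the pooled histogram with a Gaussian kernel (one slice of the scale-space representation at a fixed scale), and read off the local maxima as common-peak estimates. The paper optimizes the scale parameter by energy minimization; the fixed sigma and grid step below are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def common_peak_locations(peak_lists, grid_step=0.01, sigma_bins=50):
    """Pool detected peak locations from all samples, histogram them on
    a fine grid, smooth at one fixed scale, and return local maxima as
    the estimated common peak locations."""
    pooled = np.concatenate(peak_lists)
    edges = np.arange(pooled.min(), pooled.max() + grid_step, grid_step)
    hist, edges = np.histogram(pooled, bins=edges)
    smooth = gaussian_filter1d(hist.astype(float), sigma_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])   # one center per histogram bin
    interior = (smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] >= smooth[2:])
    return centers[1:-1][interior]
```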

19.
HF/6-31G** and molecular dynamics (MD) simulations were used to evaluate the performance of different atomic charge sets (i.e., Mulliken, Löwdin, and electrostatic potential derived charges, ESP) in heparin simulations. HF/3-21G calculations were also used to study the NMR conformation of the IdoA residue. The results thus obtained indicated that ESP and Löwdin charges gave the best results in heparin simulations, followed by Mulliken charges, and that the minimum-energy conformation of IdoA can differ from that observed by NMR spectroscopy by less than 1 Å. However, it was found that this small conformational modification is capable of inducing a change of almost 200 kJ/mol in the interactions of heparin with the surrounding environment, which is a meaningful amount of energy in the context of ligand-receptor interactions. This information can potentially be of great relevance in the design of heparin-derived antithrombotic compounds.

20.
Structural and functional changes ensue in cardiac cell networks when cells are guided by three-dimensional scaffold topography. We report enhanced synchronous pacemaking activity in association with slow diastolic rise in intracellular Ca²⁺ concentration ([Ca²⁺]ᵢ) in cell networks grown on microgrooved scaffolds. Topography-driven changes in cardiac electromechanics were characterized by the frequency dependence of [Ca²⁺]ᵢ in syncytial structures formed of ventricular myocytes cultured on microgrooved elastic scaffolds (G). Cells were electrically paced at 0.5-5 Hz, and [Ca²⁺]ᵢ was determined using microscale ratiometric (fura-2) fluorescence. Compared with flat (F) controls, the G networks exhibited elevated diastolic [Ca²⁺]ᵢ at higher frequencies, increased systolic [Ca²⁺]ᵢ across the entire frequency range, and steeper restitution of Ca²⁺ transient half-width (n = 15 and 7 for G and F, respectively, P < 0.02). Significant differences in the frequency response of force-related parameters were also found, e.g., overall larger total area under the Ca²⁺ transients and faster adaptation of relaxation time to pacing rate (P < 0.02). Altered [Ca²⁺]ᵢ dynamics were paralleled by higher occurrence of spontaneous Ca²⁺ release and increased sarcoplasmic reticulum load (P < 0.02), indirectly assessed by caffeine-triggered release. Electromechanical instabilities, i.e., Ca²⁺ and voltage alternans, were more often observed in G samples. Taken together, these findings 1) represent some of the first functional electromechanical data for this in vitro system and 2) demonstrate direct influence of the microstructure on cardiac function and susceptibility to arrhythmias via Ca²⁺-dependent mechanisms. Overall, our results substantiate the idea of guiding cellular phenotype by cellular microenvironment, e.g., scaffold design in the context of tissue engineering.
