20 similar documents retrieved (search time: 15 ms)
1.
Today the use of individually ventilated cage (IVC) systems is common, especially for housing transgenic rodents. Typically, a ventilation rate of 40 to 50 air changes per hour is applied in each cage, but some systems apply up to 120 air changes per hour. To reach this rate, the air is blown into the cage at a relatively high speed, although at the animals' level most systems ventilate with an air speed of approximately 0.2 m/s. In the present paper, two studies were conducted: one analysing whether an air speed below 0.2 m/s or just above 0.5 m/s affects the rats, and another analysing whether air-change rates of 50, 80 and 120 per hour affect the rats. In both studies, monitoring of preferences as well as physiological parameters such as heart rate and blood pressure was used to show the ability of the animals to register the different conditions and to avoid them if possible. Air speeds inside the cage as high as 0.5 m/s could not be shown to affect the rats, whereas the number of air changes in each cage should be kept below 80 per hour to avoid impacts on physiology (heart rate and systolic blood pressure). The rats also preferred cages with fewer than 80 air changes per hour when given the opportunity to choose, as shown in the preference test.
2.
It is shown that performance evaluation using a vector-valued objective function, whose components are the product productivity, the product concentration, and the substrate conversion, is quite useful for gaining deeper insight into the development of new processes and for determining the operating point. Particular attention is focused on ethanol fermentation using a variety of systems: the conventional chemostat system, the multiple-fermentor system, the cell recycle system, the extractive fermentor system, and the immobilized cell system. The contour map and the projection of the noninferior set are used to investigate the performance improvement and the trade-offs among performance indexes.
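Since the abstract centers on the noninferior (Pareto-optimal) set of a three-component objective, a minimal sketch of how such a set can be filtered from candidate operating points may help; the candidate values and the assumption that all three indexes are maximized are illustrative, not taken from the paper.

```python
# Minimal sketch: filtering the noninferior (Pareto-optimal) set of operating
# points, each scored by the three performance indexes named in the abstract.
# The candidate values below are illustrative placeholders, not data from the paper.

def dominates(a, b):
    """True if point a is at least as good as b in every index and strictly
    better in at least one; all three indexes (productivity, product
    concentration, substrate conversion) are assumed to be maximized."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def noninferior_set(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (productivity [g/L/h], concentration [g/L], conversion [-]) -- hypothetical values
candidates = [(2.1, 45.0, 0.80), (1.6, 60.0, 0.95), (2.4, 38.0, 0.70), (1.5, 50.0, 0.85)]
print(noninferior_set(candidates))  # the trade-off frontier among the three indexes
```

Projecting this set onto pairs of indexes gives the kind of trade-off plots the abstract refers to.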
3.
Embolus transport simulations are performed to investigate the dependence of inferior vena cava (IVC) filter embolus-trapping performance on IVC anatomy. Simulations are performed using a resolved two-way coupled computational fluid dynamics/six-degree-of-freedom approach. Three IVC geometries are studied: a straight-tube IVC, a patient-averaged IVC, and a patient-specific IVC reconstructed from medical imaging data. Additionally, two sizes of spherical emboli (3 and 5 mm in diameter) and two IVC orientations (supine and upright) are considered. The embolus-trapping efficiency of the IVC filter is quantified for each combination of IVC geometry, embolus size, and IVC orientation by performing 2560 individual simulations. The predicted embolus-trapping efficiencies of the IVC filter range from 10 to 100%, and IVC anatomy is found to have a significant influence on the efficiency results (P < 0.0001). In the upright IVC orientation, greater secondary flow in the patient-specific IVC geometry decreases the filter embolus-trapping efficiency by 22-30 percentage points compared with the efficiencies predicted in the idealized straight-tube or patient-averaged IVCs. In a supine orientation, the embolus-trapping efficiency of the filter in the idealized IVCs decreases by 21-90 percentage points compared with the upright orientation. In contrast, the embolus-trapping efficiency is insensitive to IVC orientation in the patient-specific IVC. In summary, simulations predict that anatomical features of the IVC that are often neglected in the idealized models used for benchtop testing, such as iliac vein compression and anteroposterior curvature, generate secondary flow and mixing in the IVC and influence the embolus-trapping efficiency of IVC filters. Accordingly, inter-subject variability studies and additional embolus transport investigations that consider patient-specific IVC anatomy are recommended for future work.
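For readers unfamiliar with how an embolus-trapping efficiency is summarized from repeated release trials such as the 2560 simulations above, here is a minimal sketch; the outcome counts and the normal-approximation confidence interval are illustrative assumptions, not the study's statistics.

```python
# Minimal sketch: trapping efficiency and its uncertainty from repeated
# embolus-release trials; the trial outcomes here are hypothetical.
import math

def trapping_efficiency(outcomes):
    """outcomes: list of booleans, True if the embolus was trapped by the filter.
    Returns (efficiency, 95% CI lower bound, 95% CI upper bound)."""
    n, k = len(outcomes), sum(outcomes)
    p = k / n
    # normal-approximation 95% confidence interval for a binomial proportion
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

eff, lo, hi = trapping_efficiency([True] * 70 + [False] * 30)  # e.g. 70/100 trapped
print(f"efficiency = {eff:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```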
4.
Background: Previous studies show various results obtained from different motif finders for an identical dataset. This is largely because these tools use different strategies and possess unique features for discovering motifs. Hence, using multiple tools and methods has been suggested, because the motifs commonly reported by them are more likely to be biologically significant. Results: The common significant motifs from multiple tools can be obtained using the MOTIFSIM tool. In this work, we evaluated the performance of MOTIFSIM in three aspects. First, we compared the pair-wise comparison technique of MOTIFSIM with the un-gapped Smith-Waterman algorithm and four common distance metrics: average Kullback-Leibler, average log-likelihood ratio, Chi-square distance, and Pearson correlation coefficient. Second, we compared the performance of MOTIFSIM with the RSAT Matrix-clustering tool for motif clustering. Lastly, we evaluated the performance of nineteen motif finders and the reliability of MOTIFSIM for identifying the common significant motifs from multiple tools. Conclusions: The pair-wise comparison results reveal that MOTIFSIM attains better performance than the un-gapped Smith-Waterman algorithm and the four distance metrics. The clustering results also demonstrate that MOTIFSIM achieves similar or even better performance than RSAT Matrix-clustering. Furthermore, the findings indicate that, when motif detection does not require a special tool for a specific motif type, using multiple motif finders and combining their results with MOTIFSIM to obtain the common significant motifs improves DNA motif detection.
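As a rough illustration of two of the distance metrics named above (average Kullback-Leibler and Pearson correlation coefficient) applied to aligned position weight matrix columns, a generic sketch follows; the symmetrized KL form and the toy PWMs are assumptions, not MOTIFSIM's implementation.

```python
# Minimal sketch: column-wise comparison of two equal-width position weight
# matrices (PWMs); values are illustrative toy motifs.
import math

def avg_kl(pwm_a, pwm_b, eps=1e-9):
    """Symmetrized Kullback-Leibler divergence averaged over aligned columns."""
    total = 0.0
    for col_a, col_b in zip(pwm_a, pwm_b):
        total += 0.5 * sum((a + eps) * math.log((a + eps) / (b + eps)) +
                           (b + eps) * math.log((b + eps) / (a + eps))
                           for a, b in zip(col_a, col_b))
    return total / len(pwm_a)

def avg_pearson(pwm_a, pwm_b):
    """Pearson correlation averaged over aligned columns (A, C, G, T order)."""
    def pearson(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)
    return sum(pearson(a, b) for a, b in zip(pwm_a, pwm_b)) / len(pwm_a)

pwm1 = [[0.7, 0.1, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1]]  # two-column toy motifs
pwm2 = [[0.6, 0.2, 0.1, 0.1], [0.1, 0.6, 0.2, 0.1]]
print(avg_kl(pwm1, pwm2), avg_pearson(pwm1, pwm2))
```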
5.
The use of individually ventilated cage (IVC) systems has become more common worldwide. The various systems are becoming more and more sealed in order to protect the animals against infections and the staff against allergens, which, however, may lead to problematic CO2 concentrations if the cages are left unventilated. In this study it is shown that, depending on how tight the cage is and on the number of animals housed in each cage, the CO2 concentration inside the cage will rise within 2 h to levels between 2 and 8%.
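A simple well-mixed mass balance reproduces the kind of build-up reported; the cage volume, per-animal CO2 generation rate and leakage air-exchange rate below are assumed values for illustration, not measurements from the study.

```python
# Minimal sketch: CO2 build-up in an unventilated but slightly leaky cage,
# modeled as a well-mixed volume. All parameter values are assumptions.
import math

def co2_fraction(t_h, n_animals, v_cage_l=7.0, gen_l_per_h=0.06,
                 leak_ach=0.5, ambient=0.0004):
    """CO2 volume fraction after t_h hours:
    C(t) = C_ss + (C0 - C_ss) * exp(-k * t), with steady state
    C_ss = ambient + generation / (leakage rate * cage volume)."""
    c_ss = ambient + (n_animals * gen_l_per_h) / (leak_ach * v_cage_l)
    return c_ss + (ambient - c_ss) * math.exp(-leak_ach * t_h)

for n in (2, 5):
    print(f"{n} animals after 2 h: {co2_fraction(2.0, n):.1%} CO2")
```

With these assumed parameters the model lands in the 2-8% range the study reports; a tighter cage (lower leakage) or more animals pushes the level higher.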
6.
In Catalonia (Spain), a variety of different systems have been built to naturally treat liquid residues from small communities. Some of these wastewater treatment plants (WWTPs) include constructed wetlands with horizontal subsurface flow (HSSF) as secondary treatment. The present study describes and characterizes the performance of 11 WWTPs with secondary HSSF constructed wetland systems after an initial operating period of 8 years. The effluent concentrations of Biochemical Oxygen Demand (BOD5), Total Suspended Solids (TSS), Total Nitrogen (TN) and Total Phosphorus (TP) were statistically analyzed, and removal efficiencies for all WWTPs, including all stages of treatment, were calculated. The accumulated probability functions of those parameters were evaluated to determine the influence of two different types of polishing units on the overall performance: (a) lagoon systems only and (b) lagoon systems with HSSF. The statistical analysis indicates good performance for BOD5 and TSS. In the first case, mean concentrations below 25 mg/L were found in 9 of the 11 plants analyzed and removal efficiencies between 78 and 96% were observed. In the second case, mean concentrations below 35 mg/L were found in 8 of the 11 plants, and removal efficiencies were between 65 and 88%. For the nutrients, the removal efficiencies for TN and TP were in the range of 48-66% and 39-58%, respectively. Additionally, the analysis of the influence of the polishing units did not show a significant improvement (α > 0.05) for any parameter in the wetland systems without a subsequent polishing unit. However, in the wetland systems with an HSSF polishing unit, a significant improvement (α < 0.05) was found for the effluent's BOD5, TN and TP concentrations, but with no significant contribution to TSS removal.
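The removal efficiencies quoted above follow from the standard influent/effluent comparison; a minimal sketch, with illustrative concentrations rather than plant data:

```python
# Minimal sketch of the removal-efficiency calculation behind the percentages
# reported above; the concentrations are illustrative, not plant measurements.
def removal_efficiency(c_in, c_out):
    """Percentage of the influent concentration removed by the treatment train."""
    return 100.0 * (c_in - c_out) / c_in

print(removal_efficiency(c_in=250.0, c_out=20.0))  # e.g. BOD5 in mg/L -> 92.0%
```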
7.
This article treats several performance management decision problems in flexible manufacturing systems (FMSs). This work differs from a number of other studies in that we allow the processing rates at the machines to be varied, and the system has to meet a given throughput goal per unit time. The managerial decision options modeled here include part routing and allocation of tasks to machines, work-in-progress (WIP) levels, capacity expansions, tool-type selection, the setting of throughput goals, and multiperiod production planning. We discuss and explain the insights and implications, partly nonintuitive, gained from our investigations. Finally, extensive numerical evaluations are included to illustrate the economic and performance impact of the various performance management alternatives. These results demonstrate that substantial economic benefits can be achieved by careful tuning of the FMS operational parameters.
11.
BACKGROUND: The performance of QuantiBRITE phycoerythrin (PE) beads for standardizing quantitation in terms of antibodies bound per cell (ABC) was evaluated by measuring precision, variation across multiple instruments, and variation across time. METHODS: For CD4 quantitation, whole blood was stained with a two-color CD4 reagent using a no-wash/no-lyse format. For CD69 quantitation, whole blood was activated with either phorbol myristate acetate (PMA) or CD3 beads and then stained with a three-color CD69 reagent using a lyse/no-wash format. RESULTS: Across 20 normal donors, the mean CD4 ABC was 51,000. Within-assay precision on quantitation of CD4 ABC on T cells had a coefficient of variation (CV) of <1.0%. Across multiple flow cytometers, quantitation of CD4 ABC had a CV of <5.0%. Within-donor CV of CD4 ABC on 20 donors across 2 months ranged from 1.3% to 3.2%. Within-assay precision on quantitation of CD69 on T cells activated with either PMA or CD3 beads had a CV of <3.0%. Within-donor CV of CD69 ABC across 1 month ranged from 2% to 18% on PMA-activated samples and from 7% to 24% on CD3 bead-activated samples. CONCLUSIONS: Our results indicate that the QuantiBRITE PE beads provide a useful tool for standardized analysis across labs. When used in conjunction with 1:1 PE-to-monoclonal-antibody conjugates, the QuantiBRITE PE beads provide a simple yet robust means of quantitating expression levels in terms of ABC.
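The ABC quantitation rests on a bead calibration step: the four QuantiBRITE bead peaks with known PE molecules per bead are regressed against measured fluorescence, and a cell's fluorescence is then converted to ABC (with a 1:1 PE-to-antibody conjugate, PE molecules per cell equal ABC). A minimal sketch follows; the bead lot values and fluorescence readings are hypothetical.

```python
# Minimal sketch: log-log calibration from bead peaks, then conversion of a
# cell's fluorescence to antibodies bound per cell (ABC). Values are hypothetical.
import math

def fit_loglog(pe_per_bead, fluorescence):
    """Least-squares fit of log10(PE) = a * log10(FL) + b over the bead peaks."""
    xs = [math.log10(f) for f in fluorescence]
    ys = [math.log10(p) for p in pe_per_bead]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def fluorescence_to_abc(fl, a, b):
    """With a 1:1 PE-to-antibody conjugate, PE molecules per cell equal ABC."""
    return 10 ** (a * math.log10(fl) + b)

# hypothetical bead lot (PE molecules per bead) and measured peak fluorescence
a, b = fit_loglog([474, 5359, 23843, 62336], [52.0, 610.0, 2750.0, 7100.0])
print(round(fluorescence_to_abc(5800.0, a, b)))  # approximate CD4 ABC for one cell
```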
12.
Several choices of amino acid substitution matrices are currently available for searching and alignment applications. These choices were evaluated using the BLAST searching program, which is extremely sensitive to differences among matrices, and the Prosite catalog, which lists members of hundreds of protein families. Matrices derived directly from either sequence-based or structure-based alignments of distantly related proteins performed much better overall than extrapolated matrices based on the Dayhoff evolutionary model. Similar results were obtained with the FASTA searching program. The improved performance appears to be general rather than family-specific, reflecting improved accuracy in scoring alignments. An implementation of a multiple-matrix strategy was also tested. While no combination of three matrices performed as well as the single best matrix, BLOSUM 62, good results were obtained using a combination of sequence-based and structure-based matrices. This hybrid set of matrices is likely to be useful in certain situations. Our results illustrate the importance of matrix selection and the value of a comprehensive approach to evaluating protein comparison tools.
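To make the role of matrix selection concrete, here is a minimal sketch of scoring an ungapped alignment with BLOSUM62, the single best-performing matrix in the evaluation; it assumes the Biopython package (which ships these matrices) is available, and the peptides are toy examples.

```python
# Minimal sketch: scoring an ungapped alignment with BLOSUM62.
# Assumes Biopython is installed (substitution_matrices ships with Biopython >= 1.75).
from Bio.Align import substitution_matrices

blosum62 = substitution_matrices.load("BLOSUM62")

def ungapped_score(seq_a, seq_b):
    """Sum of BLOSUM62 substitution scores over aligned residue pairs."""
    return sum(blosum62[a, b] for a, b in zip(seq_a, seq_b))

print(ungapped_score("HEAGAWGHEE", "HEAGAWGHEE"))  # self-score of a toy peptide
print(ungapped_score("HEAGAWGHEE", "PAWHEAEHEE"))  # lower score for a diverged pair
```

Swapping in a different matrix changes only the loaded table, which is exactly the selection effect the evaluation measures.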
14.
Persistent hepatitis C virus (HCV) infection causes chronic liver disease and may progress to cirrhosis and liver cancer. Current treatments for HCV do not achieve ideal therapeutic outcomes, so the development of new anti-HCV drugs is urgently needed. Cell models for anti-HCV drug screening, such as replicon systems, pseudovirus systems and cell culture systems, as well as animal models, such as the chimpanzee and the uPA-SCID mouse, have progressed rapidly and have advanced both hepatitis C research and the discovery of anti-HCV drugs.
15.
Two methods have been developed for protein identification from tandem mass spectra: database searching and de novo sequencing. De novo sequencing identifies peptides directly from tandem mass spectra. Among many proposed algorithms, we evaluated the performance of five de novo sequencing algorithms: AUDENS, Lutefisk, NovoHMM, PepNovo, and PEAKS. Our evaluation methods are based on the calculation of relative sequence distance (RSD), algorithm sensitivity, and spectrum quality. We found that the de novo sequencing algorithms perform differently on QSTAR and LCQ mass spectrometer data but, in general, perform better on QSTAR data than on LCQ data. For the QSTAR data, the performance order of the five algorithms is PEAKS > Lutefisk, PepNovo > AUDENS, NovoHMM. The performance of PEAKS, Lutefisk, and PepNovo depends strongly on spectrum quality and increases as spectrum quality increases; however, AUDENS and NovoHMM are not sensitive to spectrum quality. Compared with the other four algorithms, PEAKS has the best sensitivity and also the best performance over the entire range of spectrum quality. For the LCQ data, the performance order is NovoHMM > PepNovo, PEAKS > Lutefisk > AUDENS. NovoHMM has the best sensitivity, and its performance is the best over the entire range of spectrum quality, but its overall performance is not significantly different from that of PEAKS and PepNovo. AUDENS does not perform well in analyzing either QSTAR or LCQ data.
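The paper's relative sequence distance (RSD) is defined in the original work and is not reproduced here; as an illustrative stand-in, the sketch below scores a de novo prediction against the true peptide with a length-normalized edit distance (0 = identical, 1 = entirely different).

```python
# Minimal sketch: length-normalized edit distance between a de novo prediction
# and the true peptide, as a stand-in for a sequence-distance accuracy measure.
def normalized_edit_distance(pred, true):
    """Levenshtein distance divided by the longer sequence length."""
    m, n = len(pred), len(true)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (pred[i - 1] != true[j - 1]))
    return d[m][n] / max(m, n)

print(normalized_edit_distance("PEPTIDE", "PEPTLDE"))  # one substitution -> ~0.14
```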
16.
A high-speed, low-resistance inertial exercise trainer (IET; Impulse Training Systems, Newnan, Ga) is increasingly employed in rehabilitative and athletic performance settings. Repetitions on an IET are done through a large range of motion, because multijoint movements occur over more than one plane of motion, with no limitation on the velocities or accelerations attained. The purpose of the current study was to assess the reproducibility of data from an instrumented IET through multiple test-retest measures. Data collection required the left and right halves of the IET to be fitted with a TLL-2K force transducer (Transducer Techniques, Temecula, Calif) on one of their pulleys and an infrared position sensor (Model CX3-AP-1A, automationdirect.com) located midway on the underside of each track. Signals passed through DI-158U signal conditioners (DATAQ Instruments, Akron, Ohio) and were measured with a four-channel analog data acquisition card at 4000 Hz. To assess data reproducibility, college-age subjects (n = 45) performed four IET workouts spaced 1 week apart. Workouts entailed two 60-second sets of repetitive knee- and hip-extensor muscle actions, with subjects instructed to exert maximal voluntary effort. Results from multiple test-retest measures show that the IET elicited reproducible intra- and interworkout data despite the unique challenge of multiplanar and multijoint exercise done over a large range of motion. We conclude that future studies in which IET performance measurement is required may instrument the device with the current methodology. Current practical applications include making IET data easier to comprehend for the coaches, athletes, and health care providers who use the device.
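The abstract does not state which reproducibility statistic was computed; as an illustrative sketch only, the following computes a test-retest Pearson correlation between peak forces from two workouts, with hypothetical values.

```python
# Minimal sketch: test-retest correlation between peak forces (in newtons)
# measured one week apart; all values are hypothetical.
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

week1 = [812.0, 655.0, 990.0, 720.0, 860.0]
week2 = [798.0, 671.0, 1004.0, 702.0, 871.0]
print(f"test-retest r = {pearson(week1, week2):.3f}")
```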
17.
A computer-based expert system for diagnosing colonic sections as normal, adenoma or adenocarcinoma is described, along with an evaluation of its performance. On the basis of its knowledge base, consisting of the values of diagnostic clues and their associated certainty factors for the possible diagnoses, the system suggests a diagnosis for new cases presented to it. Using the data provided for 16 diagnostic clues, the system arrived at correct diagnoses for all cases of normal colon, for 49 of 50 cases of adenoma and for 48 of 49 cases of adenocarcinoma. Sample outputs from the expert system are presented and discussed, and the effects of possible alterations in the database are considered.
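The abstract does not give the system's evidence-combination rule; for illustration, the sketch below uses the classic MYCIN-style certainty-factor combination often employed in such expert systems, with hypothetical clue CFs.

```python
# Minimal sketch: MYCIN-style combination of certainty factors (CFs) from
# independent diagnostic clues; the CF values are hypothetical.
def combine_cf(cf1, cf2):
    """Reinforcing for same-sign evidence, damped for conflicting evidence."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# two clues supporting "adenoma" with CFs 0.6 and 0.4 -> combined CF 0.76
print(combine_cf(0.6, 0.4))
```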
18.
Protein complexes play a dominant role in cellular organization and function. Prediction of protein complexes from the network of physical interactions between proteins (PPI networks) has thus become one of the important research areas. Recently, many computational approaches have been developed to identify these complexes. Various performance assessment measures have been proposed for evaluating the efficiency of these methods. However, there are many inconsistencies in the definitions and usage of the measures across the literature. To address this issue, we have gathered and presented the most important performance evaluation measures and developed a tool, named CompEvaluator, to critically assess the protein complex prediction methods. The tool and documentation are publicly available at https://sourceforge.net/projects/compevaluator/files/.
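One measure commonly used in this literature for matching a predicted complex to a reference complex is the neighborhood-affinity overlap score of Bader and Hogue; since the abstract does not enumerate CompEvaluator's measures, the sketch below is illustrative rather than the tool's implementation.

```python
# Minimal sketch: overlap score between a predicted and a reference protein
# complex, given as sets of protein identifiers; values are toy examples.
def overlap_score(predicted, reference):
    """omega(A, B) = |A & B|^2 / (|A| * |B|); a match is often declared at >= 0.2."""
    inter = len(predicted & reference)
    return inter * inter / (len(predicted) * len(reference))

pred = {"P1", "P2", "P3", "P4"}
ref = {"P2", "P3", "P4", "P5", "P6"}
print(overlap_score(pred, ref))  # 9 / 20 = 0.45 -> counted as correctly predicted
```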
19.
1. The writers present the general theory of evaluation that is being developed by their group.
2. The evaluation of a human environment is a complex mental process.
3. In an effort to express numerically the quality of an environment, one tends to oversimplify its complex aspects and the problems they entail for its inhabitants.
4. In this paper, some examples are taken from the evaluation of thermal environments, where much has been said and done in setting up numerical scales to express human comfort, and yet neither clear-cut explanations nor convincing logic seem to exist to settle the argument over the widely scattered and sometimes seemingly contradictory experimental data.
5. The writers suggest that many of the reasons for this confusion may be traced back to an oversimplified notion of evaluation.
6. It is shown that there are various possibilities when looking at the scales of evaluation.
7. The nominal scale, the least studied of the four traditional scales, may be given a prominent place in evaluating a thermal environment. The pseudo-interval order scale is another example.
Author Keywords: evaluation; scales; thermal environment; classification; pseudo-interval order
20.
The graphics processing unit (GPU), which originally was used exclusively for visualization purposes, has evolved into an extremely powerful co-processor. Meanwhile, through the development of elaborate interfaces, the GPU can be used to process data and to handle computationally intensive applications. The speed-up factors attained compared to the central processing unit (CPU) depend on the particular application, as the GPU architecture gives the best performance for algorithms that exhibit high data parallelism and high arithmetic intensity. Here, we evaluate the performance of the GPU on a number of common algorithms used for three-dimensional image processing. The algorithms were developed on a new software platform called "CUDA", which allows a direct translation from C code to the GPU. The implemented algorithms include spatial transformations, real-space and Fourier operations, as well as pattern recognition procedures, reconstruction algorithms and classification procedures. In our implementation, the direct porting of C code to the GPU achieves typical acceleration values on the order of 10-20 times compared to a state-of-the-art conventional processor, but they vary depending on the type of algorithm. The gained speed-up comes at no additional cost, since the software runs on the GPU of the graphics card of common workstations.