Similar Articles
20 similar articles found
1.
We combined routinely reported tuberculosis (TB) patient characteristics with genotyping data and measures of geospatial concentration to predict which small clusters (i.e., consisting of only 3 TB patients) in the United States were most likely to become outbreaks of at least 6 TB cases. Of 146 clusters analyzed, 16 (11.0%) grew into outbreaks. Clusters most likely to become outbreaks were those in which at least 1 of the first 3 patients reported homelessness, excess alcohol use, or illicit drug use, or was incarcerated at the time of TB diagnosis, and in which the cluster grew rapidly (i.e., the third case was diagnosed within 5.3 months of the first case). Of 17 clusters with these characteristics and therefore considered high risk, 9 (53%) became outbreaks. This retrospective cohort analysis of clusters in the United States suggests that routinely reported data may identify small clusters that are likely to become outbreaks and are therefore candidates for intensified contact investigations.
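A minimal sketch of how the reported two-part high-risk rule could be encoded is shown below. The field names and the data class are hypothetical, introduced only for illustration; the 5.3-month cutoff and the risk factors are taken from the abstract above.

```python
from dataclasses import dataclass

@dataclass
class TBCase:
    """One of the first three cases in a genotype cluster (hypothetical fields)."""
    months_since_first_case: float
    homeless: bool = False
    excess_alcohol_or_drug_use: bool = False
    incarcerated_at_diagnosis: bool = False

def is_high_risk_cluster(first_three_cases: list[TBCase],
                         growth_cutoff_months: float = 5.3) -> bool:
    """Flag a 3-case cluster as high risk using the two reported criteria:
    (1) any of the first three patients has a social risk factor, and
    (2) the third case was diagnosed within ~5.3 months of the first."""
    has_risk_factor = any(
        c.homeless or c.excess_alcohol_or_drug_use or c.incarcerated_at_diagnosis
        for c in first_three_cases
    )
    grew_rapidly = max(c.months_since_first_case
                       for c in first_three_cases) <= growth_cutoff_months
    return has_risk_factor and grew_rapidly
```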

2.
3.
Background

Tsetse flies occur in much of sub-Saharan Africa, where they transmit the trypanosomes that cause the diseases of sleeping sickness in humans and nagana in livestock. One of the most economical and effective methods of tsetse control is the use of insecticide-treated screens, called targets, that simulate hosts. Targets have been ~1 m², but recently it was shown that those tsetse that occupy riverine situations, and which are the main vectors of sleeping sickness, respond well to targets of only ~0.06 m². The cheapness of these tiny targets suggests the need to reconsider what intensity and duration of target deployments comprise the most cost-effective strategy in various riverine habitats.

Conclusion/Significance

Seasonal use of tiny targets deserves field trials. The ability to recognise habitat that contains tsetse populations which are not self-sustaining could improve the planning of all methods of tsetse control, against any species, in riverine, savannah or forest situations. Criteria to assist such recognition are suggested.

4.
Abstract: Animal locations estimated by Global Positioning System (GPS) inherently contain errors. Screening procedures used to remove large positional errors often trade data accuracy for data loss. We developed a simple screening method that identifies locations arising from unrealistic movement patterns. When applied to a large data set of moose (Alces alces) locations, our method identified virtually all known errors with minimal loss of data. Thus, our method for screening GPS data improves the quality of data sets and increases the value of such data for research and management.
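A minimal sketch of a movement-based screen of the kind described (flagging fixes whose incoming and outgoing steps both imply biologically implausible speeds, i.e. an out-and-back spike) is given below. The column names, the speed cutoff, and the spike criterion are illustrative assumptions, not the authors' exact rule.

```python
import numpy as np
import pandas as pd

def flag_implausible_fixes(df: pd.DataFrame, max_speed_kmh: float = 15.0) -> pd.DataFrame:
    """df: GPS fixes for one animal with columns 'timestamp' (datetime), 'x', 'y'
    (projected metres). Flags fixes where both the incoming and the outgoing step
    would require travel faster than `max_speed_kmh` (an out-and-back spike).
    Column names and the cutoff are illustrative."""
    df = df.sort_values("timestamp").reset_index(drop=True)
    dt_h = df["timestamp"].diff().dt.total_seconds() / 3600.0
    step_km = np.hypot(df["x"].diff(), df["y"].diff()) / 1000.0
    speed_in = step_km / dt_h            # speed needed to arrive at this fix
    speed_out = speed_in.shift(-1)       # speed needed to leave it again
    df["suspect"] = (speed_in > max_speed_kmh) & (speed_out > max_speed_kmh)
    return df

# usage sketch: screened = flag_implausible_fixes(moose_fixes)
#               clean = screened[~screened["suspect"]]
```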

5.
ABSTRACT Knowledge of the range, behavior, and feeding habits of large carnivores is fundamental to their successful conservation. Traditionally, the best method to obtain feeding data is through continuous observation, which is not always feasible. Reliable automated methods are needed to obtain sample sizes sufficient for statistical inference. Identification of large carnivore kill sites using Global Positioning System (GPS) data is gaining popularity. We assessed performance of generalized linear regression models (GLM) versus classification trees (CT) in a multipredator, multiprey African savanna ecosystem. We applied GLMs and CTs to various combinations of distance-traveled data, cluster durations, and environmental factors to predict occurrence of 234 female African lion (Panthera leo) kill sites from 1,477 investigated GPS clusters. Ratio of distance moved 24 hours before versus 24 hours after a cluster was the most important predictor variable in both GLM and CT analysis. In all cases, GLMs outperformed our cost-complexity-pruned CTs in their discriminative ability to separate kill from nonkill sites. Generalized linear models provided a good framework for kill-site identification that incorporates a hierarchical ordering of cluster investigation and measures to assess trade-offs between classification accuracy and time constraints. Implementation of GLMs within an adaptive sampling framework can considerably increase efficiency of locating kill sites, providing a cost-effective method for increasing sample sizes of kill data.
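A minimal sketch of the GLM approach described above (a binomial GLM, i.e. logistic regression, on GPS-cluster covariates, then ranking clusters by predicted kill probability) follows. The data are synthetic and the covariate names are illustrative assumptions, not the authors' variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for a table of investigated GPS clusters; in practice each row
# would carry field-verified kill/non-kill status. Column names are illustrative.
rng = np.random.default_rng(42)
n = 400
clusters = pd.DataFrame({
    "movement_ratio": rng.lognormal(0.0, 0.6, n),   # distance moved 24 h before / 24 h after
    "cluster_duration_h": rng.gamma(2.0, 3.0, n),
    "n_night_fixes": rng.poisson(4, n),
})
# Simulate kill sites being likelier at high movement ratios and long cluster durations.
logit = -3 + 1.2 * np.log(clusters["movement_ratio"]) + 0.15 * clusters["cluster_duration_h"]
clusters["is_kill"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Binomial GLM with a logit link, the model class favoured in the comparison above.
X = sm.add_constant(clusters[["movement_ratio", "cluster_duration_h", "n_night_fixes"]])
fit = sm.GLM(clusters["is_kill"], X, family=sm.families.Binomial()).fit()
print(fit.summary())

# Rank clusters by predicted kill probability so the most likely kill sites are visited first
# (the adaptive-sampling idea).
clusters["p_kill"] = fit.predict(X)
print(clusters.sort_values("p_kill", ascending=False).head())
```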

6.
7.
ABSTRACT Animal movement studies regularly use movement states (e.g., slow and fast) derived from remotely sensed locations to make inferences about strategies of resource use. However, the number of movement state categories used is often arbitrary and rarely inferred from the data. Identifying groups with similar movement characteristics is a statistical problem. We present a framework based on k-means clustering and the gap statistic for evaluating the number of movement states without making a priori assumptions about the number of clusters. This allowed us to distinguish 4 movement states using turning angle and step length derived from Global Positioning System locations and head movements derived from tip switches in a neck collar of free-ranging elk (Cervus elaphus) in west central Alberta, Canada. Based on movement characteristics and on the linkage between each state and landscape features, we were able to identify inter-patch movements, intra-patch foraging, rest, and inter-patch foraging movements. Linking behavior to environment (e.g., state-dependent habitat use) can inform decisions on landscape management for wildlife.
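A minimal sketch of the k-means-plus-gap-statistic idea (Tibshirani-style: compare the within-cluster dispersion on the data with its expectation under uniform reference data) is shown below. It is not the authors' implementation; the movement metrics in the usage comment and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def within_dispersion(X, labels):
    """Sum over clusters of squared distances to the cluster centroid."""
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

def gap_statistic(X, k_max=8, n_refs=20, random_state=0):
    """Gap(k) = E*[log W_k] - log W_k, with the reference expectation estimated
    from uniform draws over the data's bounding box; returns the smallest k with
    Gap(k) >= Gap(k+1) - s_(k+1)."""
    rng = np.random.default_rng(random_state)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
        log_wk = np.log(within_dispersion(X, km.labels_))
        ref_log_wk = []
        for _ in range(n_refs):
            ref = rng.uniform(lo, hi, size=X.shape)
            km_ref = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(ref)
            ref_log_wk.append(np.log(within_dispersion(ref, km_ref.labels_)))
        ref_log_wk = np.array(ref_log_wk)
        gaps.append(ref_log_wk.mean() - log_wk)
        sks.append(ref_log_wk.std(ddof=1) * np.sqrt(1 + 1 / n_refs))
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:
            return k, gaps
    return k_max, gaps

# usage sketch with hypothetical movement metrics per GPS step:
# X = np.column_stack([np.log(step_length_m), np.cos(turning_angle)])
# k_best, gaps = gap_statistic(X)
```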

8.
The relationships between the locations of the clumps of sprouts, some morphological characteristics of the clumps, and the local soil environment in an old sweet chestnut coppice are studied. The theory of marked point processes, which has not yet been used extensively in forestry studies, is shown to be adequate for the analysis of this type of spatial data. The marks correspond to morphological characteristics of the clumps: “diameter”, “number of sprouts”, “height at one year”, and “height at three years”. Several covariance functions are described, which provide a method for exploring the spatial relationships within the stand. Some of these functions are introduced for the first time in an actual statistical analysis. Using these functions, it is shown that the clumps are regularly distributed. The variables “diameter” and “number of sprouts” are strongly and negatively spatially correlated, whereas the heights are only slightly correlated or not at all. By categorising the individuals according to the mark values, it is shown that the small clumps tend to be aggregated in the gaps between medium and large clumps. Height values in the tails of the distribution, as well as their spatial correlation, are related to the local soil environment.

9.
10.
It has been shown that the population average blood glucose level of diabetes patients shows seasonal variation, with higher levels in the winter than in summer. However, seasonality in the population averages could be due to a tendency in the individual to seasonal variation, or alternatively to occasional high winter readings (spiking), with different individuals showing this increase in different winters. A method was developed to rule out spiking as the dominant pattern underlying the seasonal variation in the population averages. Three years of data from three community-serving laboratories in Israel were retrieved. Diabetes patients (N = 3243) with a blood glucose result every winter and summer over the study period were selected. For each individual, the following were calculated: the seasonal average glucose for each winter and summer over the period of study (2006–2009); the winter-summer difference for each adjacent winter-summer pair, and the average of these five differences; an index of the degree of spikiness in the pattern of the six seasonal levels; and the number of times out of five that the winter-summer difference was positive. Seasonal population averages were examined. The distribution of the individual's differences between adjacent seasons (winter minus summer) was examined and compared between subgroups. Seasonal population averages were reexamined in groups divided according to the index of the degree of spikiness in the individual's glucose pattern over the series of seasons. Seasonal population averages showed higher winter than summer levels. The overall median winter-summer difference on the individual level was 8 mg/dL (0.4 mmol/L). In 16.9% (95% confidence interval [CI]: 15.6–18.2%) of the population, all five winter-summer differences were positive, versus 3.6% (95% CI: 3.0–4.2%) in whom all five winter-summer differences were negative. Seasonal variation in the population averages was not attenuated in the group having the lowest spikiness index; comparison of the distributions of the winter-summer differences in the high-, medium-, and low-spikiness groups showed no significant difference (p = .213). Therefore, seasonality in the population average blood glucose in diabetes patients is not just the result of occasional high measurements in different individuals in different winters, but presumably reflects a general periodic tendency in individuals for winter glucose levels to be higher than summer levels.
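A minimal sketch of the per-individual winter-summer comparison follows, run on synthetic data so it is self-contained. The layout (six alternating seasonal averages per patient, winters in the even columns) and the crude spikiness index are assumptions for illustration; the paper's exact index is not described above.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in: six alternating seasonal glucose averages per patient
# (winter, summer, winter, ..., 2006-2009); winters are the even columns here.
rng = np.random.default_rng(0)
n = 3243
base = rng.normal(140, 30, size=(n, 1))
season_effect = np.tile([8, 0, 8, 0, 8, 0], (n, 1))        # winters ~8 mg/dL higher
wide = pd.DataFrame(base + season_effect + rng.normal(0, 15, size=(n, 6)))

# Winter-minus-summer difference for each of the 5 adjacent season pairs.
diffs = pd.DataFrame({
    i: (wide[i] - wide[i + 1]) if i % 2 == 0 else (wide[i + 1] - wide[i])
    for i in range(5)
})

n_positive = (diffs > 0).sum(axis=1)
print("median winter-summer difference:", diffs.mean(axis=1).median())
print("all five differences positive:", (n_positive == 5).mean())
print("all five differences negative:", (n_positive == 0).mean())

# One crude 'spikiness' index (illustrative only): how far the maximum seasonal
# value sits above the individual's median, scaled by their variability.
spikiness = (wide.max(axis=1) - wide.median(axis=1)) / wide.std(axis=1)
low_spike = wide[spikiness <= spikiness.quantile(1 / 3)]
print("seasonal means in the low-spikiness tercile:\n", low_spike.mean())
```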

11.
Introduction

Vaccinating a buffer of individuals around a case (ring vaccination) has the potential to target those who are at highest risk of infection, reducing the number of doses needed to control a disease. We explored the potential vaccine effectiveness (VE) of oral cholera vaccines (OCVs) for such a strategy.

Conclusions

These findings suggest that high-level protection can be achieved if individuals living close to cholera cases are in a high-coverage ring. Since this was an observational study including participants who had received two doses of vaccine (or placebo) in the clinical trial, further studies are needed to determine whether a ring vaccination strategy, in which vaccine is given quickly to those living close to a case, is feasible and effective.

Trial registration

ClinicalTrials.gov NCT00289224

12.
Humans move frequently and tend to carry parasites among areas with endemic malaria and into areas where local transmission is unsustainable. Human-mediated parasite mobility can thus sustain parasite populations in areas where they would otherwise be absent. Data describing human mobility and malaria epidemiology can help classify landscapes into parasite demographic sources and sinks, ecological concepts that have parallels in malaria control discussions of transmission foci. By linking transmission to parasite flow, it is possible to stratify landscapes for malaria control and elimination, as sources are disproportionately important to the regional persistence of malaria parasites. Here, we identify putative malaria sources and sinks for pre-elimination Namibia using malaria parasite rate (PR) maps and call data records from mobile phones, using a steady-state analysis of a malaria transmission model to infer where infections most likely occurred. We also examined how the landscape of transmission and burden changed from the pre-elimination setting by comparing the location and extent of predicted pre-elimination transmission foci with modeled incidence for 2009. This comparison suggests that while transmission was spatially focal pre-elimination, the spatial distribution of cases changed as burden declined. The changing spatial distribution of burden could be due to importation, with cases focused around importation hotspots, or due to heterogeneous application of elimination effort. While this framework is an important step towards understanding progressive changes in malaria distribution and the role of subnational transmission dynamics in a policy-relevant way, future work should account for international parasite movement, utilize real time surveillance data, and relax the steady state assumption required by the presented model.
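A toy illustration of the source/sink idea (net parasite flow under a mobility matrix) is sketched below. It is not the transmission model used in the study; the mobility matrix, per-visit risks, and the source/sink rule are all invented for illustration.

```python
import numpy as np

# M[i, j] = share of time residents of patch i spend in patch j (rows sum to 1),
# r[j]    = per-visit risk of acquiring infection in patch j (e.g. PR-derived).
M = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.85, 0.05],
              [0.20, 0.10, 0.70]])
r = np.array([0.30, 0.05, 0.01])

# acquired[i, j]: rate at which residents of i are infected while in j.
acquired = M * r

# Infections each patch generates in visitors (it "exports" them when visitors go home),
# versus infections its own residents pick up elsewhere (it "imports" them).
exported = acquired.sum(axis=0) - np.diag(acquired)
imported = acquired.sum(axis=1) - np.diag(acquired)
net_flow = exported - imported

for i, f in enumerate(net_flow):
    print(f"patch {i}: {'source' if f > 0 else 'sink'} (net flow {f:+.3f})")
```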

13.
One of the criticisms of industry-sponsored human subject testing of toxicants is based on the perception that it is often motivated by an attempt to raise the acceptable exposure limit for the chemical. When Reference Doses (RfDs) or Reference Concentrations (RfCs) are based upon no-effect levels from human rather than animal data, an animal-to-human uncertainty factor (usually 10) is not required, which could conceivably result in a higher safe exposure limit. There has been little study of the effect of using human vs. animal data in the development of RfDs and RfCs to lend empirical support to this argument. We have recently completed an analysis comparing RfDs and RfCs derived from human data with toxicity values for the same chemicals based on animal data. The results, published in detail elsewhere, are summarized here. We found that the use of human data did not always result in higher RfDs or RfCs. In 36% of the comparisons, human-based RfDs or RfCs were lower than the corresponding animal-based toxicity values, and were more than 3-fold lower in 23% of the comparisons. In 10 of 43 possible comparisons (23%), experimental animal data were either insufficient or inappropriate for estimating RfDs or RfCs. Although there are practical limitations in conducting this type of analysis, it nonetheless suggests that the use of human data does not routinely lead to higher toxicity values. Given the inherent ability of human data to reduce uncertainty regarding risks from human exposures, its use in conjunction with data gathered from experimental animals is a public health protective policy that should be encouraged.

14.
The analysis and management of MS data, especially those generated by data-independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML-based formats are not well suited to large MS data files, for example those generated by SWATH-MS, and compromise high-throughput data processing and storage.

We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, in which the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction.

In comparison with XML formats, mzDB saves ~25% of storage space and improves access times by a factor of two up to 2000-fold, depending on the particular data access. mzDB also shows slightly to significantly lower access times in comparison with other formats such as mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can be easily accessed through the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB format described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility.

The continuous improvement of mass spectrometers (1–4) and HPLC systems (5–10) and the rapidly increasing volumes of data they produce pose a real challenge to software developers, who constantly have to adapt their tools to deal with different types and increasing sizes of raw files. Indeed, the file size of a single MS analysis has grown from a few MB to several GB in less than 10 years. The introduction of high-throughput, high-mass-accuracy MS analyses in data-dependent acquisition (DDA) and the adoption of data-independent acquisition (DIA) approaches, for example SWATH-MS (11), were significant factors in this development. The management of these huge data files is a major issue for laboratories and public raw-file repositories, which need to regularly upgrade their storage solutions and capacity.

The availability of XML (eXtensible Markup Language) standard formats (12, 13) enhanced data exchange among laboratories. However, XML inflates raw file size by a factor of two to three compared with the original size. Vendor files, although lighter, are proprietary formats, often not compatible with operating systems other than Microsoft Windows. They do not generally interface with many open-source software tools and do not offer a viable solution for data exchange. In addition to size inflation, other disadvantages associated with the use of XML for the representation of raw data have been previously described in the literature (14–17). These include the verbosity of the language syntax, the lack of support for multidimensional chromatographic analyses, and the low performance shown during data processing. Although XML standards were originally conceived as a format for enabling data sharing in the community, they are commonly used as the input for MS data analysis.
The latest software tools (18, 19) are usually only compatible with mzML files, limiting de facto the throughput of proteomic analyses. To tackle these issues, some independent laboratories developed open formats relying on binary specifications (14, 17, 20, 21) to optimize both file size and data-processing performance. Similar efforts started more than ten years ago; among others, netCDF version 4, first described in 2004, added support for a new data model called HDF5. Because it is particularly well suited to the representation of complex data, HDF5 has been used in several scientific projects to store and efficiently access large volumes of bytes, as in the mz5 format (17). Compared with XML-based formats, mz5 is much more efficient in terms of file size, memory footprint, and access time. Thus, after replacing the JCAMP text format more than 10 years ago, netCDF is nowadays a suitable alternative to XML-based formats. Nonetheless, solutions for storing and indexing large amounts of data in a binary file are not limited to netCDF. For instance, it has been demonstrated that a relational model can represent raw data, as in the YAFMS format (14), which is based on SQLite, a technology that allows implementing a portable, self-contained, single-file database. Like mz5, YAFMS is considerably more efficient than XML in terms of file size and access times.

Despite these improvements, a limitation of these new binary formats lies in the lack of a multi-indexing model to represent the two-dimensional structure of LC-MS data. The inherently 2D indexing of LC-MS data can indeed be very useful when working with LC-MS/MS acquisition files. In the current state of the art, three main raw-data access strategies can be identified across DDA and DIA approaches:
  • (1) Sequential reading of whole m/z spectra, for a systematic processing of the entire raw file. Use cases: file format conversion, peak picking, analysis of MS/MS spectra, and MS/MS peak list generation.
  • (2) Systematic processing of the data contained in specific m/z windows, across the entire chromatographic gradient. Use cases: extraction of XICs on the whole chromatographic gradient and MS features detection.
  • (3) Random access to a small region of the LC-MS map (a few spectra or an m/z window of consecutive spectra). Use cases: data visualization, targeted extraction of XICs on a small time range, and targeted extraction of a subset of spectra.
The adoption of a given data access strategy depends on the particular data analysis algorithm, which can perform signal extraction mainly through unsupervised or supervised approaches. Unsupervised approaches (18, 22–25) recognize LC-MS features on the basis of patterns such as the theoretical isotope distribution, the shape of the elution peaks, etc. Conversely, supervised approaches (29–33) implement peak picking as data access driven by a priori knowledge of peptide coordinates (m/z, retention time, and precursor m/z for DIA), which are provided by extraction lists from the identification search engine or by transition lists in targeted proteomics (34). Data access overhead can vary significantly according to the specific algorithm, the data size, and the length of the extraction list. In the unsupervised approach, feature detection is based first on the analysis of the full set of MS spectra and then on the grouping of the peaks detected in adjacent MS scans; thus, optimized sequential spectrum access is required. In the supervised approach, peptide XICs are extracted using their a priori coordinates, and therefore sequential spectrum access is not a suitable solution; for instance, MS spectra shared by different peptides would be loaded multiple times, leading to highly redundant data reloading. Even though sophisticated caching mechanisms can reduce the impact of this issue, they would increase memory consumption. It is thus preferable to perform targeted access to specific MS spectra by leveraging an index in the time dimension. However, this would still be a sub-optimal solution because of redundant loads of full MS spectra, whereas only a small spectral window centered on the peptide m/z is of interest. Thus, the quantification of tens of thousands of peptides (32, 33) requires appropriate data access methods to cope with the repetitive, heavy loading of MS data.

We therefore consider that an ideal file format should show comparable efficiency regardless of the particular use case. In order to achieve this flexibility and efficiency for any data access, we developed a new solution featuring multiple indexing strategies: the mzDB format (i.e., m/z database). Like the YAFMS format, mzDB is implemented using SQLite, which is commonly adopted in many computational projects and is compatible with most programming languages. In contrast to the mz5 and YAFMS formats, where each spectrum is referenced by a single index entry, mzDB has an internal data structure that allows multidimensional data indexing, and thus supports efficient queries along both the time and m/z dimensions. This makes mzDB specifically suited to the processing of large-scale LC-MS/MS data. In particular, the multidimensional data-indexing model was extended for SWATH-MS data, where a third index is given by the m/z of the precursor ion, in addition to the RT and m/z of the fragment ions.

In order to show its efficiency for all the described data access strategies, mzDB was compared with the mzML format, which is the official XML standard, and with the recent mz5 binary format, which has already been compared with many existing file formats (17). Results show that mzDB outperforms the other formats in most comparisons, except in sequential reading benchmarks, where mz5 and mzDB are comparable.
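A minimal sketch of the kind of windowed query that multidimensional indexing makes cheap (access strategy 3 above) is shown below, using a deliberately simplified, hypothetical layout: an SQLite R*Tree over retention-time and m/z bounding boxes plus a chunk table. This is not the actual mzDB schema; all table, column, and file names are assumptions.

```python
import sqlite3

# Hypothetical, simplified layout (NOT the actual mzDB schema): spectra are cut into
# chunks whose RT x m/z bounding boxes are indexed by an SQLite R*Tree, so a small
# LC-MS window can be fetched without scanning whole spectra. Requires an SQLite
# build with the R*Tree module (the default in most Python distributions).
con = sqlite3.connect("example.sqlite")
con.executescript("""
CREATE VIRTUAL TABLE IF NOT EXISTS bbox_index USING rtree(
    id, min_rt, max_rt, min_mz, max_mz          -- bounding box of each data chunk
);
CREATE TABLE IF NOT EXISTS bounding_box (
    id INTEGER PRIMARY KEY,
    precursor_mz REAL,                          -- SWATH isolation window (NULL for MS1)
    data BLOB                                   -- packed (m/z, intensity) peaks
);
""")

def fetch_window(rt_min, rt_max, mz_min, mz_max, precursor_mz=None):
    """Targeted XIC-style access: return only the chunks overlapping a small
    RT x m/z window, optionally restricted to one SWATH precursor window."""
    sql = """
        SELECT b.id, b.precursor_mz, b.data
        FROM bbox_index i JOIN bounding_box b ON b.id = i.id
        WHERE i.max_rt >= ? AND i.min_rt <= ?
          AND i.max_mz >= ? AND i.min_mz <= ?
    """
    params = [rt_min, rt_max, mz_min, mz_max]
    if precursor_mz is not None:                # third index dimension for DIA/SWATH data
        sql += " AND b.precursor_mz = ?"
        params.append(precursor_mz)
    return con.execute(sql, params).fetchall()

# e.g. a 2-minute, 0.05 Th window around a target peptide:
# chunks = fetch_window(rt_min=21.0, rt_max=23.0, mz_min=712.30, mz_max=712.35)
```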
The access performance, portability, and compactness of mzDB, as well as its compliance with the PSI controlled vocabulary, make it complementary to existing solutions for both the storage and the exchange of mass spectrometry data, and should eventually address the issues related to data access overhead during processing. mzDB can therefore enhance existing mass spectrometry data analysis pipelines, offering unprecedented performance and, consequently, new possibilities.

15.
The quantitation of human granulocyte movement using a stochastic differential equation is described. The method has the potential to distinguish both positive and negative chemotaxis. Analysis and information concerning cell movements can be obtained for any point in time and distance for the duration of the experiment.
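A toy sketch of the kind of drift-diffusion (SDE) description such analyses rely on is shown below: a one-dimensional model dX = χ dt + √(2D) dW, where the sign of the drift χ distinguishes positive from negative chemotaxis. This is a generic illustration, not the specific equation or estimation procedure used in the study; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_tracks(chi, D, n_cells=200, n_steps=600, dt=1.0):
    """Euler-Maruyama simulation of dX = chi*dt + sqrt(2*D)*dW for many cells.
    Returns an (n_cells, n_steps + 1) array of positions (arbitrary units)."""
    x = np.zeros((n_cells, n_steps + 1))
    noise = rng.normal(size=(n_cells, n_steps))
    for t in range(n_steps):
        x[:, t + 1] = x[:, t] + chi * dt + np.sqrt(2 * D * dt) * noise[:, t]
    return x

tracks = simulate_tracks(chi=0.05, D=2.0)        # positive drift = positive chemotaxis

# Recover drift and diffusion coefficients from the per-step displacements (dt = 1):
steps = np.diff(tracks, axis=1)
chi_hat = steps.mean()                           # estimate of the drift chi
D_hat = steps.var() / 2                          # estimate of the diffusion D
print(f"estimated drift {chi_hat:.3f}, diffusion {D_hat:.3f}")
```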

16.
We develop a numerical method for estimating optimal parameters in a mathematical model of the within-host dynamics of malaria infection. The model consists of a quasilinear system of partial differential equations. Convergence theory for the computed parameters is provided. Following this analysis, we present several numerical simulations that suggest that periodic treatments that are in synchronization with the periodic bursting rate of infected erythrocytes are the most productive strategies.

17.
Controlled human malaria infection (CHMI) is a powerful method for assessing the efficacy of anti-malaria vaccines and drugs targeting pre-erythrocytic and erythrocytic stages of the parasite. CHMI has heretofore required the bites of 5 Plasmodium falciparum (Pf) sporozoite (SPZ)-infected mosquitoes to reliably induce Pf malaria. We reported that CHMI using the bites of 3 PfSPZ-infected mosquitoes reared aseptically in compliance with current good manufacturing practices (cGMP) was successful in 6 participants. Here, we report results from a subsequent CHMI study using 3 PfSPZ-infected mosquitoes reared aseptically to validate the initial clinical trial. We also compare results of safety, tolerability, and transmission dynamics in participants undergoing CHMI using 3 PfSPZ-infected mosquitoes reared aseptically to published studies of CHMI using 5 mosquitoes. Nineteen adults aged 18–40 years were bitten by 3 Anopheles stephensi mosquitoes infected with the chloroquine-sensitive NF54 strain of Pf. All 19 participants developed malaria (100%); 12 of 19 (63%) on Day 11. The mean pre-patent period was 258.3 hours (range 210.5–333.8). The geometric mean parasitemia at first diagnosis by microscopy was 9.5 parasites/µL (range 2–44). Quantitative polymerase chain reaction (qPCR) detected parasites an average of 79.8 hours (range 43.8–116.7) before microscopy. The mosquitoes had a geometric mean of 37,894 PfSPZ/mosquito (range 3,500–152,200). Exposure to the bites of 3 aseptically-raised, PfSPZ-infected mosquitoes is a safe, effective procedure for CHMI in malaria-naïve adults. The aseptic model should be considered as a new standard for CHMI trials in non-endemic areas. Microscopy is the gold standard used for the diagnosis of Pf malaria after CHMI, but qPCR identifies parasites earlier. If qPCR continues to be shown to be highly specific, and can be made to be practical, rapid, and standardized, it should be considered as an alternative for diagnosis.

Trial Registration

ClinicalTrials.gov NCT00744133

18.
Cerebral glucose metabolism is a reliable index of neural activity and may provide evidence for brain function in healthy adults. We studied the correlation between cerebral glucose metabolism and age in the resting state in both sexes with positron emission tomography. The statistical test of the age effect on cerebral glucose metabolism was performed using the statistical parametric mapping software with a voxel-by-voxel approach (family-wise error corrected, voxel threshold). The subjects consisted of 108 females (mean ± S.D. = 45 ± 10 years) and 126 males (mean ± S.D. = 49 ± 11 years). We showed that brain activity in the frontal and temporal lobes in both sexes decreased significantly with normal aging. Glucose metabolism in the caudate bilaterally showed a negative correlation with age in males, but not in females. Few regions in males showed an increased glucose metabolism with age. Although the mechanisms of brain aging are still unknown, a map of brain areas susceptible to age is described in this report.

19.
20.

Background

Brain-Computer Interfaces (BCIs) are devices that allow direct communication between the brain of a user and a machine. This technology can be used by disabled people to improve their independence and maximize their capabilities, such as finding an object in the environment. Such devices can be realized through the non-invasive measurement of information from the cortex by electroencephalography (EEG).

Methods

Our work proposes a novel BCI system for controlling a robot arm based on the user's thought. Four subjects (1 female and 3 males) aged between 20 and 29 years participated in our experiment. They were instructed to imagine the execution of movements of the right hand, the left hand, both hands, or the feet, depending on the established protocol. An EMOTIV EPOC headset was used to record neuronal electrical activity from the subject's scalp; these signals were then sent to the computer for analysis. Feature extraction was performed using Principal Component Analysis (PCA) combined with the Fast Fourier Transform (FFT) spectrum within the frequency band responsible for sensorimotor rhythms (8 Hz–22 Hz). These features were then fed into a Support Vector Machine (SVM) classifier with a Radial Basis Function (RBF) kernel, whose outputs were translated into commands to control the robot arm.
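A minimal sketch of such a classification pipeline (band-limited FFT features, PCA, RBF-kernel SVM) is given below. The sampling rate, epoch length, channel count, component number, and the synthetic placeholder data are all assumptions for illustration, not the authors' settings or results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128            # assumed sampling rate (Hz)
BAND = (8, 22)      # sensorimotor band used above

def band_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) EEG array (hypothetical input).
    Returns per-channel FFT magnitudes restricted to the 8-22 Hz band, flattened."""
    n_samples = epochs.shape[-1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(epochs, axis=-1))
    band = (freqs >= BAND[0]) & (freqs <= BAND[1])
    return spectrum[..., band].reshape(len(epochs), -1)

# Placeholder data standing in for recorded imagined-movement epochs and the four
# class labels (right hand, left hand, both hands, feet).
rng = np.random.default_rng(0)
epochs = rng.normal(size=(80, 14, 256))
labels = rng.integers(0, 4, size=80)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),                    # component count is illustrative
    SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF-kernel SVM, as in the pipeline above
)
print(cross_val_score(clf, band_features(epochs), labels, cv=5).mean())
```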

Results

The proposed BCI enabled control of the robot arm in four directions: right, left, up, and down, achieving an average accuracy of 85.45% across all subjects.

Conclusion

The results obtained would encourage, with further development, the use of the proposed BCI to perform more complex tasks, such as executing successive movements or stopping execution once a searched-for object is detected. This would provide a useful means of assistance for people with motor impairment.
