Similar Literature
20 similar documents found (search time: 46 ms)
1.
The WHO Simplified Methodology (1987) is being applied in several studies by the Task Force on Methods for the Natural Regulation of Fertility of the Special Program on Human Reproduction, in centers in Chengdu, China; Guatemala City, Guatemala; New Delhi, India; Sagamu, Nigeria; Santiago, Chile; Uppsala, Sweden; and Westmead/Sydney, Australia. 550 lactating mothers who can read and write were examined in order to provide a better understanding of the relationship between breast-feeding duration and lactational amenorrhea, and to determine whether the longitudinal study results are applicable to the general population. The protocol involved collecting data on breast-feeding frequency, timing, and duration; supplementary feeding characteristics and timing; and maternal and infant health. The WHO protocol is also being examined in studies in Colombo, Sri Lanka, and Sagamu, Nigeria, whose objective was to examine the effect of maternal nutritional supplementation (skimmed milk powder in Colombo and a high-protein biscuit in Sagamu) on the duration of lactational amenorrhea in moderately malnourished breast-feeding mothers. Follow-up studies are expected. Optimally, the end product would have been a direct measure of ovulation; however, logistics prevented this. Instead, weekly urine samples were collected and tested for the presence of estrogen and pregnanediol glucuronide. Motivation is a key determinant of the success of these projects, since detailed record keeping over a prolonged period of time is required. Motivational interventions vary between centers and may involve social contact with investigators or health care support for the mother and infant. Preliminary results indicate that the higher the percentage receiving supplementation, the earlier the return of menses.

2.
This report provides an overview of the discussions, presentations, and consensus thinking from the Workshop on Smart Data Collection for CryoEM held at the New York Structural Biology Center on April 6–7, 2022. The goal of the workshop was to address next generation data collection strategies that integrate machine learning and real-time processing into the workflow to reduce or eliminate the need for operator intervention.

3.
Understanding patterns of human evolution across space and time requires synthesizing data collected by independent research teams, and this effort is part of a larger trend to develop cyber infrastructure and e‐science initiatives. At present, paleoanthropology cannot easily answer basic questions about the total number of fossils and artifacts that have been discovered, or exactly how those items were collected. In this paper, we examine the methodological challenges to data integration, with the hope that mitigating the technical obstacles will further promote data sharing. At a minimum, data integration efforts must document what data exist and how the data were collected (discovery), after which we can begin standardizing data collection practices with the aim of achieving combined analyses (synthesis). This paper outlines a digital data collection system for paleoanthropology. We review the relevant data management principles for a general audience and supplement this with technical details drawn from over 15 years of paleontological and archeological field experience in Africa and Europe. The system outlined here emphasizes free open‐source software (FOSS) solutions that work on multiple computer platforms; it builds on recent advances in open‐source geospatial software and mobile computing.

4.
5.
6.
7.
8.
Ashkenazy H, Unger R, Kliger Y. Proteins. 2009;74(3):545-555
The main objective of correlated mutation analysis (CMA) is to predict intraprotein residue-residue interactions from sequence alone. Despite considerable progress in algorithms and computer capabilities, the performance of CMA methods remains quite low. Here we examine whether, and to what extent, the quality of CMA methods depends on the sequences that are included in the multiple sequence alignment (MSA). The results revealed a strong correlation between the number of homologs in an MSA and CMA prediction strength. Furthermore, although many current methods include only orthologs in the MSA, we found that it is beneficial to include both orthologs and paralogs. Remarkably, even remote homologs contribute to the improved accuracy. Based on our findings we put forward an automated data collection procedure, with a minimal coverage of 50% between the query protein and its orthologs and paralogs. This procedure improves accuracy even in the absence of manual curation. In this era of massive sequencing and exploding sequence data, our results suggest that correlated mutation-based methods have not reached their inherent performance limitations and that the role of CMA in structural biology is far from being fulfilled.
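The 50% coverage filter described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names and the convention of pre-aligned sequences with `-` for gaps are assumptions.

```python
# Sketch of the automated MSA curation step described above: keep every
# homolog (ortholog or paralog alike) whose alignment covers at least 50%
# of the query's residues. Sequences are assumed pre-aligned to equal
# length, with '-' marking gaps. Names are illustrative, not from the paper.

def coverage(query: str, homolog: str) -> float:
    """Fraction of the query's residues that are aligned to a residue
    (non-gap in both sequences) in the homolog."""
    aligned = sum(1 for q, h in zip(query, homolog) if q != "-" and h != "-")
    query_len = sum(1 for q in query if q != "-")
    return aligned / query_len

def filter_msa(query: str, homologs: list[str], min_cov: float = 0.5) -> list[str]:
    """Keep homologs meeting the minimum coverage threshold."""
    return [h for h in homologs if coverage(query, h) >= min_cov]

query = "MKT-LLVA"
homologs = ["MKTALLVA", "M---LL--", "-KTALLV-"]
kept = filter_msa(query, homologs)   # the fragment "M---LL--" is dropped
```

Applying the same cutoff to both orthologs and paralogs, as the abstract recommends, simply means building `homologs` from both sets before filtering.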

9.
Over recent years, a number of initiatives have proposed standard reporting guidelines for functional genomics experiments. Associated with these are data models that may be used as the basis of the design of software tools that store and transmit experiment data in standard formats. Central to the success of such data handling tools is their usability. Successful data handling tools are expected to yield benefits in time saving and in quality assurance. Here, we describe the collection of datasets that conform to the recently proposed data model for plant metabolomics known as ArMet (architecture for metabolomics) and illustrate a number of approaches to robust data collection that have been developed in collaboration between software engineers and biologists. These examples also serve to validate ArMet from the data collection perspective by demonstrating that a range of software tools, supporting data recording and data upload to central databases, can be built using the data model as the basis of their design.

10.
11.
12.
13.
14.
15.
16.
17.

18.
To increase the efficiency of diffraction data collection for protein crystallographic studies, an automated system designed to store frozen protein crystals, mount them sequentially, align them to the X-ray beam, collect complete data sets, and return the crystals to storage has been developed. Advances in X-ray data collection technology including more brilliant X-ray sources, improved focusing optics, and faster-readout detectors have reduced diffraction data acquisition times from days to hours at a typical protein crystallography laboratory [1,2]. In addition, the number of high-brilliance synchrotron X-ray beam lines dedicated to macromolecular crystallography has increased significantly, and data collection times at these facilities can be routinely less than an hour per crystal. Because the number of protein crystals that may be collected in a 24 hr period has substantially increased, unattended X-ray data acquisition, including automated crystal mounting and alignment, is a desirable goal for protein crystallography. The ability to complete X-ray data collection more efficiently should impact a number of fields, including the emerging structural genomics field [3], structure-directed drug design, and the newly developed screening by X-ray crystallography [4], as well as small molecule applications.

19.
A wide variety of information or ‘metadata’ is required when undertaking dendrochronological sampling. Traditionally, researchers record observations and measurements in field notebooks and/or on paper recording forms, and use digital cameras and hand-held GPS devices to capture images and record locations. In the lab, field notes are often manually entered into spreadsheets or personal databases, which are then sometimes linked to images and GPS waypoints. This process is both time consuming and prone to human and instrument error. Specialised hardware technology exists to marry these data sources, but costs can be prohibitive for small-scale operations (>$2000 USD). Such systems often include proprietary software that is tailored to very specific needs and might require a high level of expertise to use. We report on the successful testing and deployment of a dendrochronological field data collection system utilising affordable off-the-shelf devices ($100–300 USD). The method builds upon established open source software that has been widely used in developing countries for public health projects as well as to assist in disaster recovery operations. It includes customisable forms for digital data entry in the field, and marries accurate GPS locations with geotagged photographs (with possible extensions to other measuring devices via Bluetooth) into structured data fields that are easy to learn and operate. Digital data collection is less prone to human error and efficiently captures a range of important metadata. In our experience, the hardware proved field-worthy in terms of size, ruggedness, and dependability (e.g., battery life). The system integrates directly with the Tellervo software to both create forms and populate the database, providing end users with the ability to tailor the solution to their particular field data collection needs.
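The structured records such a mobile form produces can be sketched as below. This is a hypothetical illustration of the idea of storing form fields, GPS fix, and photo reference together; the field names are invented and do not reflect the Tellervo or ODK schema.

```python
# Illustrative record combining field-form entries, a GPS fix, and a
# geotagged photo reference in one structured unit, so nothing has to be
# married up manually back in the lab. Field names are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class SampleRecord:
    sample_id: str
    species: str
    latitude: float
    longitude: float
    photo_file: str
    notes: str = ""

rec = SampleRecord("TR-001", "Quercus robur", 52.41, -4.08,
                   "IMG_0042.jpg", "Full core, 2 radii")
print(json.dumps(asdict(rec)))   # serialise for upload to a central database
```

Because each record is self-contained, a sync step can upload it wholesale rather than reconciling separate notebooks, waypoint files, and photo folders.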

20.
Recently, the use of mobile technologies in ecological momentary assessments (EMAs) and interventions has made it easier to collect data suitable for intraindividual variability studies in the medical field. Nevertheless, especially when self-reports are used during the data collection process, it is difficult to balance data quality against the burden placed on the subject. In this paper, we address this problem for a specific EMA setting that aims to submit a demanding task to subjects at high/low values of a self-reported variable. We adopt a dynamic approach inspired by control chart methods and design optimization techniques to obtain an EMA triggering mechanism for data collection that considers the individual variability of both the self-reported variable and adherence. We test the algorithm in both a simulation setting and with real, large-scale data from a longitudinal tinnitus study. A Wilcoxon signed-rank test shows that the algorithm tends to have both a higher F1 score and higher utility than a random schedule and a rule-based algorithm with static thresholds, which are the current state-of-the-art approaches. In conclusion, the algorithm is shown to be effective in balancing data quality against the burden placed on the participants, especially in studies where data collection is impacted by adherence.
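The control-chart idea behind such a trigger can be sketched as follows. This is a minimal illustration of the general technique, not the authors' algorithm: the function name, the window-free running statistics, and the `k = 2` limit are assumptions, and the adherence-aware optimization described in the abstract is omitted.

```python
# Minimal control-chart-style EMA trigger: prompt the demanding task only
# when the new self-report falls outside individualised control limits
# (mean ± k·sd) estimated from that subject's own history. Parameters and
# names are illustrative, not taken from the paper.
from statistics import mean, stdev

def should_trigger(history: list[float], new_value: float, k: float = 2.0) -> bool:
    """True when new_value lies outside the subject's personal control limits."""
    if len(history) < 3:          # too little data to estimate limits yet
        return False
    mu, sd = mean(history), stdev(history)
    return abs(new_value - mu) > k * sd

reports = [4.0, 5.0, 4.5, 5.5, 4.8]   # a subject's past self-reports
print(should_trigger(reports, 9.0))   # far outside the usual range -> True
print(should_trigger(reports, 5.0))   # within the usual range -> False
```

Personalising the limits per subject is what distinguishes this from the static-threshold baseline the paper compares against, where every participant shares one fixed cutoff.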


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号