Similar Articles
20 similar articles found (search time: 15 ms)
1.
Why is Real-World Visual Object Recognition Hard?
Progress in understanding the brain mechanisms underlying vision requires the construction of computational models that not only emulate the brain's anatomy and physiology, but ultimately match its performance on visual tasks. In recent years, “natural” images have become popular in the study of vision and have been used to show apparently impressive progress in building such models. Here, we challenge the use of uncontrolled “natural” images in guiding that progress. In particular, we show that a simple V1-like model—a neuroscientist's “null” model, which should perform poorly at real-world visual object recognition tasks—outperforms state-of-the-art object recognition systems (biologically inspired and otherwise) on a standard, ostensibly natural image recognition test. As a counterpoint, we designed a “simpler” recognition test to better span the real-world variation in object pose, position, and scale, and we show that this test correctly exposes the inadequacy of the V1-like model. Taken together, these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction. Instead, we reexamine what it means for images to be natural and argue for a renewed focus on the core problem of object recognition—real-world image variation.

2.
Studying the cross talk between nonpathogenic organisms and their mammalian hosts represents an experimental challenge because these interactions are typically subtle and the microbial societies that associate with mammalian hosts are very complex and dynamic. A large, functionally stable, climax community of microbes is maintained in the murine and human gastrointestinal tracts. This open ecosystem exhibits not only regional differences in the composition of its microbiota but also regional differences in the differentiation programs of its epithelial cells and in the spatial distribution of its component immune cells. A key experimental strategy for determining whether “nonpathogenic” microorganisms actively create their own regional habitats in this ecosystem is to define cellular function in germ-free animals and then evaluate the effects of adding single or several microbial species. This review focuses on how gnotobiotics—the study of germ-free animals—has been and needs to be used to examine how the gastrointestinal ecosystem is created and maintained. Areas discussed include the generation of simplified ecosystems by using genetically manipulatable microbes and hosts to determine whether components of the microbiota actively regulate epithelial differentiation to create niches for themselves and for other organisms; the ways in which gnotobiology can help reveal collaborative interactions among the microbiota, epithelium, and mucosal immune system; and the ways in which gnotobiology is and will be useful for identifying host and microbial factors that define the continuum between nonpathogenic and pathogenic. A series of tests of microbial contributions to several pathologic states, using germ-free and ex-germ-free mice, is proposed.

3.
The human genetics community needs robust protocols that enable secure sharing of genomic data from participants in genetic research. Beacons are web servers that answer allele-presence queries—such as “Do you have a genome that has a specific nucleotide (e.g., A) at a specific genomic position (e.g., position 11,272 on chromosome 1)?”—with either “yes” or “no.” Here, we show that individuals in a beacon are susceptible to re-identification even if the only data shared include presence or absence information about alleles in a beacon. Specifically, we propose a likelihood-ratio test of whether a given individual is present in a given genetic beacon. Our test is not dependent on allele frequencies and is the most powerful test for a specified false-positive rate. Through simulations, we showed that in a beacon with 1,000 individuals, re-identification is possible with just 5,000 queries. Relatives can also be identified in the beacon. Re-identification is possible even in the presence of sequencing errors and variant-calling differences. In a beacon constructed with 65 European individuals from the 1000 Genomes Project, we demonstrated that it is possible to detect membership in the beacon with just 250 SNPs. With just 1,000 SNP queries, we were able to detect the presence of an individual genome from the Personal Genome Project in an existing beacon. Our results show that beacons can disclose membership and implied phenotypic information about participants and do not protect privacy a priori. We discuss risk mitigation through policies and standards such as not allowing anonymous pings of genetic beacons and requiring minimum beacon sizes.
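The membership test described above can be illustrated with a toy likelihood-ratio computation. The sketch below is only a simplified stand-in, assuming hypothetical per-query "yes" probabilities under the null (target not in the beacon) and alternative (target in the beacon) hypotheses; the published statistic instead derives these probabilities from the beacon size and a model of rare-allele sharing, and the resulting score is compared with a threshold chosen for a specified false-positive rate.

```python
import math

def beacon_llr(responses, p_null, p_alt):
    """Toy log likelihood-ratio for beacon membership.

    responses : list of 1/0 beacon answers ("yes"/"no") to queries made only
                at positions where the target carries the alternate allele.
    p_null    : assumed probability of a "yes" if the target is NOT in the beacon.
    p_alt     : assumed probability of a "yes" if the target IS in the beacon.
    Returns the log likelihood ratio; large positive values favour membership.
    """
    llr = 0.0
    for x in responses:
        if x:
            llr += math.log(p_alt / p_null)
        else:
            llr += math.log((1 - p_alt) / (1 - p_null))
    return llr

# Example: 38 "yes" answers out of 40 queries, with illustrative probabilities.
answers = [1] * 38 + [0] * 2
print(beacon_llr(answers, p_null=0.6, p_alt=0.97))  # positive, i.e. favours membership
```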

4.
Contrary to the perception of many researchers that the recent invasion of chikungunya (CHIK) in the Western Hemisphere marked the first episode in history, a recent publication reminded them that CHIK had prevailed in the West Indies and southern regions of the United States from 1827–1828 under the guise of “dengue” (DEN), and that many old outbreaks of so-called “dengue” actually represented the CHIK cases erroneously identified as “dengue.” In hindsight, this confusion was unavoidable, given that the syndromes of the two diseases—transmitted by the same mosquito vector in urban areas—are very similar, and that specific laboratory-based diagnostic techniques for these diseases did not exist prior to 1940. While past reviewers reclassified problematic “dengue” outbreaks as CHIK, primarily based on manifestation of arthralgia as a marker of CHIK, they neither identified the root cause of the alleged misdiagnosis nor did they elaborate on the negative consequences derived from it. This article presents a reconstructed history of the genesis of the clinical definition of dengue by emphasizing problems with the definition, subsequent confusion with CHIK, and the ways in which physicians dealt with the variation in dengue-like (“dengue”) syndromes. Then, the article identifies in those records several factors complicating reclassification, based on current practice and standards. These factors include terms used for characterizing joint problems, style of documenting outbreak data, frequency of manifestation of arthralgia, possible involvement of more than one agent, and occurrence of the principal vector. The analysis of those factors reveals that while some of the old “dengue” outbreaks, including the 1827–1828 outbreaks in the Americas, are compatible with CHIK, similar reclassification of other “dengue” outbreaks to CHIK is difficult because of a combination of the absence of pathognomonic syndrome in these diseases and conflicting background information.

5.
A review is presented of ten years' experience with the differential diagnosis of oliguria, utilizing the standard tests of renal function with the addition of the phenolsulfonphthalein excretion and urinary chloride measurements. The histories of 60 patients seen in consultation because of 24-hour urinary volume of less than 400 ml were studied in order to clarify the value of these tests. Particular attention was given to the postoperative “dilution state,” the oliguria of which tends to mimic that of “acute tubular necrosis.”

In only 25 per cent of the 60 cases was “acute tubular necrosis” responsible for the oliguria. In the remaining 75 per cent of patients, oliguria was due either to the effects of simple dehydration without tubular damage, or to tubular dysfunction on a physiologic rather than an organic basis. Thus, three out of four patients with oliguria required aggressive and specific fluid-electrolyte therapy, often with the intensive use of potassium. One out of four required the opposite in therapy—controlled dehydration without added potassium and, on occasion, peritoneal or extracorporeal dialysis, in order to allow six to ten days for tubular repair.

6.
A program was carried out to test the value and feasibility of performing blood sugar screening tests in conjunction with a community-wide chest x-ray survey. A simple, rapid and inexpensive blood sugar screening test requiring only about two drops of blood from the finger tip was used. Among 14,681 persons who stated that they did not have diabetes, 191 or 1.3 per cent had “positive” results in screening tests. The number of persons referred to their physicians for diagnostic study because of the possibility of diabetes was reduced from 191 to 127 by means of a more specific secondary screening test.

Diagnostic information with regard to 102 of the 127 persons referred to their physicians was supplied by the physicians. In 58 (0.40 per cent of the 14,681 participants) the diagnosis was diabetes—newly discovered as a result of referral by the survey.

Some of the persons referred to their physicians because of suspicion of diabetes, while not then diabetic, might be considered prediabetic. The appearance of diabetes in this group during the year following the survey was therefore investigated. Glucose tolerance tests were performed for 32 of the diabetes suspects whose diagnosis immediately following the survey was either “not diabetic” or unknown. In 15 cases the glucose tolerance curves were indicative of diabetes, in seven cases questionable and in ten cases normal.

The 58 persons diagnosed immediately after the survey plus the 15 found to have “diabetic” glucose tolerance curves a year later made a total of 73 newly discovered diabetics. This is a discovery rate of 0.50 per cent among the 14,681 participants in the survey.

The success of this combined diabetes detection and chest x-ray survey suggests that other screening procedures should be studied to determine the desirability of adding them to similar community-wide case-finding programs.

7.
Objective: To find out how accurately two point of care test systems—CoaguChek Mini and TAS PT-NC (RapidPointCoag)—display international normalised ratios (INRs).
Design: Comparison of the INRs from the two systems with a “true” INR on a conventional manual test from the same sample of blood.
Setting: 10 European Concerted Action on Anticoagulation centres.
Participants: 600 patients on long term dosage of warfarin.
Main outcome measures: Comparable results between the different methods.
Results: The mean displayed INR differed by 21.3% between the two point of care test monitoring systems. The INR on one system was 15.2% higher, on average, than the true INR, but on the other system the INR was 7.1% lower. The percentage difference between the mean displayed INR and the true INR at individual centres varied considerably with both systems.
Conclusions: Improved international sensitivity index calibration of point of care test monitors by their manufacturers is needed, and better methods of quality control of individual instruments by their users are also needed.
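For readers unfamiliar with the quantities being compared, the sketch below shows the standard INR calculation from prothrombin times and the international sensitivity index (ISI) mentioned in the conclusions, together with the signed percentage deviation of a displayed INR from a true INR. The numbers are illustrative only and are not taken from the study data.

```python
def inr(pt_patient_s, mnpt_s, isi):
    """International normalised ratio from the patient's prothrombin time,
    the mean normal prothrombin time (both in seconds), and the reagent's ISI."""
    return (pt_patient_s / mnpt_s) ** isi

def percent_difference(displayed, true):
    """Signed deviation of a monitor's displayed INR from the true INR."""
    return 100.0 * (displayed - true) / true

# Illustrative numbers only:
true_inr = inr(pt_patient_s=30.0, mnpt_s=12.0, isi=1.0)   # reference manual method
print(round(true_inr, 2))                                  # 2.5
print(round(percent_difference(2.88, true_inr), 1))        # 15.2 (monitor reads high)
print(round(percent_difference(2.32, true_inr), 1))        # -7.2 (monitor reads low)
```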

8.
Investigators have failed to show the usefulness of screening electrolyte—sodium, potassium, chloride and bicarbonate—blood urea nitrogen and glucose levels. In spite of this, we observed that this practice continues to be widely used at our university medical center. Using a form of consensus analysis, we examined the records of 301 admissions to the medicine service to determine whether laboratory tests were done for diagnostic or screening purposes and whether screening test results led to changes in patient management. Of the 1,764 tests done, 716 (40.6%) were for screening purposes. Only 2 (0.3%) screening test abnormalities led to any therapeutic changes, and many false-positive tests led to unnecessary diagnostic retesting.

9.
During metastasis, tumor cells may be copying a program that is executed by hematopoietic stem cells during development.

That cancer is development gone awry is not a new concept. Most of the “hallmarks” ascribed to cancer—proliferation, invasion and induction of blood vessel growth—also occur during organogenesis and development. Therefore, tumors are not necessarily learning new tricks during their development, but how about when they metastasize? In colonizing a new organ, often with some degree of specificity, tumor cells may simply be copying a program that is executed during development by hematopoietic stem cells (HSCs)—the stem cells that ultimately generate all of the cells in our blood and maintain its homeostasis. One family of cells generated by HSCs—leukocytes—is the focus of the work by Coussens and Pollard (2012). These two scientists have woven together several studies that revolutionized the way we think of immune cells. As pointed out by the investigators (whose respective laboratories are responsible for much of the seminal work on this subject), immune cells also have a variety of trophic functions, and it is these functions that are used rationally during development, and recklessly during tumor growth.

This leads us back to metastasis. There is so much to learn about why a tumor travels from one organ to another, how it does so, and the manner by which it adapts to and ultimately flourishes (or fails) in a foreign microenvironment. And as stated above, immune cell precursors, HSCs, do the same. In the mouse, HSCs have originated in one tissue (the dorsal aorta), traveled to another (the placenta) via the circulation, and matured somewhere else (the liver)—all before birth. Finally, HSCs make their way to the bone marrow, where they reside postnatally. Specialized niches in the bone marrow are thought to mediate HSC dormancy as a means to preserve the “stemness” of this population, and there are mechanisms in place that allow these cells to rapidly exit these environs and proliferate in response to injury. Therefore, it should not come as a surprise that a common site where micrometastases are found is the bone marrow for many cancers (including that of the breast).

Uncovering whether the same niches that control HSC expansion in the bone marrow are also responsible for maintaining quiescence of tumor cell populations is an exciting prospect, as is deciphering the precise components of these niches. Such work could explain the seemingly incongruous observation that despite an absence of clinically detectable disease, circulating tumor cells are present in the blood of post-treatment cancer patients sometimes even decades later! Perhaps the niches that regulate prolonged dormancy of tumors are dynamic and inhibit tumor proliferation while allowing them to mobilize periodically, much like for HSCs. It also stands to reason that loss of the same controls that prevent HSC expansion until systemic damage occurs could awaken dormant tumors.

Shiozawa et al. (2011) have demonstrated that prostate cancer cells do in fact compete with HSCs for niches within the bone marrow, and that tumor cells are mobilized from HSC niches by similar mechanisms as for HSCs. Whether this is the case for other cancers and whether these similarities can be exploited therapeutically remain to be seen.

So what more is there to be learned about immune cells? By furthering our understanding of how solid cancers mimic and hijack components of our immune system, we may not “cure” cancer, but we very well may uncover a means to suppress some cancers into a state of permanent dormancy.

10.
Antibodies play a central role in prophylaxis against many infectious agents. While neutralization is a primary function of antibodies, the Fc- and complement-dependent activities of these multifunctional proteins may also be critical in their ability to provide protection against most viruses. Protection against viral pathogens in vivo is complex, and while virus neutralization—the ability of antibody to inactivate virus infectivity, often measured in vitro—is important, it is often only a partial contributor in protection. The rapid fluorescent focus inhibition test (RFFIT) remains the “gold standard” assay to measure rabies virus–neutralizing antibodies. In addition to neutralization, the rabies-specific antigen-binding activity of antibodies may be measured through enzyme-linked immunosorbent assays (ELISAs), as well as other available methods. For any disease, in selecting the appropriate assay(s) to use to assess antibody titers, assay validation and how they are interpreted are important considerations—but for a fatal disease like rabies, they are of paramount importance. The innate limitations of a one-dimensional laboratory test for rabies antibody measurement, as well as the validation of the method of choice, must be carefully considered in the selection of an assay method and for the interpretation of results that might be construed as a surrogate of protection.

11.
Gordon R. Cumming, Rhoda Keynes. CMAJ. 1967;96(18):1262-1269.
The Canadian Association for Health, Physical Education and Recreation fitness test (CAHPER test) composed of six items was compared to two laboratory tests of endurance fitness, physical working capacity at a minute pulse rate of 170 (PWC170) and maximum oxygen uptake (Vo2 max.) in over 500 Winnipeg school children of both sexes aged 6 to 17 years. CAHPER test results were similar to the national average published by CAHPER in a test booklet. Correlation coefficients (r) of Vo2 max. for boys with the CAHPER tests were: sit-ups .42, broad jump .69, shuttle run .50, arm hang .43, 50-yard dash .60, 300-yard run .65; for girls the r values were about half the values for the boys. Much of the correlation between CAHPER tests and Vo2 max. or PWC170 depended on the association of each test with body size. When multiple correlations were obtained including surface area as the first variable, the only significant factor correlating with the endurance tests was the arm hang; none of the other tests showed a significant correlation. “Physical fitness” is task-specific, so that a subject's position in the scoring scale of a fitness test depends entirely upon the test. The CAHPER test for physical fitness shows little or no correlation with standard laboratory measures of endurance in average children.
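The body-size effect reported here can be illustrated numerically. The sketch below uses synthetic data (not the study's measurements) and approximates the "multiple correlation with surface area entered first" by a partial correlation: both a fitness item and Vo2 max are residualized on body surface area before correlating, which removes the size-driven association.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z
    (e.g., a CAHPER item vs. Vo2 max, controlling for body surface area)."""
    x, y, z = map(np.asarray, (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic illustration: both variables scale with body size, so the raw
# correlation is high but the partial correlation is near zero.
rng = np.random.default_rng(0)
size = rng.normal(1.3, 0.2, 500)                 # body surface area, m^2
vo2max = 2.0 * size + rng.normal(0, 0.15, 500)   # L/min
jump = 1.2 * size + rng.normal(0, 0.15, 500)     # standing broad jump, m
print(round(np.corrcoef(jump, vo2max)[0, 1], 2))   # raw r, inflated by size
print(round(partial_corr(jump, vo2max, size), 2))  # near zero once size is removed
```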

12.
Epithelial tissues respond to a wide variety of environmental and genotoxic stresses. As an adaptive mechanism, cells can deviate from their natural paths to acquire new identities, both within and across lineages. Under extreme conditions, epithelial tissues can utilize “shape-shifting” mechanisms whereby they alter their form and function at a tissue-wide scale. Mounting evidence suggests that in order to acquire these alternate tissue identities, cells follow a core set of “tissue logic” principles based on developmental paradigms. Here, we review the terminology and the concepts that have been put forward to describe cell plasticity. We also provide insights into various cell intrinsic and extrinsic factors, including genetic mutations, inflammation, microbiota, and therapeutic agents that contribute to cell plasticity. Additionally, we discuss recent studies that have sought to decode the “syntax” of plasticity—i.e., the cellular and molecular principles through which cells acquire new identities in both homeostatic and malignant epithelial tissues—and how these processes can be manipulated for developing novel cancer therapeutics.

13.
Improvements to particle tracking algorithms are required to effectively analyze the motility of biological molecules in complex or noisy systems. A typical single particle tracking (SPT) algorithm detects particle coordinates for trajectory assembly. However, particle detection filters fail for data sets with low signal-to-noise levels. When tracking molecular motors in complex systems, standard techniques often fail to separate the fluorescent signatures of moving particles from background signal. We developed an approach to analyze the motility of kinesin motor proteins moving along the microtubule cytoskeleton of extracted neurons using the Kullback-Leibler divergence to identify regions where there are significant differences between models of moving particles and background signal. We tested our software on both simulated and experimental data and found a noticeable improvement in SPT capability and a higher identification rate of motors as compared with current methods. This algorithm, called Cega, for “find the object,” produces data amenable to conventional blob detection techniques that can then be used to obtain coordinates for downstream SPT processing. We anticipate that this algorithm will be useful for those interested in tracking moving particles in complex in vitro or in vivo environments.
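As a rough illustration of the idea of scoring image regions by how much they diverge from a background model, the sketch below computes a per-pixel Kullback-Leibler divergence between a local intensity histogram and a global background histogram. This is an assumption-laden toy, not the published Cega implementation, which compares explicit models of moving particles and background signal.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete distributions (e.g., intensity histograms)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def kl_map(frame, background, window=7, bins=16):
    """Score each interior pixel of `frame` by the KL divergence between its
    local intensity histogram and the histogram of `background`; high scores
    mark regions that look unlike background (candidate particles)."""
    lo = float(min(frame.min(), background.min()))
    hi = float(max(frame.max(), background.max()))
    q, _ = np.histogram(background, bins=bins, range=(lo, hi))
    half = window // 2
    scores = np.zeros(frame.shape, dtype=float)
    for i in range(half, frame.shape[0] - half):
        for j in range(half, frame.shape[1] - half):
            patch = frame[i - half:i + half + 1, j - half:j + half + 1]
            p, _ = np.histogram(patch, bins=bins, range=(lo, hi))
            scores[i, j] = kl_divergence(p, q)
    return scores

# Toy frame: noisy background plus one bright spot that stands out in the KL map.
rng = np.random.default_rng(1)
bg = rng.normal(100, 5, (64, 64))
frame = bg.copy()
frame[30:34, 30:34] += 40
print(np.unravel_index(np.argmax(kl_map(frame, bg)), frame.shape))
```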

14.
Background: Leptospirosis has globally significant human mortality and morbidity, yet estimating the clinical and public health burden of leptospirosis is challenging because timely diagnosis remains limited. The goal of the present study was to evaluate leptospirosis undercounting by current standard methods in both clinical and epidemiological study settings.

Methodology/Principal findings: A prospective hospital-based study was conducted in multiple hospitals in Sri Lanka from 2016 to 2019. Culture, whole blood, and urine samples were collected from clinically suspected leptospirosis cases and patients with undifferentiated fever. Analysis of biological samples from 1,734 subjects confirmed 591 (34.1%) cases as leptospirosis, and 297 (17.1%) were classified as “probable” leptospirosis cases. Whole blood quantitative PCR (qPCR) identified the most cases (322/540, 60%) but missed 40%. Cases missed by each method include: urine qPCR, 70% (153/220); acute sample microscopic agglutination test (MAT), 80% (409/510); paired serum sample MAT, 58% (98/170); and surveillance clinical case definition, 53% (265/496). qPCR of negative culture samples after six months of observation was of diagnostic value retrospectively but missed 58% of positives (109/353).

Conclusion: Leptospirosis disease burden estimates should consider the limitations of standard diagnostic tests. qPCR of multiple sample types should be used as a leading standard test for diagnosing acute leptospirosis.

15.
A brief and preliminary outline is given of a consecutive and continuing study of laboratory blood values in 12 of 48 “normal healthy” subjects, with only a few values reported here. The major emphasis is on obtaining blood from each subject over a 12-week period, repeated annually, in order to determine individual values and a chemical identification of each subject, with the anticipation that more information will become available concerning the meaning and limiting parameters of “normal” biologic values. Such a study is made possible by the application of modern advances in automation and the wide use of computers. It seems likely that some disorders can be discovered before they are clinically apparent, with the hope that consequent preventive measures and therapy may be more effective.

The few values presented represent only a small part of those yet to be obtained. Much more work is needed in this study of normal values, whose parameters must be further defined.

16.
Alzheimer’s disease (AD) is the most prevalent neurodegenerative disease and a worldwide health challenge. Different therapeutic approaches are being developed to reverse or slow the loss of affected neurons. Another plausible therapeutic approach that may complement these studies is to increase the survival of existing neurons by mobilizing the existing neural stem/progenitor cells (NSPCs) — i.e. “induce their plasticity” — to regenerate lost neurons despite the existing pathology and unfavorable environment. However, there is controversy about how NSPCs are affected by the unfavorable toxic environment during AD. In this review, we will discuss the use of stem cells in neurodegenerative diseases and in particular how NSPCs affect AD pathology and how neurodegeneration affects NSPCs. At the end of this review, we will discuss how zebrafish, as a useful model organism with extensive regenerative ability in the brain, might help to address the molecular programs needed for NSPCs to respond to neurodegeneration by enhanced neurogenesis.

17.
That more resourceful people share with the poor to mitigate inequality—egalitarian sharing—is well documented in behavioral science research. How inequality evolves as a result of egalitarian sharing is determined by the structure of “who gives whom”. While most prior experimental research investigates allocation of resources in dyads and groups, this paper extends the study of egalitarian sharing to networks, a more generalized structure of social interaction. An agent-based model is proposed to predict how actors, linked in networks, share their incomes with neighbors. A laboratory experiment with human subjects further shows that income distributions evolve to different states in different network topologies. Inequality is significantly reduced in networks where the very rich and the very poor are connected, so that income discrepancy is salient enough to motivate the rich to share their incomes with the poor. The study suggests that social networks make a difference in how egalitarian sharing influences the evolution of inequality.
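To make the modelling approach concrete, here is a minimal agent-based sketch of egalitarian sharing on a network. The sharing rule (each agent gives a fixed fraction of the income gap to every poorer neighbor), the transfer rate, and the star topology are illustrative assumptions rather than the paper's specification; the example only shows how a salient rich-poor link drives the income distribution toward equality.

```python
def gini(incomes):
    """Gini coefficient of an income list (0 = perfect equality)."""
    xs = sorted(incomes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

def share_round(incomes, neighbors, rate=0.1):
    """One synchronous round of sharing: each agent gives `rate` of the income
    gap to every poorer network neighbor. `neighbors[i]` lists i's neighbors."""
    delta = [0.0] * len(incomes)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            if incomes[i] > incomes[j]:
                t = rate * (incomes[i] - incomes[j])
                delta[i] -= t
                delta[j] += t
    return [x + d for x, d in zip(incomes, delta)]

# Star network: one rich hub connected to nine poor spokes, so the income gap
# is salient to the hub and sharing quickly reduces inequality.
incomes = [100.0] + [10.0] * 9
neighbors = [list(range(1, 10))] + [[0]] * 9
print(round(gini(incomes), 3))          # initial inequality
for _ in range(20):
    incomes = share_round(incomes, neighbors)
print(round(gini(incomes), 3))          # near zero after repeated sharing
```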

18.
The specificity of the first or “presumptive” portion of the USP rabbit pyrogen test was investigated by use of a new absolute standard of reference. The reference standard was a 0.9% sodium chloride solution prepared to be pyrogen-free. Details of the preparation were described. The hypothesis was explored that the temperature response of rabbits after intravenous injection of the standard solution was independent of exogenous pyrogen. Reactions observed among the rabbits in our colony allowed a classification of these animals ranging from “consistently reliable” to “consistently unreliable.” Details of the experimental results and implications for pyrogen testing are discussed. The recommendation was made that all rabbit test animals be “screened” in sham and actual tests before being used for pyrogen testing.

19.
20.
Genome-wide RNA expression data provide a detailed view of an organism's biological state; hence, a dataset measuring expression variation between genetically diverse individuals (eQTL data) may provide important insights into the genetics of complex traits. However, with data from a relatively small number of individuals, it is difficult to distinguish true causal polymorphisms from the large number of possibilities. The problem is particularly challenging in populations with significant linkage disequilibrium, where traits are often linked to large chromosomal regions containing many genes. Here, we present a novel method, Lirnet, that automatically learns a regulatory potential for each sequence polymorphism, estimating how likely it is to have a significant effect on gene expression. This regulatory potential is defined in terms of “regulatory features”—including the function of the gene and the conservation, type, and position of genetic polymorphisms—that are available for any organism. The extent to which the different features influence the regulatory potential is learned automatically, making Lirnet readily applicable to different datasets, organisms, and feature sets. We apply Lirnet both to the human HapMap eQTL dataset and to a yeast eQTL dataset and provide statistical and biological results demonstrating that Lirnet produces significantly better regulatory programs than other recent approaches. We demonstrate in the yeast data that Lirnet can correctly suggest a specific causal sequence variation within a large, linked chromosomal region. In one example, Lirnet uncovered a novel, experimentally validated connection between Puf3—a sequence-specific RNA binding protein—and P-bodies—cytoplasmic structures that regulate translation and RNA stability—as well as the particular causative polymorphism, a SNP in Mkt1, that induces the variation in the pathway.
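The notion of a learned "regulatory potential" can be made concrete with a toy example. The sketch below fits a plain logistic regression from a handful of hypothetical regulatory features (whether the SNP is nonsynonymous, a conservation score, promoter membership, distance to the transcription start site) to a probability-like score per polymorphism. The feature set, training labels, and model are illustrative assumptions; Lirnet itself learns the feature weights jointly with the regulatory programs rather than from pre-labeled causal SNPs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [is_nonsynonymous, conservation_score, in_promoter, distance_to_TSS_kb]
# Hypothetical training SNPs; label 1 = believed causal for an expression trait.
X = np.array([
    [1, 0.95, 0, 0.2],
    [0, 0.10, 0, 8.0],
    [0, 0.80, 1, 0.5],
    [0, 0.05, 0, 25.0],
    [1, 0.60, 0, 1.0],
    [0, 0.20, 1, 3.0],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def regulatory_potential(snp_features):
    """Probability-like score that a polymorphism affects expression,
    given its regulatory features (cf. Lirnet's learned potentials)."""
    return float(model.predict_proba(np.asarray(snp_features).reshape(1, -1))[0, 1])

print(round(regulatory_potential([1, 0.9, 0, 0.3]), 2))   # conserved coding SNP
print(round(regulatory_potential([0, 0.1, 0, 20.0]), 2))  # distant, unconserved SNP
```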

