Search Results

368,397 results for "Vision."
Sight unseen: an exploration of conscious and unconscious vision
Vision, more than any other sense, dominates our mental life. Our conscious visual experience of the world is so rich and detailed that we can hardly distinguish it from the real thing. But as Goodale and Milner make clear in their prize-winning book, Sight Unseen, our visual experience of the world is not all there is to vision. Some of the most important things that vision does for us never reach our consciousness at all. In this updated and extended new edition, Goodale and Milner explore one of the most extraordinary neurological cases of recent years--one that profoundly changed scientific views on the visual brain. It is the story of Dee Fletcher--a young woman who became blind to shape and form as a result of brain damage. Dee was left unable to recognize objects or even tell one simple geometric shape from another. As events unfolded, however, Goodale and Milner found that Dee wasn't in fact blind -- she just didn't know that she could see. They showed, for example, that Dee could reach out and grasp objects with amazing dexterity, despite being unable to perceive their shape, size, or orientation. Taking us on a journey into the unconscious brain, the two scientists who made this incredible discovery tell the amazing story of their work, and the surprising conclusion they were forced to reach. Written to be accessible to students and popular science readers, this book is a fascinating illustration of the power of the 'unconscious' mind.
Normal binocular vision
Binocular vision, i.e. where both eyes are used together, is a fundamental component of human sight. It also aids hand-eye co-ordination, and the perception of the self within the environment. Clinical anomalies pose a wide range of problems to the sufferer, but normal binocular operation must first be understood before the eye specialist can assess and treat dysfunctions. This is a major new textbook for students of optometry, orthoptics and ophthalmology, and also of psychology. Chapters span such key topics as binocular summation, fusion, the normal horopter, anatomy of the extra-ocular muscles, oculomotor control, binocular integration and depth perception. Fully illustrated throughout, the book includes self-assessment exercises at the end of each chapter, and sample experiments in binocular vision functioning.
The pirate of kindergarten
Ginny's eyes play tricks on her, making her see everything double, but when she goes to vision screening at school and discovers that not everyone sees this way, she learns that her double vision can be cured.
Scene Text Detection and Recognition: The Deep Learning Era
With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has inevitably been influenced by this wave of revolution, consequently entering the era of deep learning. In recent years, the community has witnessed substantial advances in mindset, methodology, and performance. This survey summarizes and analyzes the major changes and significant progress in scene text detection and recognition in the deep learning era. In this article, we aim to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; (3) look ahead to future trends. Specifically, we emphasize the dramatic differences brought by deep learning and the remaining grand challenges. We expect this review to serve as a reference for researchers in this field. Related resources are also collected in our GitHub repository ( https://github.com/Jyouhou/SceneTextPapers ).
New development in robot vision
"The field of robotic vision has advanced dramatically recently with the development of new range sensors. Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related manipulation motion models. For autonomous robot navigation, different vision-based localization and tracking strategies and algorithms are discussed. New approaches using probabilistic analysis for robot navigation, online learning of vision-based robot control, and 3D motion estimation via intensity differences from a monocular camera are described. This collection will be beneficial to graduate students, researchers, and professionals working in the area of robotic vision."--back cover.
Case 37-2018: A 23-Year-Old Woman with Vision Loss
Case 37-2018: A 23-Year-Old Woman with Vision Loss (N Engl J Med 2018;379:2152-2159). In the title (page 2152), the Case designation should have been “Case 37-2018,” rather than “Case 36-2018.” The article is correct at NEJM.org. . . .
Visualizing Deep Convolutional Neural Networks Using Natural Pre-images
Image representations, from SIFT and bag-of-visual-words to convolutional neural networks (CNNs), are a crucial component of almost all computer vision systems. However, our understanding of them remains limited. In this paper, we study several landmark representations, both shallow and deep, through a number of complementary visualization techniques. These visualizations are based on the concept of the “natural pre-image”, namely a natural-looking image whose representation has some notable property. We study three such visualizations in particular: inversion, in which the aim is to reconstruct an image from its representation; activation maximization, in which we search for patterns that maximally stimulate a representation component; and caricaturization, in which the visual patterns that a representation detects in an image are exaggerated. We pose these as a regularized energy-minimization framework and demonstrate its generality and effectiveness. In particular, we show that this method can invert representations such as HOG more accurately than recent alternatives while also being applicable to CNNs. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.
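The activation-maximization visualization described in this abstract can be illustrated on a linear toy model. The following is a minimal sketch under assumed synthetic data (a random linear map standing in for a real CNN layer), not the paper's implementation:

```python
import numpy as np

# Hypothetical toy "representation": a fixed random linear map
# Phi(x) = W @ x. Activation maximization seeks an input that maximally
# stimulates one component k, subject to a simple norm regularizer:
#     x* = argmax_x  (W @ x)[k] - (lam / 2) * ||x||^2
# It is solved here by plain gradient ascent, mirroring how the
# regularized energy-minimization framework is applied to deep networks.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # 8 components over 16-dim "images"

def activation_maximization(W, k, lam=0.1, lr=0.05, steps=500):
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        grad = W[k] - lam * x  # gradient of (W @ x)[k] - lam/2 * ||x||^2
        x += lr * grad
    return x

x_star = activation_maximization(W, k=3)
# For this linear toy the iterate converges toward W[3] / lam, so
# component 3 should respond far more strongly to x_star than the rest.
```

For a real CNN the gradient comes from backpropagation rather than a closed form, and the regularizer is typically richer (e.g. total variation), but the optimization loop has the same shape.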
Eyes
Explores the different parts of the eye and each part's specialized function.
VPR-Bench: An Open-Source Visual Place Recognition Evaluation Framework with Quantifiable Viewpoint and Appearance Change
Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance conditions and viewpoint changes and with computational constraints. VPR is related to the concepts of localisation, loop closure, image retrieval and is a critical component of many autonomous navigation systems ranging from autonomous vehicles to drones and computer vision systems. While the concept of place recognition has been around for many years, VPR research has grown rapidly as a field over the past decade due to improving camera hardware and its potential for deep learning-based techniques, and has become a widely studied topic in both the computer vision and robotics communities. This growth however has led to fragmentation and a lack of standardisation in the field, especially concerning performance evaluation. Moreover, the notion of viewpoint and illumination invariance of VPR techniques has largely been assessed qualitatively and hence ambiguously in the past. In this paper, we address these gaps through a new comprehensive open-source framework for assessing the performance of VPR techniques, dubbed “VPR-Bench”. VPR-Bench (Open-sourced at: https://github.com/MubarizZaffar/VPR-Bench ) introduces two much-needed capabilities for VPR researchers: firstly, it contains a benchmark of 12 fully-integrated datasets and 10 VPR techniques, and secondly, it integrates a comprehensive variation-quantified dataset for quantifying viewpoint and illumination invariance. We apply and analyse popular evaluation metrics for VPR from both the computer vision and robotics communities, and discuss how these different metrics complement and/or replace each other, depending upon the underlying applications and system requirements. 
Our analysis reveals that no universal SOTA VPR technique exists, since: (a) state-of-the-art (SOTA) performance is achieved by 8 of the 10 techniques on at least one dataset, and (b) the SOTA technique in one community does not necessarily yield SOTA performance in the other, given the differences in datasets and metrics. Furthermore, we identify key open challenges, since: (c) all 10 techniques suffer greatly in perceptually-aliased and less-structured environments, (d) all techniques suffer from viewpoint variance, where lateral change has less effect than 3D change, and (e) directional illumination change has more adverse effects on matching confidence than uniform illumination change. We also present detailed meta-analyses regarding the roles of varying ground truths, platforms, application requirements and technique parameters. Finally, VPR-Bench provides a unified implementation to deploy these VPR techniques, metrics and datasets, and is extensible through templates.
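The evaluation loop at the heart of such a VPR benchmark can be sketched in miniature. This is a hypothetical illustration on synthetic descriptors, not the actual VPR-Bench API: each query descriptor is matched to its nearest reference descriptor, and Recall@1 is the fraction of queries whose top match is the ground-truth place.

```python
import numpy as np

# Synthetic map and queries (assumed data for illustration only).
rng = np.random.default_rng(1)
n_places, n_queries, dim = 50, 20, 32
reference = rng.standard_normal((n_places, dim))  # one descriptor per mapped place
ground_truth = rng.integers(0, n_places, size=n_queries)
# Queries are noisy copies of their true place's descriptor, simulating
# mild appearance change between the mapping and query traverses.
queries = reference[ground_truth] + 0.1 * rng.standard_normal((n_queries, dim))

def recall_at_1(queries, reference, ground_truth):
    # Euclidean nearest neighbour over the whole reference map.
    dists = np.linalg.norm(queries[:, None, :] - reference[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == ground_truth).mean())
```

With mild synthetic noise, Recall@1 sits at or near 1.0; increasing the perturbation (stronger appearance or viewpoint change) drives it down, which is exactly the axis the benchmark's variation-quantified datasets probe.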