Catalogue Search | MBRL
Explore the vast range of titles available.
143,955 result(s) for "Computer Imaging"
Simulating innovation : computer-based tools for rethinking innovation
Christopher Watts and Nigel Gilbert explore the generation, diffusion and impact of innovations, which can now be studied using computer simulations. Agent-based simulation models can be used to explain the innovation that emerges from interactions among complex, adaptive, diverse networks of firms, people, technologies, practices and resources. This book provides a critical review of recent advances in agent-based modelling and other forms of the simulation of innovation. Elements explored include: diffusion of innovations, social networks, organisational learning, science models, adopting and adapting, and technological evolution and innovation networks. Many of the models featured in the book can be downloaded from the book's accompanying website. Bringing together simulation models from several innovation-related fields, this book will prove a fascinating read for academics and researchers in a wide range of disciplines, including: innovation studies, evolutionary economics, complexity science, organisation studies, social networks, and science and technology studies. Scholars and researchers in the areas of computer science, operational research and management science will also be interested in the uses of simulation models to improve the understanding of organisation.
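A minimal sketch of the kind of agent-based diffusion model the book surveys, in which agents adopt an innovation either spontaneously or by imitating adopting neighbours, Bass-style. This is a hypothetical toy, not code from the book's companion website; all parameter names and values are invented:

```python
import random

def simulate_diffusion(n_agents=100, n_neighbours=4, p_spontaneous=0.01,
                       p_imitate=0.3, steps=50, seed=42):
    """Toy agent-based diffusion on a ring lattice: each step, a
    non-adopter adopts spontaneously with small probability, or by
    imitation in proportion to its adopting neighbours."""
    rng = random.Random(seed)
    half = n_neighbours // 2
    neighbours = {i: [(i + d) % n_agents
                      for d in range(-half, half + 1) if d != 0]
                  for i in range(n_agents)}
    adopted = [False] * n_agents
    history = []  # cumulative adopters per step
    for _ in range(steps):
        for i in range(n_agents):
            if adopted[i]:
                continue
            peers = sum(adopted[j] for j in neighbours[i])
            if (rng.random() < p_spontaneous or
                    rng.random() < p_imitate * peers / len(neighbours[i])):
                adopted[i] = True
        history.append(sum(adopted))
    return history

curve = simulate_diffusion()
print(curve)  # cumulative adoption curve, non-decreasing over time
```

Varying `p_imitate` relative to `p_spontaneous` shifts the curve between a gradual linear rise and the classic S-shaped diffusion profile.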
Geometric and Topological Mesh Feature Extraction for 3D Shape Analysis
Three-dimensional surface meshes are the most common discrete representation of the exterior of a virtual shape. Extracting relevant geometric or topological features from them can simplify the way objects are looked at, help with their recognition, and facilitate description and categorization according to specific criteria. This book adopts the point of view of discrete mathematics, the aim of which is to propose discrete counterparts to concepts mathematically defined in continuous terms. It explains how standard geometric and topological notions of surfaces can be calculated and computed on a 3D surface mesh, as well as their use for shape analysis. Several applications are also detailed, demonstrating that each of them requires specific adjustments to fit with generic approaches. The book is intended not only for students, researchers and engineers in computer science and shape analysis, but also numerical geologists, anthropologists, biologists and other scientists looking for practical solutions to their shape analysis, understanding or recognition problems.
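Two of the standard notions the book discusses, the Euler characteristic and discrete Gaussian curvature as vertex angle deficits, can be computed directly on a triangle mesh and linked by the discrete Gauss-Bonnet theorem. A self-contained sketch (not the book's code), checked on a regular tetrahedron:

```python
import math
from itertools import combinations

def euler_characteristic(faces):
    """V - E + F for a triangle mesh given as vertex-index triples."""
    verts = {v for f in faces for v in f}
    edges = {frozenset(e) for f in faces for e in combinations(f, 2)}
    return len(verts) - len(edges) + len(faces)

def angle_deficits(points, faces):
    """Discrete Gaussian curvature at each vertex: 2*pi minus the sum
    of incident triangle angles (the angle deficit)."""
    def angle(a, b, c):  # interior angle at vertex a of triangle abc
        ab = [b[k] - a[k] for k in range(3)]
        ac = [c[k] - a[k] for k in range(3)]
        dot = sum(x * y for x, y in zip(ab, ac))
        nb = math.sqrt(sum(x * x for x in ab))
        nc = math.sqrt(sum(x * x for x in ac))
        return math.acos(dot / (nb * nc))
    deficits = {i: 2 * math.pi for i in range(len(points))}
    for i, j, k in faces:
        deficits[i] -= angle(points[i], points[j], points[k])
        deficits[j] -= angle(points[j], points[i], points[k])
        deficits[k] -= angle(points[k], points[i], points[j])
    return deficits

# regular tetrahedron: V=4, E=6, F=4, so chi = 2
pts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
chi = euler_characteristic(tris)
total = sum(angle_deficits(pts, tris).values())
print(chi, total / (2 * math.pi))  # Gauss-Bonnet: total deficit = 2*pi*chi
```

The total angle deficit equals 2*pi times the Euler characteristic for any closed mesh, which makes it a useful sanity check on extracted topological features.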
CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion
2024
Infrared and visible image fusion aims to provide an informative image by combining complementary information from different sensors. Existing learning-based fusion approaches attempt to construct various loss functions to preserve complementary features, while neglecting to discover the inter-relationship between the two modalities, leading to redundant or even invalid information in the fusion results. Moreover, most methods focus on strengthening the network with an increase in depth while neglecting the importance of feature transmission, causing degradation of vital information. To alleviate these issues, we propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion in an end-to-end manner. Concretely, to simultaneously retain typical features from both modalities and to avoid artifacts emerging in the fused result, we develop a coupled contrastive constraint in our loss function. In a fused image, its foreground target/background detail part is pulled close to the infrared/visible source and pushed far away from the visible/infrared source in the representation space. We further exploit image characteristics to provide data-sensitive weights, allowing our loss function to build a more reliable relationship with source images. A multi-level attention module is established to learn rich hierarchical feature representation and to comprehensively transfer features in the fusion process. We also apply the proposed CoCoNet to medical image fusion of different types, e.g., magnetic resonance, positron emission tomography, and single photon emission computed tomography images. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation, especially in preserving prominent targets and recovering vital textural details.
Journal Article
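The coupled pull/push constraint described in the abstract can be illustrated with a toy hinge-style contrastive term on feature vectors. This is a hypothetical sketch only: the paper's actual loss operates on deep features with learned, data-sensitive weights, and its exact form may differ.

```python
import numpy as np

def contrastive_term(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward the positive source and push it at least
    `margin` away from the negative source (hinge formulation)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return d_pos + max(0.0, margin - d_neg)

rng = np.random.default_rng(0)
ir_feat, vis_feat = rng.normal(size=32), rng.normal(size=32)
fused_fg = 0.9 * ir_feat + 0.1 * vis_feat  # foreground should follow infrared
fused_bg = 0.1 * ir_feat + 0.9 * vis_feat  # background should follow visible

# coupled constraint: foreground vs. infrared, background vs. visible
loss = (contrastive_term(fused_fg, ir_feat, vis_feat) +
        contrastive_term(fused_bg, vis_feat, ir_feat))
print(float(loss))
```

Minimizing such a pair of terms drives the fused foreground toward infrared features and the fused background toward visible features, which is the intuition behind the coupled constraint.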
Deep Learning for Generic Object Detection: A Survey
2020
Object detection, one of the most fundamental and challenging problems in computer vision, seeks to locate object instances from a large number of predefined categories in natural images. Deep learning techniques have emerged as a powerful strategy for learning feature representations directly from data and have led to remarkable breakthroughs in the field of generic object detection. Given this period of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. More than 300 research contributions are included in this survey, covering many aspects of generic object detection: detection frameworks, object feature representation, object proposal generation, context modeling, training strategies, and evaluation metrics. We finish the survey by identifying promising directions for future research.
Journal Article
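Among the evaluation metrics such surveys cover, intersection-over-union (IoU) is the standard criterion for matching a predicted box to a ground-truth box. A minimal implementation for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.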
Exploiting Diffusion Prior for Real-World Image Super-Resolution
by Yue, Zongsheng; Zhou, Shangchen; Loy, Chen Change
in Computer vision; Controllability; Image resolution
2024
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution. Specifically, by employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we employ a controllable feature wrapping module that allows users to balance quality and fidelity by simply adjusting a scalar value during the inference process. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraints of pre-trained diffusion models, enabling adaptation to resolutions of any size. A comprehensive evaluation of our method using both synthetic and real-world benchmarks demonstrates its superiority over current state-of-the-art approaches. Code and models are available at https://github.com/IceClear/StableSR.
Journal Article
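The quality/fidelity trade-off controlled by a single scalar can be illustrated with a toy linear blend of feature maps. Note this is only an illustration of the control knob: StableSR's actual feature wrapping module is a learned network conditioned on encoder features, not a plain interpolation.

```python
import numpy as np

def wrap_features(generated, fidelity, w):
    """Toy stand-in for a controllable feature wrapping step: blend
    diffusion features (perceptual quality) with encoder features that
    stay faithful to the low-resolution input. w=1 favours quality,
    w=0 favours fidelity."""
    return w * generated + (1.0 - w) * fidelity

gen = np.array([1.0, 2.0, 3.0])   # hypothetical generative features
fid = np.array([0.0, 0.0, 0.0])   # hypothetical fidelity features
print(wrap_features(gen, fid, 0.5))
```

Sweeping `w` at inference time lets a user trade sharper, more hallucinated detail against closer adherence to the input, without retraining anything.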
Computer graphics programming in OpenGL using Java
This new edition provides step-by-step instruction on modern 3D graphics shader programming in OpenGL with Java, along with its theoretical foundations. It is appropriate both for computer science graphics courses and for professionals interested in mastering 3D graphics skills. It has been designed in a 4-color, "teach-yourself" format with numerous examples that the reader can run just as presented. Every shader stage is detailed, starting with the basics of modeling, lighting, textures, etc., up through advanced techniques such as tessellation, soft shadows, and generating realistic materials and environments. Includes companion files with all of the source code, models, textures, skyboxes and normal maps used in the book. -- back cover.
High-Dynamic-Range Spectral Imaging System for Omnidirectional Scene Capture
2018
Omnidirectional imaging technology has been widely used for scene archiving. It has been a crucial technology in many fields including computer vision, image analysis and virtual reality. It should be noted that the dynamic range of luminance values in a natural scene is quite large, and scenes containing various objects and light sources comprise a variety of spectral power distributions. Therefore, this paper proposes a system for acquiring high dynamic range (HDR) spectral images for capturing omnidirectional scenes. The system is constructed using two programmable high-speed video cameras with specific lenses and a programmable rotating table. Two different types of color filters are mounted on the two color video cameras for six-band image acquisition. We present several algorithms for HDR image synthesis, lens distortion correction, image registration, and omnidirectional image synthesis. Spectral power distributions of illuminants (color signals) are recovered from the captured six-band images based on the Wiener estimation algorithm. In this paper, we present two types of applications based on our imaging system: time-lapse imaging and gigapixel imaging. The performance of the proposed system is discussed in detail in terms of the system configurations, acquisition time, artifacts, and spectral estimation accuracy. Experimental results in actual scenes demonstrate that the proposed system is feasible and powerful for acquiring HDR spectral scenes through time-lapse or gigapixel omnidirectional imaging approaches. Finally, we apply the captured omnidirectional images to time-lapse spectral Computer Graphics (CG) renderings and spectral-based relighting of an indoor gigapixel image.
Journal Article
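Wiener estimation, used above to recover spectral power distributions from six-band responses, builds a linear estimator from a smoothness prior on spectra and the camera's spectral sensitivities. A self-contained numerical sketch with synthetic sensitivities and spectra (all matrices here are invented stand-ins, not the paper's calibration data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_wavelengths = 6, 31           # e.g. 400-700 nm in 10 nm steps
A = rng.uniform(size=(n_bands, n_wavelengths))  # assumed camera sensitivities

# smooth-spectrum prior: correlation decays with wavelength separation
idx = np.arange(n_wavelengths)
C_s = 0.9 ** np.abs(idx[:, None] - idx[None, :])
C_n = 1e-4 * np.eye(n_bands)             # assumed sensor noise covariance

# Wiener estimation matrix: maps a six-band response to a full spectrum
W = C_s @ A.T @ np.linalg.inv(A @ C_s @ A.T + C_n)

true_spectrum = np.exp(-((idx - 15) ** 2) / 50.0)  # synthetic smooth spectrum
response = A @ true_spectrum                        # simulated camera response
estimate = W @ response
print(np.linalg.norm(estimate - true_spectrum) /
      np.linalg.norm(true_spectrum))  # relative recovery error
```

With only six bands the estimator can recover a 31-sample spectrum only because the smoothness prior `C_s` fills in the missing degrees of freedom, which is why multi-band capture plus Wiener estimation works well for natural illuminants.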