3,066 result(s) for "detail"
Infrared image enhancement algorithm based on detail enhancement guided image filtering
Because of the unique imaging mechanism of infrared (IR) sensors, IR images commonly suffer from blurred edge details, low contrast, and a poor signal-to-noise ratio. This paper proposes a new method for enhancing IR image details that suppresses image noise and improves contrast while the details are enhanced. First, because the traditional guided image filter (GIF) is prone to halo artifacts when applied to IR image enhancement, the paper proposes a detail enhancement guided filter (DGIF), which adds constructed edge-perception and detail-regulation factors to the cost function of the GIF. Then, in accordance with the visual characteristics of the human eye, the detail-regulation factor is applied to detail-layer enhancement, which avoids the noise amplification caused by enhancement with a fixed gain coefficient. Finally, the enhanced detail layer is fused directly with the base layer so that the enhanced image retains rich detail information. We first compare the DGIF with four guided image filters, and then compare the proposed algorithm with three traditional IR image enhancement algorithms and two GIF-based IR image enhancement algorithms on 20 IR images. The experimental results show that the DGIF has better edge-preserving and smoothing characteristics than the four guided image filters. The mean values of information entropy, average gradient, edge intensity, figure definition, and root-mean-square contrast of the enhanced images improved by about 0.23%, 3.4%, 4.3%, 2.1%, and 0.17%, respectively, over the best comparison result. This shows that the proposed algorithm can effectively suppress image noise in the detail layer while enhancing detail information and improving image contrast, yielding a better visual effect.
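As a minimal illustrative sketch only, and not the DGIF proposed in the paper, the base/detail decomposition that such methods build on can be reproduced with a plain guided filter and a gain applied to the detail layer; the filter radius, regularization eps, and gain below are assumed values.

    # Illustrative base/detail enhancement with a plain guided filter,
    # not the DGIF from the paper; radius, eps, and gain are assumed values.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=8, eps=1e-3):
        """Edge-preserving smoothing of src guided by guide (both float in [0, 1])."""
        mean = lambda x: uniform_filter(x, size=2 * radius + 1)
        mean_g, mean_s = mean(guide), mean(src)
        cov_gs = mean(guide * src) - mean_g * mean_s
        var_g = mean(guide * guide) - mean_g * mean_g
        a = cov_gs / (var_g + eps)          # per-pixel linear coefficients
        b = mean_s - a * mean_g
        return mean(a) * guide + mean(b)    # smoothed (base) output

    def enhance_ir(image, gain=2.5):
        """Split a grayscale IR image into base + detail layers and amplify the detail."""
        img = image.astype(np.float64) / 255.0
        base = guided_filter(img, img)       # self-guided filtering yields the base layer
        detail = img - base                  # high-frequency detail layer
        out = np.clip(base + gain * detail, 0.0, 1.0)
        return (out * 255).astype(np.uint8)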
Keep it Coherent: A Meta-Analysis of the Seductive Details Effect
Studies have shown that learners exposed to interesting but irrelevant information, known as seductive details, do not perform as well as those who learn without seductive details. However, findings are mixed regarding the degree to which seductive details hinder learning. Further research is also needed on how design features of learning materials influence the seductive details effect. This meta-analysis summarizes the seductive details effect and investigates the moderating factors of design and methodology. We also discuss evidence supporting each of the four hypothesized underlying mechanisms for the seductive details effect. Findings show that including seductive details in learning material can hinder learning. Mean effect sizes were moderated by the presence of seductive details, the image type used in comparison, delivery format, language, subject, learner pacing, recall question type, and manipulation-check approach. We conclude by highlighting limitations in current research, suggesting opportunities for future research, and examining practical implications.
LoDAvatar: hierarchical embedding and selective detail enhancement for adaptive levels of detail Gaussian avatars
With the advancement of virtual reality, the demand for 3D human avatars is increasing. The emergence of Gaussian Splatting has enabled the rendering of Gaussian avatars with superior visual quality and reduced computational cost. Although researchers have proposed numerous methods for implementing drivable Gaussian avatars, limited attention has been given to balancing visual quality against computational cost. In this paper, we present LoDAvatar, a method that introduces levels of detail into Gaussian avatars through hierarchical embedding and selective detail enhancement. The key steps of LoDAvatar are data preparation, Gaussian embedding, Gaussian optimization, and selective detail enhancement. We conducted experiments on Gaussian avatars at various detail levels, using both objective assessments and subjective evaluations. The results indicate that incorporating levels of detail into Gaussian avatars decreases computational cost during rendering while upholding commendable visual quality, thereby increasing runtime frame rates. We advocate adopting LoDAvatar to render multiple dynamic Gaussian avatars or extensive Gaussian scenes while balancing visual quality and computational cost.
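The abstract does not spell out how a detail level is chosen at render time; as a purely hypothetical illustration of the general levels-of-detail idea (the thresholds and level count are assumptions, not taken from LoDAvatar), a renderer could pick a coarser Gaussian set as the avatar moves away from the camera:

    # Hypothetical distance-based level-of-detail selection for a Gaussian avatar;
    # thresholds and level count are illustrative, not LoDAvatar's actual rule.
    def select_lod(distance_m, thresholds=(2.0, 5.0, 10.0)):
        """Return 0 (full detail) .. len(thresholds) (coarsest) from camera distance."""
        for level, limit in enumerate(thresholds):
            if distance_m < limit:
                return level
        return len(thresholds)

    # Example: an avatar 6 m away would be rendered at LOD level 2 (a reduced Gaussian set).
    assert select_lod(6.0) == 2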
Architectural detailing
The industry-standard guide to designing well-performing buildings. Architectural Detailing systematically describes the principles by which good architectural details are designed. Principles are explained in brief and backed by extensive illustrations that show you how to design details that will not leak water or air, will control the flow of heat and water vapor, will adjust to all kinds of movement, and will be easy to construct. This new third edition has been updated to conform to the International Building Code 2012 and incorporates current knowledge about new material and construction technology. Sustainable design issues are integrated where relevant, and the discussion includes reviews of recent built works that extract underlying principles that can be the basis for new patterns or for the alteration of and addition to existing patterns. Regulatory topics are primarily focused on the US, but touch on other jurisdictions and geographic settings to give you a well-rounded perspective on the art and science of architectural detailing. In guiding a design from idea to reality, architects design a set of details that show how a structure will be put together. Good details are correct, complete, and provide accurate information to a wide variety of users. By demonstrating the use of detail patterns, this book teaches you how to design a building that will perform as well as you intend.
  • Integrate appropriate detailing into your designs
  • Learn the latest in materials, assemblies, and construction methods
  • Incorporate sustainable design principles and current building codes
  • Design buildings that perform well, age gracefully, and look great
Architects understand that aesthetics are only a small fraction of good design, and that stability and functionality require a deep understanding of how things come together. Architectural Detailing helps you bring it all together with a well fleshed-out design that communicates accurately at all levels of the construction process.
Electrocardiogram features detection using stationary wavelet transform
The main objective of this paper is to provide a novel stationary wavelet transform (SWT) based method for electrocardiogram (ECG) feature detection. The proposed technique decomposes the ECG signal with the SWT and selects the appropriate detail coefficients to detect a specific wave of the signal. Temporal and frequency analysis of these coefficients led us to choose the level-2 detail coefficient (Cd2) to detect the R peaks, while the level-3 detail coefficient (Cd3) is used to extract the Q, S, P, and T waves of the ECG. The proposed method was tested on recordings from the Apnea and Massachusetts Institute of Technology–Beth Israel Hospital (MIT-BIH) databases. The performance obtained is excellent: the technique achieves a sensitivity of 99.83%, a predictivity of 99.72%, and an error rate of 0.44%. A further important advantage of the method is its ability to detect the different waves even in the presence of baseline wander (BLW) in the ECG signal, a property that makes it possible to bypass the BLW filtering operation.
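A hedged sketch of this idea using PyWavelets and SciPy follows; the wavelet, decomposition depth, padding, and peak threshold below are assumptions rather than the paper's exact settings.

    # Sketch of SWT-based R-peak detection: decompose the ECG, keep the level-2
    # detail coefficients (Cd2), and pick peaks on their squared magnitude.
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def detect_r_peaks(ecg, fs, wavelet="db4", level=3):
        n = len(ecg)
        pad = (-n) % (2 ** level)                    # SWT needs a length divisible by 2**level
        x = np.pad(ecg, (0, pad), mode="edge")
        coeffs = pywt.swt(x, wavelet, level=level)   # [(cA3, cD3), (cA2, cD2), (cA1, cD1)]
        cd2 = coeffs[-2][1][:n]                      # level-2 detail coefficients
        energy = cd2 ** 2
        peaks, _ = find_peaks(energy,
                              height=0.3 * energy.max(),      # assumed threshold
                              distance=int(0.25 * fs))        # ~250 ms refractory period
        return peaks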
Single Image Haze Removal from Image Enhancement Perspective for Real-Time Vision-Based Systems
Vision-based systems operating outdoors are significantly affected by weather conditions, notably those related to atmospheric turbidity. Accordingly, haze removal algorithms, which have been actively researched over the last decade, have come into use as a pre-processing step. Although numerous approaches already exist, an efficient method coupled with a fast implementation is still in great demand. This paper proposes a single image haze removal algorithm with a corresponding hardware implementation for facilitating real-time processing. Contrary to methods that invert the physical model describing the formation of hazy images, the proposed approach mainly exploits computationally efficient image processing techniques such as detail enhancement, multiple-exposure image fusion, and adaptive tone remapping. Therefore, it possesses low computational complexity while achieving good performance compared to other state-of-the-art methods. Moreover, the low computational cost also brings about a compact hardware implementation capable of handling high-quality videos at an acceptable rate, that is, greater than 25 frames per second, as verified with a Field Programmable Gate Array chip. The software source code and datasets are available online for public use.
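As a rough illustration of the multiple-exposure fusion idea mentioned above (not the paper's exact pipeline; the gamma values and the contrast-based weighting are assumptions):

    # Rough sketch of dehazing via multi-exposure fusion: generate several
    # gamma-adjusted "exposures" of the hazy image and blend them with
    # contrast-based weights. Gammas and weights are assumed, not the paper's.
    import numpy as np
    from scipy.ndimage import laplace

    def fuse_exposures(hazy_gray, gammas=(0.6, 1.0, 1.8)):
        img = hazy_gray.astype(np.float64) / 255.0
        exposures = [img ** g for g in gammas]                     # synthetic under/over-exposures
        weights = [np.abs(laplace(e)) + 1e-6 for e in exposures]   # favour locally contrasty pixels
        total = sum(weights)
        fused = sum((w / total) * e for w, e in zip(weights, exposures))
        return np.clip(fused * 255, 0, 255).astype(np.uint8)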
The effects of a model statement on information elicitation and deception detection in multiple interviews
Researchers have started developing interview techniques to enhance deception detection in forensic settings. One of these techniques is the Model Statement, which has been shown to be effective for eliciting information and cues to deception in single interviews. In the current research, we focused on the effect of the Model Statement in multiple interviews. Participants (N = 243) were interviewed three times, each one week apart, about a genuine (truth tellers) or fabricated (lie tellers) memorable event. They listened to a Model Statement at Time 1, Time 2, Times 1 and 2, or not at all. Hypotheses focused on participants' verbal reports at Time 3 and on the unique details provided across the three interviews. In both instances, truth tellers provided more core and total details and complications, fewer common knowledge details and self-handicapping strategies, and higher proportion scores of (i) complications and (ii) core details than lie tellers. Complications and the proportion of complications were the most diagnostic cues. The Model Statement was effective only when presented at Time 1, resulting in more common knowledge details. No Veracity × Model Statement interaction effects emerged.
A General Paradigm with Detail-Preserving Conditional Invertible Network for Image Fusion
Existing deep learning techniques for image fusion either learn image mapping (LIM) directly, which renders them ineffective at preserving details because every pixel is given equal consideration, or learn detail mapping (LDM), which attains only a limited level of performance because only details are used for reasoning. The recent lossless invertible network (INN) has demonstrated its detail-preserving ability. However, the direct applicability of the INN to the image fusion task is limited by the volume-preserving constraint, and there is a lack of a consistent detail-preserving image fusion framework that produces satisfactory outcomes. To this end, we propose a general paradigm for image fusion based on a novel conditional INN (named DCINN). The DCINN paradigm has three core components: a decomposing module that converts image mapping into detail mapping; an auxiliary network (ANet) that extracts auxiliary features directly from the source images; and a conditional INN (CINN) that learns the detail mapping based on the auxiliary features. This design inherits the advantages of the INN, LIM, and LDM approaches while avoiding their disadvantages. In particular, applying the INN to LDM easily satisfies the volume-preserving constraint while still preserving details. Moreover, because the auxiliary features serve as conditional features, the ANet allows more than just details to be used for reasoning without compromising the detail mapping. Extensive experiments on three benchmark fusion problems, i.e., pansharpening, hyperspectral and multispectral image fusion, and infrared and visible image fusion, demonstrate the superiority of our approach compared with recent state-of-the-art methods. The code is available at https://github.com/wwhappylife/DCINN
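A minimal sketch of the kind of conditional affine coupling block from which invertible networks are commonly built, here conditioned on auxiliary features (PyTorch; the layer sizes and wiring are assumptions, not DCINN's actual architecture):

    # Minimal conditional affine coupling block: invertible by construction and
    # conditioned on auxiliary features. Sizes and wiring are assumed, not DCINN's.
    import torch
    import torch.nn as nn

    class ConditionalCoupling(nn.Module):
        def __init__(self, channels, cond_channels, hidden=64):
            super().__init__()
            half = channels // 2
            self.net = nn.Sequential(
                nn.Conv2d(half + cond_channels, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, 2 * half, 3, padding=1),
            )

        def forward(self, x, cond, reverse=False):
            x1, x2 = x.chunk(2, dim=1)                      # split channels in half
            log_s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
            s = torch.tanh(log_s)                           # bounded log-scale for stability
            if not reverse:
                x2 = x2 * torch.exp(s) + t                  # forward transform
            else:
                x2 = (x2 - t) * torch.exp(-s)               # exact inverse
            return torch.cat([x1, x2], dim=1)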
Diffusion Model with Detail Complement for Super-Resolution of Remote Sensing
Remote sensing super-resolution (RSSR) aims to improve the resolution of remote sensing (RS) images while providing finer spatial details, which is of great significance for high-quality RS image interpretation. Traditional RSSR is based on optimization methods, which pay insufficient attention to small targets and lack model understanding and the ability to supplement detail. To alleviate these problems, we propose the generative Diffusion Model with Detail Complement (DMDC) for RS super-resolution. First, unlike traditional optimization models with insufficient image understanding, we introduce the diffusion model into RSSR tasks as a generative model and regard low-resolution images as conditioning information to guide image generation. Next, considering that generative models may not accurately recover specific small objects and complex scenes, we propose a detail supplement task to improve the recovery ability of DMDC. Finally, because the strong diversity of the diffusion model can be inappropriate for RSSR, we propose a joint pixel-constraint loss and denoising loss to constrain the direction of the reverse diffusion. Extensive qualitative and quantitative experiments demonstrate the superiority of our method on RSSR with small and dense targets. Moreover, results obtained by direct transfer to different datasets also prove the superior generalization ability of DMDC.
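As a hedged sketch of what such a joint objective can look like, a standard noise-prediction loss can be combined with a pixel constraint on the reconstructed image (PyTorch; the weighting factor and the L1 choice are assumptions, not DMDC's exact losses):

    # Sketch of a joint objective: diffusion denoising (noise-prediction) loss
    # plus a pixel constraint between the estimated and ground-truth images.
    # The weight lam and the L1 pixel term are assumptions.
    import torch.nn.functional as F

    def joint_loss(pred_noise, true_noise, x0_hat, x0_gt, lam=0.1):
        denoise = F.mse_loss(pred_noise, true_noise)   # denoising term
        pixel = F.l1_loss(x0_hat, x0_gt)               # pixel-constraint term
        return denoise + lam * pixel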
The evolution of the form and function of the window as a detail influencing historical architecture
Using the concept of the window, the interface between the interpenetrating interior and exterior of a building, the article presents window joinery as an independent architectural detail with aesthetic value expressed in an artistic form. The author discusses the changing function of the wall opening and emphasises its timeless role of bringing light and air into the building. By successively analysing the design assumptions of creators from antiquity to modernism, the article illustrates the unity of windows and building façades, which influences the visual perception of a development in its broader cultural, artistic, and historical context.