431 results for "Texture in architecture."
Interactive textures for architecture and landscaping : digital elements and technologies
\"This book addresses the phenomenon called \"interactive architecture that challenges artists, architects, designers, theorists, and geographers to develop a language and designs toward the \"use\" of these environments\"--Provided by publisher.
DeepMatching: Hierarchical Deformable Dense Matching
We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al., A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al., A naturalistic open source movie for optical flow evaluation, 2012) and the KITTI (Geiger et al., Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.
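The hierarchical correlation idea in the DeepMatching abstract can be illustrated by its bottom level: normalized cross-correlation of every small patch in one image against every patch in the other. A minimal NumPy sketch, where the patch size and grid layout are illustrative (the full method adds multi-level aggregation and deformation handling, not reproduced here):

```python
import numpy as np

def patch_correlations(img1, img2, patch=4):
    """Bottom level of a DeepMatching-style pyramid: correlate every
    non-overlapping patch x patch block of img1 against every block of
    img2 using normalized cross-correlation."""
    h, w = img1.shape
    gh, gw = h // patch, w // patch

    def blocks(img):
        # Cut the image into a (gh*gw, patch*patch) stack of flattened blocks.
        b = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
        b = b.transpose(0, 2, 1, 3).reshape(gh * gw, patch * patch)
        # Zero-mean, unit-norm each block so the dot product is NCC.
        b = b - b.mean(axis=1, keepdims=True)
        n = np.linalg.norm(b, axis=1, keepdims=True)
        return b / np.maximum(n, 1e-8)

    b1, b2 = blocks(img1), blocks(img2)
    return b1 @ b2.T  # (gh*gw, gh*gw) correlation map
```

For identical images the best match of each patch is itself, so the correlation map has ones on its diagonal; the upper pyramid levels would then aggregate these scores over larger, deformable neighborhoods.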
Water transport, perception, and response in plants
Sufficient water availability in the environment is critical for plant survival. Perception of water by plants is necessary to balance water uptake and water loss and to control plant growth. Plant physiology and soil science research have contributed greatly to our understanding of how water moves through soil, is taken up by roots, and moves to leaves where it is lost to the atmosphere by transpiration. Water uptake from the soil is affected by soil texture itself and soil water content. Hydraulic resistances for water flow through soil can be a major limitation for plant water uptake. Changes in water supply and water loss affect water potential gradients inside plants. Likewise, growth creates water potential gradients. It is known that plants respond to changes in these gradients. Water flow and loss are controlled through stomata and regulation of hydraulic conductance via aquaporins. When water availability declines, water loss is limited through stomatal closure and by adjusting hydraulic conductance to maintain cell turgor. Plants also adapt to changes in water supply by growing their roots towards water and through refinements to their root system architecture. Mechanosensitive ion channels, aquaporins, proteins that sense the cell wall and cell membrane environment, and proteins that change conformation in response to osmotic or turgor changes could serve as putative sensors. Future research is required to better understand processes in the rhizosphere during soil drying and how plants respond to spatial differences in water availability. It remains to be investigated how changes in water availability and water loss affect different tissues and cells in plants and how these biophysical signals are translated into chemical signals that feed into signaling pathways like abscisic acid response or organ development.
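The point above about hydraulic resistances limiting uptake follows an Ohm's-law analogy that is easy to make concrete: steady-state water flux equals the water-potential difference divided by the summed resistances along the soil-root-xylem path. A toy sketch with made-up illustrative values:

```python
def water_flux(psi_soil, psi_leaf, resistances):
    """Steady-state water flow through the soil-plant continuum,
    treated like resistors in series (Ohm's-law analogy):
    J = (psi_soil - psi_leaf) / sum(R_i)."""
    return (psi_soil - psi_leaf) / sum(resistances)

# Illustrative values: potentials in MPa, resistances in arbitrary units.
# Moist soil: soil resistance is small, flux is high.
J_wet = water_flux(-0.1, -1.5, [0.5, 1.0, 0.5])
# Drying soil: soil water potential drops and soil resistance dominates,
# so flux falls even though the leaf potential is unchanged.
J_dry = water_flux(-0.8, -1.5, [5.0, 1.0, 0.5])
```

This is why the abstract notes that soil hydraulic resistance can become the major limitation during drying: the soil term in the denominator can grow by orders of magnitude while the plant-internal resistances stay roughly constant.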
RooTrak: Automated Recovery of Three-Dimensional Plant Root Architecture in Soil from X-Ray Microcomputed Tomography Images Using Visual Tracking
X-ray microcomputed tomography (μCT) is an invaluable tool for visualizing plant root systems within their natural soil environment noninvasively. However, variations in the x-ray attenuation values of root material and the overlap in attenuation values between roots and soil caused by water and organic materials represent major challenges to data recovery. We report the development of automatic root segmentation methods and software that view μCT data as a sequence of images through which root objects appear to move as the x-y cross sections are traversed along the z axis of the image stack. Previous approaches have employed significant levels of user interaction and/or fixed criteria to distinguish root and nonroot material. RooTrak exploits multiple, local models of root appearance, each built while tracking a specific segment, to identify new root material. It requires minimal user interaction and is able to adapt to changing root density estimates. The model-guided search for root material arising from the adoption of a visual-tracking framework makes RooTrak less sensitive to the natural ambiguity of x-ray attenuation data. We demonstrate the utility of RooTrak using μCT scans of maize (Zea mays), wheat (Triticum aestivum), and tomato (Solanum lycopersicum) grown in a range of contrasting soil textures. Our results demonstrate that RooTrak can successfully extract a range of root architectures from the surrounding soil and promises to facilitate future root phenotyping efforts.
An efficient texture descriptor based on local patterns and particle swarm optimization algorithm for face recognition
Face recognition is used in many applications such as access control, automobile security, criminal identification, immigration, healthcare, cyber security, and so on. Each person has his/her own unique face, so the face can help distinguish people from each other. The feature extraction process plays a fundamental role in the accuracy of face recognition, and many algorithms have been presented to extract more informative features from the face image. In this paper, an efficient texture descriptor is proposed based on local information of the face image. In the proposed method, at first, the face image is split into several sub-images in such a way that each sub-image includes one of the facial parts, such as the eyes, nose, and lips. Second, texture features are extracted from each sub-image using a new local pattern descriptor, and then the features of the sub-images are concatenated to construct the feature vector. Finally, the face image is compared to images in a dataset based on a similarity measure. In addition, a particle swarm optimization algorithm is used to assign weights to the features of different parts of the face image. To evaluate the proposed algorithm, four face datasets, Yale, ORL, GT, and KDEF, are used. Implementation results show that the proposed method outperforms recent methods in terms of accuracy, receiver operating characteristic (ROC) curve, and area under the ROC curve.
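The abstract does not specify the new local pattern, so the sketch below uses the classic 8-neighbour local binary pattern (LBP) as a stand-in, plus a weighted histogram-intersection similarity in which the fixed per-part weights stand in for the PSO-learned weights:

```python
import numpy as np

def lbp_histogram(img):
    """Classic 8-neighbour local binary pattern: each pixel gets an
    8-bit code from comparing its neighbours against it; return the
    normalized 256-bin histogram of codes (a texture descriptor)."""
    c = img[1:-1, 1:-1]  # interior pixels (the centers)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def weighted_similarity(parts_a, parts_b, weights):
    """Weighted histogram intersection over per-part descriptors; in
    the paper the weights would be tuned by particle swarm
    optimization, here they are simply given."""
    return sum(w * np.minimum(ha, hb).sum()
               for w, ha, hb in zip(weights, parts_a, parts_b))
```

A face split into eye/nose/lip sub-images would yield one histogram per part; matching then scores concatenated parts with the learned weights rather than treating the whole face uniformly.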
MDR–SLAM: Robust 3D Mapping in Low-Texture Scenes with a Decoupled Approach and Temporal Filtering
Realizing real-time dense 3D reconstruction on resource-limited mobile platforms remains a significant challenge, particularly in low-texture environments that demand robust multi-frame fusion to resolve matching ambiguities. However, the inherent tight coupling of pose estimation and mapping in traditional monolithic SLAM architectures imposes a severe restriction on integrating high-complexity fusion algorithms without compromising tracking stability. To overcome these limitations, this paper proposes MDR–SLAM, a modular and fully decoupled stereo framework. The system features a novel keyframe-driven temporal filter that synergizes efficient ELAS stereo matching with Kalman filtering to effectively accumulate geometric constraints, thereby enhancing reconstruction density in textureless areas. Furthermore, a confidence-based fusion backend is employed to incrementally maintain global map consistency and filter outliers. Quantitative evaluation on the NUFR-M3F indoor dataset demonstrates the effectiveness of the proposed method: compared to the standard single-frame baseline, MDR–SLAM reduces map RMSE by 83.3% (to 0.012 m) and global trajectory drift by 55.6%, while significantly improving map completeness. The system operates entirely on CPU resources with a stable 4.7 Hz mapping frequency, verifying its suitability for embedded mobile robotics.
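The keyframe-driven temporal filter described above reduces, per pixel, to a scalar Kalman update that fuses each new stereo depth measurement into a running depth map, shrinking the variance as evidence accumulates. A minimal sketch with illustrative values (the actual MDR–SLAM measurement model is not reproduced here):

```python
import numpy as np

def fuse_depth(depth_mean, depth_var, meas, meas_var):
    """One scalar Kalman update, applied elementwise to a depth map:
    fuse a new depth measurement (meas, meas_var) into the running
    estimate (depth_mean, depth_var)."""
    k = depth_var / (depth_var + meas_var)        # Kalman gain
    new_mean = depth_mean + k * (meas - depth_mean)
    new_var = (1.0 - k) * depth_var               # variance shrinks
    return new_mean, new_var

# Fuse one noisy depth measurement into a 2x2 patch (illustrative values).
mean = np.zeros((2, 2))
var = np.ones((2, 2))
mean, var = fuse_depth(mean, var, np.full((2, 2), 2.0), 1.0)
```

Repeating the update over successive keyframes is what lets weak, ambiguous matches in textureless regions accumulate into confident depth, which is the role the abstract assigns to the temporal filter.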
Steel surface defect detection based on MobileViTv2 and YOLOv8
To address the issue of low detection accuracy of steel surface defects due to complex texture background interference and complex defect morphology, this paper proposes an improved YOLOv8 model based on MobileViTv2 and Cross-Local Connection for steel surface defect detection. Firstly, the lightweight MobileViTv2 network is introduced into the backbone network, which enhances the feature extraction capability of the model in complex defect shapes by combining the advantages of CNN and Transformer. Then, the designed CLC method is introduced into the neck network, which connects deep and shallow features through additional convolutional layers, further integrating defect features in the presence of complex texture background interference. Finally, the NET-DET dataset is augmented to improve the model’s robustness. Experimental results show that the mAP of the improved model is 74.1%, with a detection speed of 86.2 FPS and model memory usage of 27.5 MB. Compared to YOLOv5 and YOLOv8, the mAP of the improved model is increased by 6.5% and 4%, respectively. Compared to existing object detection models, the improved model has the characteristics of high detection accuracy and fast detection speed, meeting the requirements of industrial production for steel surface defect detection.
Towards an optimized paradigm: generative adversarial networks and 3D modeling in landscape design and generation
Virtual reality (VR) integrates technologies like computer graphics, artificial intelligence, and multi-sensor systems, creating transformative tools for designers and users. This study proposes a novel urban landscape design method using 3D laser scanning combined with frame reorganization and texture mapping. Despite the advancements in VR-based landscape design, existing methods often suffer from inefficiencies in rendering time and suboptimal visual fidelity, limiting their practical application in large-scale urban projects. In the initial phase, we acquire the central pixel point of the images via a meticulous 3D scanning process, thus facilitating a three-dimensional stereo reorganization of urban architectural landscapes. This stage is succeeded by the application of a terahertz wave image segmentation strategy, grounded in the sophisticated utilization of generative adversarial networks and a structured texture mapping procedure. This technique permits the virtual reconstruction of the architectural blueprint, wherein each image layer is systematically traversed, engendering a dynamic representation of the urban landscape. The final step generates realistic urban landscape simulations using integrated 3D laser scanning. To ascertain the efficacy of the proposed methodology, we embarked upon a series of performance assessments across four disparate simulation design scenarios, yielding verifiable outcomes. Our empirical findings demonstrate that the proposed method reduces rendering times by up to 90% compared to traditional tools like SketchUp and 3D Studio Max, while achieving a significant improvement in visual fidelity, as evidenced by standard image quality metrics. These results attest to the formidable potential of this avant-garde approach within the VR landscape design milieu, significantly diminishing the time imperative while augmenting visual fidelity and fortifying automatic display proficiencies.
By virtue of its robust analytical underpinnings and innovative approach, this research furnishes a substantial theoretical scaffolding for the evolving discourse in landscape space design, prompting a reevaluation of conventional methodologies while propelling the field towards a more efficient and visually immersive future.
Leaf species and disease classification using multiscale parallel deep CNN architecture
Plant species are often affected by invasive biotic strains, and for sustainable yield more emphasis should be placed on novel mitigation measures rather than traditional methods. Plant diseases are witnessed by visible effects on the leaf, such as detectable changes in color, texture, or shape. Categorizing leaf diseases poses challenges such as the intensity of the disease in the leaf, the resolution of the image, the shot category, and complex backgrounds. The literature reports myriad architectures employing Convolutional Neural Networks for generating models that assist in detecting plant disease. This research work has merged responses from customized filters (Law's Mask) that well define the texture pattern with learnable filters to ensure adaptive learning. Depending upon the stage of disease in a leaf, defects occur at varying scales and at varying locations. Thus, rather than a single deep stream of network, a specialized parallel multiscale stream with learnable filters that extract inherent attributes is utilized for improved performance. Experimental evaluation of the proposed methodology with end-to-end training on the Plant Village dataset with 39 classes gives 99.17% for plant species classification and 98.61% for disease classification. On the Data Repository of Leaf Images with 12 species, it achieves 97.16% for plant species classification and 90.02% for leaf disease classification. MepcoTropicLeaf, an Indian Ayurvedic leaf dataset with 50 species, is also evaluated with the proposed algorithm, achieving 90.86% classification accuracy.
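The customized Laws texture filters mentioned in the abstract are built as outer products of small 1D kernels (level, edge, spot, ripple). A sketch of the standard 5x5 masks and a simple texture-energy measure; the paper's multiscale parallel network around these filters is not reproduced:

```python
import numpy as np

# Laws' classic 1D kernels
L5 = np.array([ 1,  4, 6,  4,  1], float)   # level (local average)
E5 = np.array([-1, -2, 0,  2,  1], float)   # edge
S5 = np.array([-1,  0, 2,  0, -1], float)   # spot
R5 = np.array([ 1, -4, 6, -4,  1], float)   # ripple

def laws_masks():
    """All 16 5x5 Laws masks as outer products of the 1D kernels."""
    kernels = {"L5": L5, "E5": E5, "S5": S5, "R5": R5}
    return {a + b: np.outer(ka, kb)
            for a, ka in kernels.items()
            for b, kb in kernels.items()}

def texture_energy(img, mask):
    """Cross-correlate the image with a mask (valid mode) and return
    the mean absolute response, a simple texture-energy feature."""
    h, w = mask.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(h):
        for j in range(w):
            out += mask[i, j] * img[i:i + H - h + 1, j:j + W - w + 1]
    return np.abs(out).mean()
```

Because E5, S5, and R5 sum to zero, any mask containing them gives zero response on flat regions, so the energies respond selectively to edges, spots, and ripples, which is what makes these fixed filters a useful complement to learnable ones.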
Creation mechanism of new media art combining artificial intelligence and internet of things technology in a metaverse environment
The Metaverse is regarded as a brand-new virtual society constructed by deep media, and the new media art produced by new media technology will gradually replace traditional art forms and play an important role in the infinite Metaverse of the future. The maturity of the new media art creation mechanism also depends on the help of artificial intelligence (AI) and Internet of Things (IoT) technology. The purpose of this study is to explore image style transfer for digital painting in new media art, that is, to reshape the image style with neural network techniques while retaining the semantic information of the original image. Building on neural style transfer, an image style conversion method based on feature synthesis is proposed. Using the feature mappings of the content image and the style image, and combining the advantages of traditional texture synthesis, a richer multi-style target feature mapping is synthesized. Then, the target feature mapping is inverse-transformed back to an image to realize the style transformation. In addition, the research results are analyzed. Against the background of integrating AI and IoT, the creation mechanism of new media art is optimized. For digital art style transformation, the TensorFlow framework is used for simulation verification and performance evaluation. The experimental results show that the image style transfer method based on feature synthesis proposed in this study distributes image texture more reasonably and can change the style texture while retaining more of the semantic structure of the original image, thus generating richer artistic effects with better interactivity and local controllability. It can provide theoretical help and reference for developing new media art creation mechanisms.
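The style statistic underlying neural style transfer, which the feature-synthesis method above builds on, is the Gram matrix of a feature map: channel-by-channel inner products that capture texture while discarding spatial layout. A minimal NumPy sketch with illustrative feature shapes (the paper's specific synthesis step is not reproduced):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: inner products between
    channel responses, normalized by the number of spatial positions.
    This is the standard style statistic in neural style transfer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(feat_generated, feat_style):
    """Squared Frobenius distance between Gram matrices: zero when the
    generated image already matches the style statistics."""
    g1 = gram_matrix(feat_generated)
    g2 = gram_matrix(feat_style)
    return float(((g1 - g2) ** 2).sum())
```

Because the Gram matrix averages over all spatial positions, minimizing this loss transfers texture statistics without copying the style image's layout, which is why the content image's semantic structure can be preserved at the same time.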