14,723 results for "3D models"
Tumor microenvironment signaling and therapeutics in cancer progression
Tumor development and metastasis are facilitated by the complex interactions between cancer cells and their microenvironment, which comprises stromal cells and extracellular matrix (ECM) components, among other factors. Stromal cells can adopt new phenotypes to promote tumor cell invasion. A deep understanding of the signaling pathways involved in cell-to-cell and cell-to-ECM interactions is needed to design effective intervention strategies that might interrupt these interactions. In this review, we describe the tumor microenvironment (TME) components and associated therapeutics. We discuss the clinical advances in the prevalent and newly discovered signaling pathways in the TME, the immune checkpoints and immunosuppressive chemokines, and currently used inhibitors targeting these pathways. These include both intrinsic and non-autonomous tumor cell signaling pathways in the TME: protein kinase C (PKC) signaling, Notch and transforming growth factor (TGF-β) signaling, endoplasmic reticulum (ER) stress response, lactate signaling, metabolic reprogramming, cyclic GMP–AMP synthase (cGAS)–stimulator of interferon genes (STING) and Siglec signaling pathways. We also discuss the recent advances in Programmed Cell Death Protein 1 (PD-1), Cytotoxic T-Lymphocyte Associated Protein 4 (CTLA4), T-cell immunoglobulin mucin-3 (TIM-3) and Lymphocyte Activating Gene 3 (LAG3) immune checkpoint inhibitors, along with the C-C chemokine receptor 4 (CCR4)–C-C chemokine ligands 22 (CCL22) and 17 (CCL17), C-C chemokine receptor type 2 (CCR2)–chemokine (C-C motif) ligand 2 (CCL2), and C-C chemokine receptor type 5 (CCR5)–chemokine (C-C motif) ligand 3 (CCL3) chemokine signaling axes in the TME.
In addition, this review provides a holistic understanding of the TME as we discuss the three‐dimensional and microfluidic models of the TME, which are believed to recapitulate the original characteristics of the patient tumor and hence may be used as a platform to study new mechanisms and screen for various anti‐cancer therapies. We further discuss the systemic influences of gut microbiota in TME reprogramming and treatment response. Overall, this review provides a comprehensive analysis of the diverse and most critical signaling pathways in the TME, highlighting the associated newest and critical preclinical and clinical studies along with their underlying biology. We highlight the importance of the most recent technologies of microfluidics and lab‐on‐chip models for TME research and also present an overview of extrinsic factors, such as the inhabitant human microbiome, which have the potential to modulate TME biology and drug responses.
ArkaeVision VR Game: User Experience Research between Real and Virtual Paestum
The design of a virtual reality (VR) cultural application is aimed at supporting the steps of the learning process, such as concrete experimentation, reflection, and abstraction, which are generally difficult to induce when looking at ruins and artifacts that bring back the past. With the use of virtual technologies (e.g., holographic surfaces, head-mounted displays, motion-capture sensors), those steps are supported thanks to the immersiveness and natural interaction granted by such devices. VR can indeed help to symbolically recreate the context of life of cultural objects, presenting them in their original place of belonging while they were in use, for example, increasing awareness and understanding of history. The ArkaeVision VR application takes advantage of storytelling and user experience design to tell the story of artifacts and sites of an important cultural heritage site of Italy, Paestum, creating a dramaturgy around them and relying upon historical and artistic content revised by experts. Visitors virtually travel into the temple dedicated to Hera II of Paestum, in the first half of the fifth century BC, wearing an immersive viewer (HTC Vive); here, they interact with the priestess Ariadne, a digital actor, who guides them on a virtual tour presenting the beliefs, values, and habits of an ancient population of the Magna Graecia city. In the immersive VR application, memory is indeed influenced by the visitors' ability to proceed with the exploratory activity. Two evaluation sessions were planned and conducted to understand the effectiveness of the immersive experience, the usability of the virtual device, and the learnability of the digital storytelling.
Results revealed that the realism of the virtual reconstructions, the atmosphere, and the "sense of the past" that pervades the whole VR cultural experience characterize the positive feedback of visitors, their emotional engagement, and their interest in proceeding with the exploration.
Applications of 3D City Models: State of the Art Review
In recent decades, 3D city models appear to have been predominantly used for visualisation; however, today they are being increasingly employed in a number of domains and for a large range of tasks beyond visualisation. In this paper, we seek to understand and document the state of the art regarding the utilisation of 3D city models across multiple domains based on a comprehensive literature study including hundreds of research papers, technical reports and online resources. A challenge in a study such as ours is that the ways in which 3D city models are used cannot be readily listed due to fuzziness, terminological ambiguity, unclear added-value of 3D geoinformation in some instances, and absence of technical information. To address this challenge, we delineate a hierarchical terminology (spatial operations, use cases, applications), and develop a theoretical reasoning to segment and categorise the diverse uses of 3D city models. Following this framework, we provide a list of identified use cases of 3D city models (with a description of each), and their applications. Our study demonstrates that 3D city models are employed in at least 29 use cases that are a part of more than 100 applications. The classified inventory could be useful for scientists as well as stakeholders in the geospatial industry, such as companies and national mapping agencies, as it may serve as a reference document to better position their operations, design product portfolios, and to better understand the market.
3D Model Generation and Reconstruction Using Conditional Generative Adversarial Network
Generative adversarial networks (GANs) have made significant progress in 3D model generation and reconstruction in recent years. GANs can generate 3D models by sampling from a uniform noise distribution, but the generation is random and often hard to control. To address this problem, we add class information to both the generator and the discriminator and construct a new network named 3D conditional GAN. Moreover, to better guide the generator to reconstruct a 3D model from a single image in high quality, we propose a new 3D model reconstruction network that integrates a classifier into the traditional system. Experimental results on the ModelNet10 dataset show that our method can effectively generate realistic 3D models corresponding to the given class labels, and that the quality of 3D model reconstruction is improved considerably by the proposed method on the IKEA dataset.
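The class conditioning the abstract describes typically amounts to concatenating a label encoding to the generator's noise input (and similarly to the discriminator's input). A minimal sketch of that input construction, with an arbitrary noise dimension and hypothetical class index chosen for illustration:

```python
import numpy as np

def one_hot(label, num_classes):
    """Encode a class index as a one-hot vector."""
    v = np.zeros(num_classes, dtype=np.float32)
    v[label] = 1.0
    return v

def conditioned_generator_input(noise_dim, label, num_classes, rng=None):
    """Build a conditional-GAN generator input: random noise
    concatenated with the one-hot class label."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(noise_dim).astype(np.float32)
    return np.concatenate([z, one_hot(label, num_classes)])

# ModelNet10 has 10 object classes; condition on class index 3 (arbitrary).
x = conditioned_generator_input(noise_dim=200, label=3, num_classes=10)
print(x.shape)  # (210,)
```

The generator then maps this conditioned vector to a voxel grid; the discriminator receives the label alongside the (real or generated) model, so both networks learn class-specific structure.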
System for Estimation of Human Anthropometric Parameters Based on Data from Kinect v2 Depth Camera
Anthropometric measurements of the human body are an important problem that affects many aspects of human life. However, anthropometric measurement often requires the application of an appropriate measurement procedure and the use of specialized, sometimes expensive measurement tools. Sometimes the measurement procedure is complicated, time-consuming, and requires properly trained personnel. This study aimed to develop a system for estimating human anthropometric parameters based on a three-dimensional scan of the complete body made with an inexpensive depth camera in the form of the Kinect v2 sensor. The research included 129 men aged 18 to 28. The developed system consists of a rotating platform, a depth sensor (Kinect v2), and a PC used to record 3D data and to estimate individual anthropometric parameters. Experimental studies have shown that the precision of the proposed system is satisfactory for most of the parameters. The largest error was found in the waist circumference parameter. The results obtained confirm that this method can be used in anthropometric measurements.
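One way a circumference such as the waist can be estimated from a body scan is to slice the point cloud at a given height, order the cross-section points by angle around their centroid, and sum the polygon edge lengths. This is a sketch under those assumptions (the paper's actual estimation pipeline is not specified in the abstract); the point cloud below is a synthetic cylinder:

```python
import numpy as np

def circumference_from_slice(points_xyz, height, tol=0.01):
    """Estimate a circumference from a 3D scan: select points within
    `tol` metres of `height`, order them by angle around the slice
    centroid, and sum the edge lengths of the resulting polygon."""
    sl = points_xyz[np.abs(points_xyz[:, 2] - height) < tol][:, :2]
    c = sl.mean(axis=0)
    order = np.argsort(np.arctan2(sl[:, 1] - c[1], sl[:, 0] - c[0]))
    poly = sl[order]
    edges = np.diff(np.vstack([poly, poly[:1]]), axis=0)  # close the loop
    return np.linalg.norm(edges, axis=1).sum()

# Sanity check on a synthetic cylinder of radius 0.15 m at height 1.0 m:
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
cloud = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta),
                         np.full_like(theta, 1.0)])
print(round(circumference_from_slice(cloud, height=1.0), 3))  # 0.942, i.e. ~2*pi*0.15
```

Real scans are noisy and non-convex at the waist, which is consistent with the abstract's observation that waist circumference showed the largest error.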
Magnetic Resonance Imaging–Based 3-Dimensional Models of the Pelvis and Hip Using Machine Learning for Automatic Bone Segmentation in a Dynamic Hip Impingement Simulation
Background: Femoroacetabular impingement (FAI) can cause hip pain and osteoarthritis in patients. Importantly, 3-dimensional (3D) bone models in dynamic hip impingement simulations can enable a patient-specific diagnosis of FAI. The manual segmentation of bone models is time-consuming; therefore, automatic segmentation should be investigated. Purpose: (1) To investigate the difference between manual and automatic segmentation of magnetic resonance imaging (MRI)–based 3D bone models of the hip, (2) to correlate impingement-free hip range of motion between the two methods, and (3) to perform external validation. Study Design: Cohort study (Diagnosis); Level of evidence, 3. Methods: An institutional review board–approved retrospective study involving a total of 98 hips was performed. Of these, 30 patients with symptomatic FAI (60 hips; mean age, 27 ± 9 years) and 19 asymptomatic participants (38 hips) underwent 3-T MRI of the hip including a rapid 3D T1-weighted VIBE Dixon sequence of the pelvis (192 images acquired in 32 seconds). The automatic segmentation of MRI-based 3D bone models was performed using machine learning (convolutional neural network). The Dice similarity coefficient (DSC) was calculated for 98 hips to assess the overlap and difference in MRI-based 3D bone models with 5-fold cross-validation. Automatic segmentation was assessed with 16 patients (32 hips) from another institution for external validation. Impingement-free range of motion was compared between the manual and automatic segmentation of MRI-based 3D bone models (30 patients with FAI). Results: The difference between the manual and automatic segmentation of MRI-based 3D bone models was <1 mm (<0.6 mm for pelvic models and <0.5 mm for femoral models), and the DSC of the 30 patients with FAI was 94.1% (pelvis) and 97.0% (proximal femur); for the 19 asymptomatic participants, the difference was <0.5 mm, and the DSC was 95.5% and 97.5%, respectively.
The correlation for impingement-free flexion (r = 0.93; P < .001) was excellent (30 patients with FAI). The mean difference in flexion and internal rotation at 90° of flexion was 2.9°± 4.0° and 3.0°± 4.0°, respectively. The DSC of the 16 patients from another institution was 92.2% (pelvis) and 94.9% (proximal femur) for external validation. Conclusion: The automatic segmentation of MRI-based 3D bone models was as accurate as the manual segmentation of MRI-based 3D bone models for patients with FAI. It allows radiation-free and patient-specific preoperative surgical planning of hip preservation surgery and hip arthroscopic surgery for patients with FAI of childbearing age. The automatic segmentation of MRI-based 3D bone models using deep learning was feasible with routine MRI (3D T1-weighted VIBE Dixon sequence) with a short image acquisition time (<1 minute).
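The Dice similarity coefficient used in this study to compare manual and automatic segmentations has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on synthetic binary masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy "manual" vs "automatic" segmentations of the same structure.
manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True  # 36 pixels
auto   = np.zeros((10, 10), dtype=bool); auto[3:8, 2:8]   = True  # 30 pixels
print(round(dice(manual, auto), 4))  # 2*30/(36+30) = 0.9091
```

For the 3D bone models in the study, the same formula is applied voxel-wise to the segmented volumes; values above 0.9, like the 94–97% reported, indicate very close overlap.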
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
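The iterative closest point (ICP) calibration mentioned above alternates nearest-neighbour matching with a closed-form rigid alignment. A sketch of that alignment step (the Kabsch/SVD solution), here on synthetic points with known correspondences, which is the idealised case; full ICP would re-estimate correspondences each iteration:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    in the least-squares sense: the closed-form step inside each ICP
    iteration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate/translate a point cloud, then recover the transform.
rng = np.random.default_rng(1)
P = rng.standard_normal((50, 3))
th = 0.4
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([0.5, -1.0, 2.0])
R, t = kabsch(P, Q)
print(np.allclose(P @ R.T + t, Q, atol=1e-8))  # True
```

In the camera-to-CBCT calibration described in the abstract, the two point sets come from the depth camera and the CBCT surface reconstruction, and ICP supplies the correspondences that this step assumes.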
Automated 3D Building Model Reconstruction from Satellite Images Using Two-Stage Polygon Decomposition and Adaptive Roof Fitting
Digital surface models (DSMs) derived from high-resolution satellite imagery often contain mismatches, voids, and coarse building geometry, limiting their suitability for accurate and standardized 3D reconstruction. The scarcity of finely annotated samples further constrains generalization to complex structures. To address these challenges, an automated building reconstruction method based on two-stage polygon decomposition and adaptive roof fitting is proposed. Building polygons are first extracted and standardized to preserve primary contours while improving geometric regularity. A two-stage decomposition is then applied. In the first stage, polygons are coarsely decomposed, and redundant rectangles are removed by analyzing containment relationships. In the second stage, non-flat regions are identified and further decomposed to accommodate complex building connections. For 3D model fitting, flat-roof buildings are reconstructed by integrating structural analysis of DSM elevation distributions with adaptive rooftop partitioning, which enables accurate modeling of complex flat structures with auxiliary components. For non-flat roofs, a representative parameter space is defined and explored through systematic search and optimization to obtain precise fits. Finally, intersecting primitives are normalized and optimally merged to ensure structural coherence and standardized representation. Experiments on the US3D, MVS3D, and Beijing-3 datasets demonstrate that the proposed method achieves higher geometric accuracy and more standardized models, with an average IOU3 of 91.26%, RMSE of 0.78 m, and MHE of 0.22 m.
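The reported metrics can be made concrete on toy data, assuming IOU3 denotes 3D intersection-over-union between occupancy (voxel) grids and RMSE is computed over elevation differences; the grids and heights below are invented for the example:

```python
import numpy as np

def iou3(pred, gt):
    """3D intersection-over-union between two occupancy (voxel) grids."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def rmse(pred_heights, gt_heights):
    """Root-mean-square error between reconstructed and reference heights."""
    return float(np.sqrt(np.mean((pred_heights - gt_heights) ** 2)))

gt = np.zeros((4, 4, 4), dtype=bool);   gt[:2] = True        # 32 voxels
pred = np.zeros((4, 4, 4), dtype=bool); pred[:2, :2] = True  # 16 voxels, subset of gt
print(iou3(pred, gt))                                        # 16/32 = 0.5
print(rmse(np.array([10.0, 20.5]), np.array([10.5, 20.0])))  # 0.5
```

Against these definitions, the paper's averages (IOU3 of 91.26%, RMSE of 0.78 m, MHE of 0.22 m) indicate both high volumetric overlap and sub-metre elevation accuracy.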
Realistic 3D Object Generation Using Seam Aware Landmark Detectors with Texture and Lighting
The demand for high-quality and diverse 3D content has increased due to its applications in virtual reality, augmented reality, and 3D printing. Converting 2D images to 3D models is a challenging task requiring an understanding of depth, texture, and illumination. This study delves into a novel deep learning-based methodology for generating high-quality and accurate 3D models from 2D images. The proposed approach combines Generative Adversarial Networks (GANs) and Deep Marching Tetrahedra to synthesize complex 3D objects with realistic textures and lighting effects. Additionally, a 2D Texture Generator based on Random Noise and a GAN, as well as a Light Map Generator using Spectral Power Distribution Function and a GAN, are designed to enhance the visual appeal and realism of the generated 3D models. The paper presents a tri-model architecture incorporating a seam-aware landmark detector, which identifies heatmaps to ensure precise mapping of 2D textures onto the 3D model. This feature significantly improves the accuracy and quality of texture application by aligning key points from the 2D images with corresponding areas on the 3D geometry. Furthermore, the model employs point source lighting for light map generation, simulating realistic illumination effects that contribute to the final output’s visual richness. The proposed technique is evaluated, showcasing its superiority over existing methods in generating diverse and realistic 3D models. The study highlights the potential applications of the proposed technique in various domains, including computer graphics, virtual reality, architecture, and industrial design. The ability to generate accurate 3D models with diverse variations opens up exciting opportunities for design exploration and visualization. This work contributes transformative solutions for 3D object synthesis, 2D texture generation, and light map simulation, paving the way for advancements in 3D modeling and design.
3D textured model encryption via 3D Lu chaotic mapping
In the emerging Virtual/Augmented Reality (VR/AR) era, three-dimensional (3D) content will be popularized just as images and videos are today. The security and privacy of this 3D content should be taken into consideration. 3D content comprises surface models and solid models. Surface models include point clouds, meshes, and textured models. Previous work mainly focused on the encryption of solid models, point clouds, and meshes. This work focuses on the most complicated case, the 3D textured model. We propose a 3D Lu chaotic mapping based encryption method for 3D textured models. We encrypt the vertices, polygons, and textures of 3D models separately using the 3D Lu chaotic mapping. Then the encrypted vertices, polygons, and textures are composited together to form the final encrypted 3D textured model. The experimental results reveal that our method can encrypt and decrypt 3D textured models correctly. Furthermore, typical statistical and brute-force attacks can be resisted by the proposed method.
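For illustration, chaotic-map encryption of this kind can be sketched as a keystream cipher driven by the Lu system x' = a(y - x), y' = -xz + cy, z' = xy - bz. The integration step, burn-in length, and byte quantisation rule below are assumptions for the sketch, not the paper's actual scheme:

```python
import numpy as np

def lu_keystream(n, x0=0.1, y0=0.2, z0=0.3,
                 a=36.0, b=3.0, c=20.0, dt=0.001, burn=1000):
    """Generate a byte keystream from the Lu chaotic system via Euler
    integration; initial state and quantisation are illustrative choices."""
    x, y, z = x0, y0, z0
    out = np.empty(n, dtype=np.uint8)
    for i in range(burn + n):
        x, y, z = (x + dt * a * (y - x),
                   y + dt * (-x * z + c * y),
                   z + dt * (x * y - b * z))
        if i >= burn:  # discard the transient before sampling bytes
            out[i - burn] = int(abs(x) * 1e6) % 256
    return out

def xor_cipher(data, key):
    """Encrypt/decrypt bytes by XOR with the keystream (symmetric)."""
    return np.bitwise_xor(data, key)

vertices = np.frombuffer(b"3D textured model vertex data", dtype=np.uint8)
ks = lu_keystream(len(vertices))
enc = xor_cipher(vertices, ks)
dec = xor_cipher(enc, ks)
print(bytes(dec))  # b'3D textured model vertex data'
```

The sensitivity of the chaotic trajectory to its initial state (the key) is what gives such schemes their resistance to brute-force search; the paper applies the mapping separately to vertices, polygons, and textures.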