Search Results
14,036 results for "3D model"
Tumor microenvironment signaling and therapeutics in cancer progression
Tumor development and metastasis are facilitated by the complex interactions between cancer cells and their microenvironment, which comprises stromal cells and extracellular matrix (ECM) components, among other factors. Stromal cells can adopt new phenotypes to promote tumor cell invasion. A deep understanding of the signaling pathways involved in cell‐to‐cell and cell‐to‐ECM interactions is needed to design effective intervention strategies that might interrupt these interactions. In this review, we describe the tumor microenvironment (TME) components and associated therapeutics. We discuss the clinical advances in the prevalent and newly discovered signaling pathways in the TME, the immune checkpoints and immunosuppressive chemokines, and currently used inhibitors targeting these pathways. These include both intrinsic and non‐autonomous tumor cell signaling pathways in the TME: protein kinase C (PKC) signaling, Notch and transforming growth factor (TGF‐β) signaling, the endoplasmic reticulum (ER) stress response, lactate signaling, metabolic reprogramming, and the cyclic GMP–AMP synthase (cGAS)–stimulator of interferon genes (STING) and Siglec signaling pathways. We also discuss recent advances in Programmed Cell Death Protein 1 (PD‐1), Cytotoxic T‐Lymphocyte Associated Protein 4 (CTLA4), T‐cell immunoglobulin mucin‐3 (TIM‐3) and Lymphocyte Activating Gene 3 (LAG3) immune checkpoint inhibitors, along with the C‐C chemokine receptor 4 (CCR4)–C‐C chemokine ligands 22 (CCL22) and 17 (CCL17), C‐C chemokine receptor type 2 (CCR2)–chemokine (C‐C motif) ligand 2 (CCL2), and C‐C chemokine receptor type 5 (CCR5)–chemokine (C‐C motif) ligand 3 (CCL3) chemokine signaling axes in the TME. In addition, this review provides a holistic understanding of the TME as we discuss the three‐dimensional and microfluidic models of the TME, which are believed to recapitulate the original characteristics of the patient tumor and hence may be used as a platform to study new mechanisms and screen for various anti‐cancer therapies. We further discuss the systemic influences of gut microbiota in TME reprogramming and treatment response. Overall, this review provides a comprehensive analysis of the diverse and most critical signaling pathways in the TME, highlighting the newest and most critical associated preclinical and clinical studies along with their underlying biology. We highlight the importance of the most recent microfluidic and lab‐on‐chip technologies for TME research and also present an overview of extrinsic factors, such as the inhabitant human microbiome, which have the potential to modulate TME biology and drug responses.
3D Model Generation and Reconstruction Using Conditional Generative Adversarial Network
Generative adversarial networks (GANs) have made significant progress in 3D model generation and reconstruction in recent years. GANs can generate 3D models by sampling from a uniform noise distribution, but the generation is random and often hard to control. To address this problem, we add class information to both the generator and the discriminator and construct a new network named 3D conditional GAN. Moreover, to better guide the generator in reconstructing a high-quality 3D model from a single image, we propose a new 3D model reconstruction network that integrates a classifier into the traditional system. Experimental results on the ModelNet10 dataset show that our method can effectively generate realistic 3D models corresponding to the given class labels, and the quality of 3D model reconstruction is improved considerably by the proposed method on the IKEA dataset.
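The class-conditioning idea summarised in this abstract can be illustrated with a minimal sketch, assuming PyTorch, a 32×32×32 voxel occupancy grid, and the ten ModelNet10 classes; the layer sizes and the label-embedding scheme below are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a class-conditional 3D GAN (illustrative only; layer sizes,
# the 32^3 voxel resolution, and the label embedding are assumptions).
import torch
import torch.nn as nn

NUM_CLASSES = 10   # e.g. ModelNet10
NOISE_DIM = 200

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            # input: noise + label embedding, treated as a 1x1x1 volume with many channels
            nn.ConvTranspose3d(NOISE_DIM + NUM_CLASSES, 256, 4, 1, 0), nn.BatchNorm3d(256), nn.ReLU(True),
            nn.ConvTranspose3d(256, 128, 4, 2, 1), nn.BatchNorm3d(128), nn.ReLU(True),
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.BatchNorm3d(64), nn.ReLU(True),
            nn.ConvTranspose3d(64, 1, 4, 2, 1), nn.Sigmoid(),   # 32x32x32 occupancy grid
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x.view(x.size(0), -1, 1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, 32 * 32 * 32)
        self.net = nn.Sequential(
            nn.Conv3d(2, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv3d(256, 1, 4, 1, 0), nn.Sigmoid(),
        )

    def forward(self, voxels, labels):
        # the label is broadcast as an extra voxel channel, mirroring the generator's conditioning
        label_vol = self.label_emb(labels).view(-1, 1, 32, 32, 32)
        return self.net(torch.cat([voxels, label_vol], dim=1)).view(-1)

# usage: g = Generator(); fake = g(torch.randn(4, NOISE_DIM), torch.randint(0, NUM_CLASSES, (4,)))
```

In this sketch the class label enters the generator as channels concatenated with the noise vector and the discriminator as a broadcast voxel channel, which is one common way to condition both networks on class information.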
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of the surgical tools even when occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
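The rigid registration step mentioned above (aligning the RGB-D point cloud to the CBCT coordinate frame via iterative closest point) can be sketched in a few lines. This is a generic point-to-point ICP with an SVD-based pose update, assuming NumPy and SciPy; it does not reproduce the Letter's calibration pipeline or parameters.

```python
# Generic point-to-point ICP sketch for rigidly aligning one point cloud to another.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=50, tol=1e-6):
    """Iteratively match nearest neighbours and re-estimate the rigid transform."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)                 # nearest neighbour in the target cloud
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```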
ArkaeVision VR Game: User Experience Research between Real and Virtual Paestum
The design of a virtual reality (VR) cultural application aims to support steps of the learning process, such as concrete experimentation, reflection and abstraction, which are generally difficult to induce when looking at ruins and artifacts that recall the past. With the use of virtual technologies (e.g., holographic surfaces, head-mounted displays, motion sensors), those steps are supported thanks to the immersiveness and natural interaction granted by such devices. VR can indeed help to symbolically recreate the context of life of cultural objects, presenting them in their original place of belonging, as they were when in use, for example, thereby increasing awareness and understanding of history. The ArkaeVision VR application takes advantage of storytelling and user experience design to tell the story of artifacts and sites of an important cultural heritage site of Italy, Paestum, creating a dramaturgy around them and relying upon historical and artistic content revised by experts. Visitors virtually travel into the temple dedicated to Hera II of Paestum, in the first half of the fifth century BC, wearing an immersive viewer (HTC Vive); here, they interact with the priestess Ariadne, a digital actor, who guides them on a virtual tour presenting the beliefs, values and habits of an ancient population of the Magna Graecia city. In the immersive VR application, memory is indeed influenced by the visitors' ability to proceed with the exploratory activity. Two evaluation sessions were planned and conducted to understand the effectiveness of the immersive experience, the usability of the virtual device and the learnability of the digital storytelling. Results revealed that the realism of the virtual reconstructions, the atmosphere and the “sense of the past” that pervades the whole VR cultural experience characterize the positive feedback of visitors, their emotional engagement and their interest in proceeding with the exploration.
System for Estimation of Human Anthropometric Parameters Based on Data from Kinect v2 Depth Camera
Anthropometric measurements of the human body are an important problem that affects many aspects of human life. However, anthropometric measurement often requires the application of an appropriate measurement procedure and the use of specialized, sometimes expensive measurement tools. Sometimes the measurement procedure is complicated, time-consuming, and requires properly trained personnel. This study aimed to develop a system for estimating human anthropometric parameters based on a three-dimensional scan of the complete body made with an inexpensive depth camera in the form of the Kinect v2 sensor. The research included 129 men aged 18 to 28. The developed system consists of a rotating platform, a depth sensor (Kinect v2), and a PC used to record the 3D data and to estimate individual anthropometric parameters. Experimental studies have shown that the precision of the proposed system is satisfactory for a significant proportion of the parameters; the largest error was found for the waist circumference parameter. The results obtained confirm that this method can be used in anthropometric measurements.
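As a hypothetical illustration of how one body measurement might be derived from such a scan, the sketch below slices a point cloud at a fixed height and takes the convex-hull perimeter of the slice as a circumference estimate. The slice height, slice thickness, and the convex-hull simplification are assumptions for illustration, not the procedure used in the study.

```python
# Hypothetical circumference estimate from a body point cloud (illustrative only).
import numpy as np
from scipy.spatial import ConvexHull

def circumference_at_height(points, height, thickness=0.01):
    """points: (N, 3) array in metres, y-up; returns the slice perimeter in metres."""
    band = points[np.abs(points[:, 1] - height) < thickness / 2]
    if len(band) < 3:
        raise ValueError("not enough points in the slice")
    xz = band[:, [0, 2]]                       # project the band onto the horizontal plane
    hull = ConvexHull(xz)
    ring = xz[hull.vertices]                   # hull vertices in counter-clockwise order
    edges = np.diff(np.vstack([ring, ring[:1]]), axis=0)   # close the loop
    return np.linalg.norm(edges, axis=1).sum()

# usage (assumed slice height): waist = circumference_at_height(cloud, height=1.05)
```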
Image based 3D city modeling: Comparative study
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: the first is sketch-based modeling, the second is procedural grammar-based modeling, the third is close-range photogrammetry-based modeling, and the fourth is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, and they follow different methods suitable for image-based 3D city modeling. A literature study shows that, to date, no such complete comparative study is available for creating a full 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences, gives a brief introduction to the strengths and weaknesses of these four image-based techniques, and offers comments on what can and cannot be done with each package. Finally, the study concludes that each package has advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results; for large city reconstruction, CityEngine is a good product; and Agisoft Photoscan creates a much better 3D model with good texture quality and automatic processing. This image-based comparative study is therefore useful for the 3D city user community and will provide a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using image-based techniques.
Applications of 3D City Models: State of the Art Review
In the last decades, 3D city models appear to have been predominantly used for visualisation; however, today they are being increasingly employed in a number of domains and for a large range of tasks beyond visualisation. In this paper, we seek to understand and document the state of the art regarding the utilisation of 3D city models across multiple domains based on a comprehensive literature study including hundreds of research papers, technical reports and online resources. A challenge in a study such as ours is that the ways in which 3D city models are used cannot be readily listed due to fuzziness, terminological ambiguity, unclear added-value of 3D geoinformation in some instances, and absence of technical information. To address this challenge, we delineate a hierarchical terminology (spatial operations, use cases, applications), and develop a theoretical reasoning to segment and categorise the diverse uses of 3D city models. Following this framework, we provide a list of identified use cases of 3D city models (with a description of each), and their applications. Our study demonstrates that 3D city models are employed in at least 29 use cases that are a part of more than 100 applications. The classified inventory could be useful for scientists as well as stakeholders in the geospatial industry, such as companies and national mapping agencies, as it may serve as a reference document to better position their operations, design product portfolios, and to better understand the market.
3D textured model encryption via 3D Lu chaotic mapping
In the emerging Virtual/Augmented Reality (VR/AR) era, three-dimensional (3D) content will be popularized just as images and videos are today, and the security and privacy of such 3D content should be taken into consideration. 3D content comprises surface models and solid models; surface models include point clouds, meshes and textured models. Previous work mainly focused on the encryption of solid models, point clouds and meshes. This work focuses on the most complicated case, the 3D textured model. We propose a 3D Lu chaotic mapping based encryption method for 3D textured models. We encrypt the vertices, polygons, and textures of 3D models separately using the 3D Lu chaotic mapping; the encrypted vertices, polygons and textures are then composited together to form the final encrypted 3D textured model. The experimental results reveal that our method can encrypt and decrypt 3D textured models correctly. Furthermore, typical statistical and brute-force attacks can be resisted by the proposed method.
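A minimal sketch of the keystream idea, assuming NumPy: the Lu chaotic system is integrated numerically, its trajectory is quantised into bytes, and the byte stream is XORed with the raw vertex data. The initial conditions, step size, and quantisation rule are illustrative assumptions, and the paper's separate handling of polygons and textures is not reproduced here.

```python
# Illustrative keystream from the 3D Lu chaotic system, used to mask vertex data.
import numpy as np

def lu_keystream(n_bytes, x0=0.1, y0=0.2, z0=0.3, a=36.0, b=3.0, c=20.0, dt=0.001, burn_in=3000):
    """Integrate the Lu system (Euler steps) and quantise the x component into bytes."""
    x, y, z = x0, y0, z0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(burn_in + n_bytes):
        dx, dy, dz = a * (y - x), -x * z + c * y, x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:
            out[i - burn_in] = int(abs(x) * 1e6) % 256   # assumed quantisation rule
    return out

def xor_vertices(vertices, key):
    """Symmetric masking of vertex coordinates: applying it twice with the same key decrypts."""
    raw = np.asarray(vertices, dtype=np.float32).tobytes()
    stream = lu_keystream(len(raw), *key)
    masked = np.frombuffer(raw, dtype=np.uint8) ^ stream
    return np.frombuffer(masked.tobytes(), dtype=np.float32).reshape(np.shape(vertices))

# usage: enc = xor_vertices(V, key=(0.1, 0.2, 0.3)); dec = xor_vertices(enc, key=(0.1, 0.2, 0.3))
```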
3D model retrieval based on interactive attention CNN and multiple features
3D (three-dimensional) models are widely applied in our daily life, in areas such as mechanical manufacturing, games, biochemistry, art, and virtual reality. With the exponential growth of 3D models on the web and in model libraries, there is an increasing need to retrieve the desired model accurately from a freehand sketch, and researchers are focusing on applying machine learning technology to 3D model retrieval. In this article, we combine a semantic feature, shape distribution features and a gist feature to retrieve 3D models based on an interactive attention convolutional neural network (CNN), with the purpose of improving the accuracy of 3D model retrieval. Firstly, 2D (two-dimensional) views are extracted from the 3D model at six different angles and converted into line drawings. Secondly, an interactive attention module is embedded into the CNN to extract semantic features, adding data interaction between two CNN layers; the interactive attention CNN extracts effective features from the 2D views, while the gist algorithm and the 2D shape distribution (SD) algorithm are used to extract global features. Thirdly, Euclidean distance is adopted to calculate the similarity of the semantic feature, the gist feature and the shape distribution feature between the sketch and each 2D view, and the weighted sum of the three similarities is used to compute the similarity between the sketch and the 2D view for retrieving the 3D model. This addresses the low accuracy of 3D model retrieval caused by poor extraction of semantic features. Nearest neighbor (NN), first tier (FT), second tier (ST), F-measure (E(F)), and discounted cumulated gain (DCG) are used to evaluate the performance of 3D model retrieval. Experiments are conducted on ModelNet40, and the results show that the proposed method outperforms others and is feasible for 3D model retrieval.
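The late-fusion scoring described in this abstract can be sketched as follows, assuming NumPy; the distance-to-similarity conversion and the weights for the semantic, gist, and shape-distribution features are illustrative assumptions rather than the values used in the article.

```python
# Sketch of weighted similarity fusion between a sketch and stored 2D views.
import numpy as np

def weighted_similarity(sketch_feats, view_feats, weights=(0.5, 0.3, 0.2)):
    """sketch_feats / view_feats: dicts with 'semantic', 'gist', 'sd' feature vectors."""
    score = 0.0
    for name, w in zip(("semantic", "gist", "sd"), weights):
        d = np.linalg.norm(np.asarray(sketch_feats[name]) - np.asarray(view_feats[name]))
        score += w / (1.0 + d)          # smaller Euclidean distance -> higher similarity
    return score

def retrieve(sketch_feats, model_views, top_k=5):
    """model_views: list of (model_id, view_feats); rank models by their best-view score."""
    best = {}
    for model_id, vf in model_views:
        s = weighted_similarity(sketch_feats, vf)
        best[model_id] = max(best.get(model_id, 0.0), s)
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```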
New Techniques and Methods for Modelling, Visualization, and Analysis of a 3D City
Recent years have seen vast development in new techniques and methods for modelling, visualization, and analysis of 3D digital cities, as the need for digital twins of the urban environment in different applications and simulations has increased dramatically. This special issue attempts to give an overview of recent progress and future tendencies of research activities in this domain. The special issue includes seven articles, with topics ranging from data acquisition and data processing to data modelling and applications. The experience gathered in this special issue indicates that 3D building models should contain semantic information for various applications, which in turn sets corresponding requirements for techniques and methods for 3D object detection and modelling.