2,930 result(s) for "Object linking"
Design of New Test System for Proton Exchange Membrane Fuel Cell
A comprehensive test system for proton exchange membrane fuel cells (PEMFCs) is designed and developed for monitoring and controlling the inlet and outlet parameters and safety issues of fuel cells. Data acquisition and output instructions rely on the connection between a PLC (programmable logic controller) and OPC (object linking and embedding for process control). Based on a Siemens S7-200 series PLC and PID (proportional-integral-derivative) control, the relative-humidity error of the inlet air is held below 0.7%. Furthermore, a hydrogen recycling system and an alarm module are introduced to handle hydrogen or nitrogen solenoid-valve power failure, cooling-fan power failure, temperature anomalies, and hydrogen leakage. The developed test system is evaluated through an experimental investigation of PEMFC performance, and the results show good measurement and control performance. At a cell temperature of 40 °C, the polarization tests show enhanced performance at a high humidification temperature of 60 °C.
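As a rough illustration of the PID humidity control described in this abstract, the sketch below runs a minimal discrete PID loop against a toy integrating plant. The gains, time step, plant model, and setpoint are all hypothetical, not the paper's actual S7-200 tuning:

```python
# Minimal discrete PID loop on a toy plant. All parameters are
# illustrative assumptions; the paper does not publish its tuning.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.05, kd=0.1, setpoint=60.0)  # target 60% relative humidity
rh = 40.0                                           # initial relative humidity
for _ in range(1000):
    u = pid.update(rh, dt=0.1)
    rh += 0.1 * u    # crude integrating plant: humidity follows actuation
```

In the real system the `update` output would drive the humidifier actuator through the PLC rather than a simulated plant.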
Integrating Virtual Reality and Digital Twin in Circular Economy Practices: A Laboratory Application Case
The increasing awareness of customers toward climate change effects, the high demand instability affecting several industrial sectors, and the fast automation and digitalization of production systems are forcing companies to rethink their business strategies and models in view of both the Circular Economy (CE) and Industry 4.0 (I4.0) paradigms. Some studies have already assessed the relations between CE and I4.0, their benefits, and their barriers. However, a practical demonstration of their potential impact in real contexts is still lacking. The aim of this paper is to present a laboratory application case showing how I4.0-based technologies can support CE practices by virtually testing a waste electrical and electronic equipment (WEEE) disassembly plant configuration through a set of dedicated simulation tools. Our results highlight that service-oriented, event-driven processing and information models can support the integration of smart and digital solutions into current CE practices at the factory level.
Randomly distributed embedding making short-term high-dimensional data predictable
Future state prediction for nonlinear dynamical systems is a challenging task, particularly when only a few time series samples of high-dimensional variables are available from real-world systems. In this work, we propose a model-free framework, named randomly distributed embedding (RDE), to achieve accurate future state prediction from short-term high-dimensional data. Specifically, from the observed data of high-dimensional variables, the RDE framework randomly generates a sufficient number of low-dimensional “nondelay embeddings” and maps each of them to a “delay embedding,” which is constructed from the data of the target variable to be predicted. Each of these mappings can serve as a low-dimensional weak predictor of the future state, and together they generate a distribution of predicted future states. This distribution patches the association information from the various embeddings, whether unbiased or biased, into the whole dynamics of the target variable and, after being processed by appropriate estimation strategies, yields a stronger predictor that achieves more reliable and robust prediction. By applying the RDE framework to data from both representative models and real-world systems, we show that high dimensionality is no longer an obstacle but rather a source of information crucial to accurate prediction from short-term data, even in the presence of noise.
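The RDE idea can be caricatured in a few lines: many random low-dimensional "nondelay embeddings" each yield a weak one-step predictor, and their predictions are aggregated. The toy data, the linear embedding maps, and the mean aggregation below are all illustrative assumptions, not the paper's actual estimators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for short-term high-dimensional data: D observed
# variables, all driven by one slowly decaying latent signal.
D, T = 50, 40
latent = 0.95 ** np.arange(T)
loadings = rng.uniform(0.5, 1.5, size=D)
X = loadings[:, None] * latent[None, :] + 0.01 * rng.standard_normal((D, T))
target = X[0]                      # the variable to be predicted

def weak_predictor(idx, train_end):
    # One random low-dimensional "nondelay embedding" (variables `idx`
    # at time t) is mapped linearly to the target at t+1 -- a crude
    # stand-in for the framework's embedding maps.
    A = X[idx, :train_end - 1].T
    y = target[1:train_end]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(X[idx, train_end - 1] @ coef)

# Many random embeddings give a distribution of one-step predictions;
# aggregating it (here: the mean) forms the stronger predictor.
train_end = T - 1
preds = [weak_predictor(rng.choice(D, size=3, replace=False), train_end)
         for _ in range(200)]
prediction = float(np.mean(preds))
truth = float(target[train_end])
```

The paper's estimation strategies over the prediction distribution are more sophisticated than a plain mean; the sketch only shows the shape of the pipeline.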
LaMMOn: language model combined graph neural network for multi-target multi-camera tracking in online scenarios
Multi-target multi-camera tracking is crucial to intelligent transportation systems. Numerous recent studies have been undertaken to address this issue. Nevertheless, applying these approaches in real-world situations is challenging owing to the scarcity of publicly available data and the laborious processes of manually annotating new datasets and creating a tailored rule-based matching system for each camera scenario. To address this issue, we present a novel solution termed LaMMOn, an end-to-end transformer and graph neural network-based multi-camera tracking model. LaMMOn consists of three main modules: (1) a Language Model Detection (LMD) module for object detection; (2) a Language and Graph Model Association (LGMA) module for object tracking and trajectory clustering; and (3) a Text-to-embedding (T2E) module that overcomes the problem of data limitation by synthesizing object embeddings from defined texts. LaMMOn runs online in real-time scenarios and achieves competitive results on many datasets, e.g., CityFlow (HOTA 76.46%), I24 (HOTA 25.7%), and TrackCUIP (HOTA 80.94%), with an acceptable FPS (from 12.20 to 13.37) for an online application.
Spatiotemporal tubelet feature aggregation and object linking for small object detection in videos
This paper addresses the problem of exploiting spatiotemporal information to improve small-object detection precision in video. We propose a two-stage object detector called FANet based on short-term spatiotemporal feature aggregation and long-term object linking to refine object detections. First, we generate a set of short tubelet proposals. Then, we aggregate RoI-pooled deep features throughout the tubelet using a new temporal pooling operator that summarizes the information with a fixed output size independent of the tubelet length. In addition, we define a double-head implementation that we feed with spatiotemporal information for spatiotemporal classification and with spatial information for object localization and spatial classification. Finally, a long-term linking method builds long tubes from the previously calculated short tubelets to overcome detection errors. The association strategy addresses the generally low overlap between instances of small objects in consecutive frames by reducing the influence of the overlap in the final linking score. We evaluated our model on three different datasets with small objects, outperforming previous state-of-the-art spatiotemporal object detectors and our spatial baseline.
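The key property of the temporal pooling operator, a fixed output size regardless of tubelet length, can be sketched as follows. Mean-plus-max pooling over the frame axis is an assumption for illustration; the paper's exact operator may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def temporal_pool(tubelet_feats):
    """Summarize per-frame RoI features of a tubelet into one vector
    whose size does not depend on tubelet length. Mean+max pooling is
    an illustrative choice, not necessarily FANet's operator."""
    f = np.asarray(tubelet_feats)            # shape: (n_frames, feat_dim)
    return np.concatenate([f.mean(axis=0), f.max(axis=0)])

short_desc = temporal_pool(rng.random((3, 8)))   # 3-frame tubelet
long_desc = temporal_pool(rng.random((11, 8)))   # 11-frame tubelet
```

Both calls produce a descriptor of the same size, which is what lets the downstream heads consume tubelets of varying length.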
Steganalysis of LSB matching using differences between nonadjacent pixels
This paper models the messages embedded by spatial least significant bit (LSB) matching as independent noise added to the cover image, and reveals that the histogram of the differences between pixel gray values is smoothed by the stego bits even when the distance between the pixels is large. Using the characteristic function of the difference histogram (DHCF), we prove that the center of mass of the DHCF (DHCF COM) decreases after messages are embedded. Accordingly, DHCF COMs calculated from pixel pairs at different distances are used as distinguishing features. The features are calibrated with an image generated by an averaging operation and then used to train a support vector machine (SVM) classifier. The experimental results show that features extracted from the differences between nonadjacent pixels also help to detect LSB matching.
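A minimal sketch of the DHCF COM feature on synthetic data, assuming a half-spectrum center of mass and ±1 noise as a stand-in for LSB-matching embedding (the synthetic cover, distances, and calibration step are simplified away):

```python
import numpy as np

rng = np.random.default_rng(1)

def dhcf_com(img, d=1):
    # Histogram of horizontal pixel differences at distance d ...
    diff = img[:, d:].astype(int) - img[:, :-d].astype(int)
    hist, _ = np.histogram(diff, bins=np.arange(-256, 257))
    # ... its characteristic function (half-spectrum magnitude) ...
    cf = np.abs(np.fft.fft(hist))[: len(hist) // 2]
    # ... and the center of mass of that characteristic function.
    freqs = np.arange(len(cf))
    return float((freqs * cf).sum() / cf.sum())

# Smooth synthetic cover vs. a +/-1 (LSB-matching-like) stego version.
base = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1)
cover = np.clip(128 + base, 0, 255).astype(np.uint8)
noise = rng.choice([-1, 0, 1], size=cover.shape)
stego = np.clip(cover.astype(int) + noise, 0, 255).astype(np.uint8)
com_cover, com_stego = dhcf_com(cover), dhcf_com(stego)
```

Embedding widens and smooths the difference histogram, which concentrates the characteristic function at low frequencies, so the stego COM falls below the cover COM, exactly the monotone behavior the feature exploits.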
Fast re-OBJ: real-time object re-identification in rigid scenes
Re-identifying objects in a rigid scene across varying viewpoints (object Re-ID) is a challenging task, particularly when similar, or even identical, objects coexist in the same environment. Discriminative features undoubtedly play an essential role in addressing this challenge, while for practical deployment, real-time performance is another desired attribute. We therefore propose a novel framework, named Fast re-OBJ, that improves both Re-ID accuracy and processing speed via tight coupling between the instance segmentation module and the embedding generation module. The rich object encoding in the instance segmentation backbone is shared directly with the embedding generation module to train a more discriminative representation via a triplet network. Moreover, we create datasets from the segmentation outputs of real-time object detectors to train and evaluate our object embedding module. Through extensive experiments, we show that Fast re-OBJ improves object Re-ID accuracy by 5% and is 5× faster than state-of-the-art methods. The dataset and code repository are publicly available at: https://tinyurl.com/bdsb53c4
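The triplet network mentioned in the abstract is typically trained with a triplet margin loss; the sketch below uses hypothetical 2-D embeddings and margin, not Fast re-OBJ's actual settings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge on the gap between anchor-positive and anchor-negative
    # distances: same-instance pairs are pulled together, different
    # instances pushed at least `margin` further apart.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor: one view of an object instance
p = np.array([0.9, 0.1])   # positive: another view of the same instance
n = np.array([-1.0, 0.0])  # negative: a different instance
easy = triplet_loss(a, p, n)   # constraint already satisfied -> zero loss
hard = triplet_loss(a, n, p)   # constraint violated -> positive loss
```

Minimizing this loss over many such triplets is what makes the shared segmentation features discriminative enough to separate near-identical objects.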
Instanton R-matrix and W-symmetry
Abstract We study the relation between the W_{1+∞} algebra and the Arbesfeld-Schiffmann-Tsymbaliuk Yangian using the Maulik-Okounkov R-matrix. The central object linking these two pictures is the Miura transformation. Using the results of Nazarov and Sklyanin, we find an explicit formula for the mixed R-matrix acting on two Fock spaces associated to two different asymptotic directions of the affine Yangian. Using the free field representation, we propose an explicit identification of the Arbesfeld-Schiffmann-Tsymbaliuk generators with the generators of the Maulik-Okounkov Yangian. In the last part, we use the Miura transformation to give a conformal field theoretic construction of conserved quantities and ladder operators in the quantum mechanical rational and trigonometric Calogero-Sutherland models, on which a vector representation of the Yangian acts.
Novel translation knowledge graph completion model based on 2D convolution
The knowledge graph completion task involves predicting missing entities and relations in a knowledge graph. Many models have achieved good results, but they have become increasingly complex. In this study, we propose a simple translation-based model that relies on the assumption that the product of a subject and a relation is approximately equal to the corresponding object. First, we use embeddings to represent entities and relations. Second, we perform vector multiplication on the subject embedding and the relation embedding to generate a 2D matrix, achieving full fusion of the embeddings at the element level. Third, we apply a convolutional neural network to the 2D matrix. This generates feature maps, which are then concatenated into a 1D feature vector. The feature vector is transformed into the predicted object embedding through a fully connected operation. Finally, we use a scoring function to score the candidate triples. Experimental results demonstrate that the translation knowledge graph completion model based on 2D convolution achieves state-of-the-art results compared with the baselines.
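The pipeline this abstract walks through (outer-product fusion, 2D convolution, fully connected projection, candidate scoring) can be sketched with untrained random weights. The embedding size, the single 3×3 filter, and the dot-product score are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8  # hypothetical embedding size

def conv2d_valid(x, k):
    """Naive 'valid' 2D convolution, sufficient for a sketch."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

subject = rng.standard_normal(dim)
relation = rng.standard_normal(dim)
kernel = rng.standard_normal((3, 3))            # one untrained conv filter
W = rng.standard_normal(((dim - 2) ** 2, dim))  # fully connected weights

M = np.outer(subject, relation)        # element-level fusion -> 2D matrix
feat = conv2d_valid(M, kernel).ravel() # feature map -> 1D feature vector
pred_obj = feat @ W                    # predicted object embedding
candidate = rng.standard_normal(dim)
score = float(pred_obj @ candidate)    # score one candidate triple
```

In a trained model, `kernel` and `W` would be learned and candidates ranked by `score`; here the sketch only shows the data flow.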
A Comprehensive Hybrid Approach for Indoor Scene Recognition Combining CNNs and Text-Based Features
Indoor scene recognition is a computer vision task that identifies various indoor environments, such as offices, libraries, kitchens, and restaurants. This research area is particularly significant for applications in robotics, security, and assistance for individuals with disabilities, as it enables the categorization of spaces and the provision of contextual information. Convolutional Neural Networks (CNNs) are commonly employed in this field. While CNNs perform well in outdoor scene recognition by focusing on global features such as mountains and skies, they often struggle with indoor scenes, where local features like furniture and objects are more critical. In this study, the “MIT 67 Indoor Scene” dataset is used to extract and combine features from both a CNN and a text-based model that utilizes object recognition outputs, resulting in a two-channel hybrid model. The experimental results demonstrate that this hybrid approach, which integrates natural language processing and image processing techniques, improves the test accuracy of the image processing model by 8.3%, a notable success rate. Furthermore, this study contributes to new application areas in remote sensing, particularly indoor scene understanding and indoor mapping.
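The two-channel fusion described here can be sketched as simple feature concatenation. The vocabulary, detected-object list, feature sizes, and linear head below are all hypothetical stand-ins for the paper's CNN and text channels:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-ins: a CNN image descriptor and a bag-of-objects
# text vector built from object-recognition outputs for the same scene.
cnn_feat = rng.standard_normal(512)            # global visual features
vocab = ["chair", "desk", "bookshelf", "sink", "oven"]
detected = ["desk", "bookshelf", "bookshelf"]  # objects found in the image
text_feat = np.array([detected.count(w) for w in vocab], dtype=float)

# Two-channel fusion: concatenate both views, then feed a classifier
# (here an untrained linear head, purely for illustration).
fused = np.concatenate([cnn_feat, text_feat])
W = rng.standard_normal((fused.size, 3))       # 3 hypothetical scene classes
logits = fused @ W
scene = int(np.argmax(logits))
```

Training such a head jointly on both channels is what lets the local object evidence compensate for the CNN's weakness on indoor scenes.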