62 result(s) for "Lin, Ching-Sheng"
Smart technology–driven aspects for human-in-the-loop smart manufacturing
Industry 4.0 has led to paradigm shifts in how manufacturing processes are planned and developed. To successfully embrace this revolution and confront its numerous challenges, manufacturing enterprises have to cope with the need for technological advancement and provide sustainable training and education for their workforce. This great transformation affects not only the integration of the digital and physical environments but, most importantly, also the relationships between humans and manufacturing sites. In this paper, we account for the human in the loop of these digitalization challenges and present the 3I aspects (Intellect, Interaction, and Interface) through which factories can increase the adoption of smart technologies toward the smart manufacturing vision. The Intellect aspect aims to add knowledge to the manufacturing equipment. The Interaction aspect targets the collaboration between humans and manufacturing equipment. The Interface aspect explores appropriate means for humans to exploit the intelligence of these technologies when communicating with the manufacturing equipment. In addition to the concept-related propositions, we also present a set of selected application examples to illustrate how the proposed aspects can drive new growth opportunities for enterprises.
Data Twin-Driven Cyber-Physical Factory for Smart Manufacturing
Because of the complex production processes and technology-intensive operations that take place in the aerospace and defense industry, introducing Industry 4.0 into the manufacturing processes of aircraft composite materials is inevitable. Digital Twin and Cyber-Physical Systems in Industry 4.0 are key techniques for developing digital manufacturing. Since it is very difficult to create high-fidelity virtual models, the development of digital manufacturing is challenging for aircraft manufacturers. In this study, we provide a view from a data simulation perspective and adopt machine learning approaches to simplify the high-fidelity virtual models in Digital Twin. We call this novel concept the Data Twin, and the deployable service that supports the simulation the Data Twin Service (DTS). Relying on the DTS, we also propose a microservice software architecture, the Cyber-Physical Factory (CPF), to simulate the shop floor environment. Additionally, the CPF contains two war rooms that together establish a collaborative platform: the Physical War Room integrates real data, while the Cyber War Room handles simulation data and the results of the CPF.
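The abstract describes no implementation, so purely as a hedged illustration of the Data Twin idea, the sketch below trains a machine-learned surrogate on input/output samples of a high-fidelity simulator so the cheap model can stand in for the expensive one. The simulator function, the three process parameters, and the gradient-boosting regressor are all assumptions made for this sketch, not the paper's actual setup.

```python
# Minimal sketch of the "Data Twin" idea: replace an expensive high-fidelity
# simulator with a machine-learned surrogate trained on its input/output data.
# The simulator below is a hypothetical stand-in; the paper publishes no code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def high_fidelity_model(x):
    # Placeholder for a costly physics simulation (hypothetical).
    return np.sin(x[:, 0]) * np.exp(-0.1 * x[:, 1]) + 0.05 * x[:, 2]

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(2000, 3))   # illustrative process parameters
y = high_fidelity_model(X)              # "ground truth" from the simulator

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
data_twin = GradientBoostingRegressor().fit(X_train, y_train)  # the surrogate

print("surrogate R^2 on held-out simulator data:", data_twin.score(X_test, y_test))
```

Once fitted, the surrogate answers queries in microseconds rather than simulator runtimes, which is what makes serving it behind a microservice such as the DTS plausible.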
Adapting Static and Contextual Representations for Policy Gradient-Based Summarization
Considering the ever-growing volume of electronic documents in our daily lives, the need for efficient tools to capture their gist grows as well. Automatic text summarization, the process of shortening long text and extracting its valuable information, has been of great interest for decades. Due to the difficulty of semantic understanding and the requirement for large training data, this research field remains challenging and worth investigating. In this paper, we propose an automated text summarization approach that adapts static and contextual representations within an extractive framework to address these research gaps. To better capture the semantics of the given text, we explore combining static embeddings from GloVe (Global Vectors) with contextual embeddings from BERT (Bidirectional Encoder Representations from Transformers)- and GPT (Generative Pre-trained Transformer)-based models. To reduce human annotation costs, we employ policy gradient reinforcement learning to perform unsupervised training. We conduct empirical studies on the public Gigaword dataset. The experimental results show that our approach achieves promising performance and is competitive with various state-of-the-art approaches.
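As a rough, non-authoritative sketch of the policy gradient component, the snippet below runs REINFORCE over an extractive selection policy. The random sentence vectors stand in for the GloVe/BERT/GPT features the abstract describes, and the document-similarity reward is an illustrative unsupervised proxy, not the authors' exact objective.

```python
# REINFORCE sketch for unsupervised extractive selection, assuming sentence
# vectors are precomputed (e.g., GloVe + BERT/GPT features, as in the paper).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_sents, dim, k = 12, 64, 3              # sentences, feature size, extract size
sent_vecs = torch.randn(n_sents, dim)    # stand-in for real sentence features
doc_vec = sent_vecs.mean(dim=0)          # simple document representation

policy = torch.nn.Linear(dim, 1)         # scores each sentence
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):
    probs = torch.softmax(policy(sent_vecs).squeeze(-1), dim=0)
    idx = torch.multinomial(probs, k)    # sample an extract of k sentences
    summary_vec = sent_vecs[idx].mean(dim=0)
    # Unsupervised reward proxy: how well the extract preserves the document.
    reward = F.cosine_similarity(summary_vec, doc_vec, dim=0).detach()
    log_prob = torch.log(probs[idx]).sum()
    loss = -reward * log_prob            # REINFORCE: maximize expected reward
    opt.zero_grad(); loss.backward(); opt.step()
```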
Multi-label mental health classification in social media posts with multi-perspective prompt ensemble and auxiliary self-supervision
Anxiety and depression have become major global health concerns. With the rapid rise of social media, people increasingly share emotions and personal struggles through posts, which often convey multiple mental states simultaneously. To address this multi-label classification challenge in mental health texts, this study proposes a multi-task framework with two main modules, a multi-perspective prompt design module and a perturbation-based self-supervised learning module, built on a pre-trained language model backbone. Prompts from sociological, psychological, and educational perspectives are used to enhance semantic understanding. To improve model robustness, we formulate self-supervised auxiliary tasks in which the model predicts whether a sentence has undergone an insertion, swap, or deletion. Experiments on the MultiWD dataset, covering six wellness dimensions, show that our method outperforms all baselines. Furthermore, ablation studies explore the impact of different training configurations and confirm the critical contributions of both proposed modules.
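The insertion/swap/deletion auxiliary tasks lend themselves to a small sketch: the function below perturbs a sentence with one randomly chosen operation and returns that operation as the self-supervised label. The [MASK] insertion token and the example sentence are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of the perturbation-based auxiliary tasks: apply a random insertion,
# swap, or deletion and keep the operation name as a self-supervised label.
import random

random.seed(0)
OPS = ["insert", "swap", "delete"]

def perturb(tokens):
    op = random.choice(OPS)
    tokens = tokens.copy()
    if op == "insert":
        tokens.insert(random.randrange(len(tokens) + 1), "[MASK]")
    elif op == "swap" and len(tokens) > 1:
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    elif op == "delete" and len(tokens) > 1:
        tokens.pop(random.randrange(len(tokens)))
    return tokens, op    # (perturbed sentence, auxiliary label)

sent = "i have been feeling anxious about work lately".split()
print(perturb(sent))
```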
Multiphysics Simulation of the NASA SIRIUS-CAL Fuel Experiment in the Transient Test Reactor Using Griffin
After approximately 50 years, NASA is restarting efforts to develop nuclear thermal propulsion (NTP) for interplanetary missions. Building upon nuclear engine tests performed from the late 1950s to the early 1970s, the present research and testing focuses on advanced materials and fabrication methods. A number of transient tests have been performed to evaluate materials performance under high-temperature, high-flux conditions, with several more experiments in the pipeline for future testing. The measured data obtained from those tests are being used to validate the Griffin reactor multiphysics code for this particular type of application. Griffin was developed at Idaho National Laboratory (INL) using the MOOSE framework. This article describes the simulation results of the SIRIUS-CAL calibration experiment in the Transient Reactor Test Facility (TREAT). SIRIUS-CAL was the first transient test conducted on NASA fuels, and although the test was performed with a relatively low core peak power, the test specimen survived a temperature exceeding 900 K. Griffin simulations of the experiment successfully matched the reactor’s power transient after calibrating the initial control rod position to match the initial reactor period. The thermal-hydraulics model largely matches the time-dependent response of a thermocouple located within the experiment specimen to within the uncertainty estimate. However, the uncertainty range is significant and must be reduced in the future.
Parallel Communicating Finite Automata: Productiveness and Succinctness
Parallel Communicating Finite Automata (PCFA) extend classical finite automata by enabling multiple automata to operate in parallel and communicate upon request, capturing essential aspects of parallel and distributed computation. This model is relevant for studying complex systems such as computer networks and multi-agent environments. In this paper, we explore two key aspects of PCFA: their undecidability and their descriptional complexity. We first show that deterministic PCFA of degree 2 (DPCFA(2)) can accept a set of valid computations of a deterministic Turing machine, leading to the undecidability of restricted versions of emptiness and universality problems. Additionally, we employ the concept of productiveness (a stronger form of non-recursive enumerability) to demonstrate that these problems are not only undecidable but also unprovable. Second, we investigate the descriptional complexity of PCFA and establish non-recursive trade-offs between different PCFA models and many classes of language descriptors, such as DFAs and subclasses of regular expressions, offering new insights into their computational and structural properties.
Few-Shot Learning for Misinformation Detection Based on Contrastive Models
With the development of social media, the amount of fake news has risen significantly, with great impact on both individuals and society. The restrictions imposed by censors make the objective reporting of news difficult. Most studies use supervised methods that rely on a large amount of labeled data for fake news detection, which limits detection effectiveness when such labels are scarce. Moreover, these studies focus on detecting fake news in a single modality, either text or images, whereas actual fake news more often takes the form of text–image pairs. In this paper, we introduce a self-supervised model grounded in contrastive learning. The model extracts features for text and images simultaneously, matching them by dot product. Through contrastive learning, it strengthens image feature extraction, yielding robust visual features with reduced training data requirements. The model's effectiveness was assessed against the baseline on the COSMOS fake news dataset. The experiments reveal that, when detecting fake news with mismatched text–image pairs, the model reaches 80% accuracy using only approximately 3% of the data for training, equivalent to 95% of the original model's performance trained on the full dataset. Notably, replacing the text encoding layer enhances experimental stability, providing a substantial advantage over the original model on the COSMOS dataset.
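As a hedged sketch of the general technique the abstract names, the snippet below computes a CLIP-style contrastive (InfoNCE) loss over dot-product text–image similarities. The linear encoders, random features, and 0.07 temperature are placeholders, not the paper's actual networks or hyperparameters.

```python
# Contrastive text-image matching via dot products: matched pairs sit on the
# diagonal of the similarity matrix and are pulled together by cross-entropy.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, txt_dim, img_dim, emb = 8, 128, 256, 64
text_feats = torch.randn(batch, txt_dim)    # placeholder text features
image_feats = torch.randn(batch, img_dim)   # placeholder image features

text_enc = torch.nn.Linear(txt_dim, emb)
image_enc = torch.nn.Linear(img_dim, emb)

t = F.normalize(text_enc(text_feats), dim=-1)
v = F.normalize(image_enc(image_feats), dim=-1)
logits = t @ v.T / 0.07                     # dot-product similarity matrix
labels = torch.arange(batch)                # matched pairs on the diagonal
loss = (F.cross_entropy(logits, labels) +       # text -> image direction
        F.cross_entropy(logits.T, labels)) / 2  # image -> text direction
print(float(loss))
```

At inference, a mismatched text–image pair shows up as a low dot-product score relative to matched pairs, which is the signal such a detector thresholds on.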
A hybrid model for the detection of multi-agent written news articles based on linguistic features and BERT
Large language models (LLMs) are central to AI systems and excel in natural language processing tasks. They blur the line between human- and machine-generated text and are widely used by professional writers across domains, including news article generation. Detecting LLM-written articles therefore introduces novel obstacles concerning misuse and the generation of fake content. In this work, we aim to recognize two kinds of LLM-written news: one type is entirely generated by LLMs, and the other is paraphrased from existing news sources. We propose a neural network model that incorporates linguistic features and BERT contextual embedding features for LLM-written news article detection. In conjunction with the proposed model, we also produce a news article corpus based on the BBC dataset, generating and paraphrasing news articles through multi-agent cooperation using ChatGPT. Our model obtains 96.57% accuracy and a 96.44% macro-F1 score, outperforming other existing models and indicating its capability to help readers identify LLM-written news articles. To assess the model's robustness, we also construct another corpus based on the BBC dataset using a different language model, Claude, and demonstrate that our detection model again achieves strong results. Furthermore, we apply our model to text generation detection in the medical domain, where it also delivers promising performance.
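A minimal sketch of the hybrid idea, assuming an illustrative three-feature linguistic set (the abstract does not list the paper's exact features): concatenate handcrafted features with the BERT [CLS] embedding and feed the result to a small classifier.

```python
# Hybrid sketch: BERT [CLS] embedding + handcrafted linguistic features -> MLP.
# The three features below are illustrative stand-ins for the paper's set.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def linguistic_features(text):
    words = text.split()
    return torch.tensor([len(words),                            # sentence length
                         sum(len(w) for w in words) / len(words),  # avg word length
                         text.count(",")], dtype=torch.float)      # comma count

def hybrid_vector(text):
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        cls = bert(**enc).last_hidden_state[:, 0, :].squeeze(0)  # [CLS] embedding
    return torch.cat([cls, linguistic_features(text)])

classifier = torch.nn.Sequential(
    torch.nn.Linear(768 + 3, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 2))  # human-written vs. LLM-written

logits = classifier(hybrid_vector("Officials confirmed the report on Monday."))
```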
Dual Siamese transformer-encoder-based network for remaining useful life prediction
Accurately predicting the capacity and remaining useful life (RUL) of lithium-ion batteries is crucial for their reliable and safe operation. In this research, we propose a dual Siamese transformer-encoder-based network consisting of two subnetworks to improve RUL prediction capability and broaden applicability. The first subnetwork, autoTrans, adopts a transformer-encoder architecture to form an autoencoder structure for feature extraction. The second subnetwork, regTrans, is a transformer-encoder-based regressor that takes the feature encodings from autoTrans as inputs and makes the RUL prediction. To enhance the model's robustness, both subnetworks employ the Siamese architecture to handle raw and noisy inputs. A joint training strategy is applied to autoTrans and regTrans to optimize the proposed approach. The experimental verification is conducted on the NASA battery dataset, on which our model achieves the best average results across different evaluation criteria. Furthermore, we also apply our model to the turbofan engine dataset and demonstrate promising performance there as well.
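As an illustrative sketch of the joint training strategy (the Siamese raw/noisy twin branches are omitted for brevity), the snippet below wires a transformer-encoder autoencoder into a transformer-encoder regressor and optimizes a combined reconstruction plus RUL loss. All shapes and the equal loss weighting are assumptions for this sketch.

```python
# Joint training sketch: autoencoder (autoTrans-like) feeds a regressor
# (regTrans-like); one optimizer step minimizes reconstruction + RUL error.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, feat, d_model = 32, 8, 64
x = torch.randn(16, seq_len, feat)    # batch of degradation sequences
rul = torch.rand(16, 1)               # normalized RUL targets

proj = nn.Linear(feat, d_model)
enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
auto_trans = nn.TransformerEncoder(enc_layer, num_layers=2)    # feature extractor
decoder = nn.Linear(d_model, feat)                             # reconstruction head
reg_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
reg_trans = nn.TransformerEncoder(reg_layer, num_layers=2)     # regressor trunk
head = nn.Linear(d_model, 1)

params = [*proj.parameters(), *auto_trans.parameters(), *decoder.parameters(),
          *reg_trans.parameters(), *head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

enc = auto_trans(proj(x))
recon_loss = nn.functional.mse_loss(decoder(enc), x)       # autoencoder objective
pred = head(reg_trans(enc).mean(dim=1))                    # pooled RUL prediction
reg_loss = nn.functional.mse_loss(pred, rul)
loss = recon_loss + reg_loss                               # joint objective
opt.zero_grad(); loss.backward(); opt.step()
```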
Efficient Parameterization for Knowledge Graph Embedding Using Hierarchical Attention Network
In the domain of knowledge graph embedding, conventional approaches typically transform entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs that contain vast numbers of entities and relations. In particular, resource-intensive embeddings often lead to increased computational costs and may limit scalability and adaptability in practical environments, such as low-resource settings or real-world applications. This paper explores an approach to knowledge graph representation learning that leverages small reserved entity and relation sets for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions. The ablation studies further highlight the impact and contribution of each component in the proposed hierarchical attention structure.
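A hedged, single-level simplification of this idea (the paper's attention is hierarchical, which this sketch flattens): each entity keeps only a tiny query vector and is represented as an attention-weighted mixture over a small reserved codebook, so the full-width embedding table is never materialized. All sizes below are illustrative assumptions.

```python
# Parameter-efficient embedding sketch: entities attend over a small reserved
# set of shared vectors instead of owning a full-width embedding row each.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_entities, n_reserved, dim = 100_000, 64, 128

reserved = torch.nn.Parameter(torch.randn(n_reserved, dim))  # shared codebook
queries = torch.nn.Parameter(torch.randn(n_entities, 16))    # tiny per-entity query
key_proj = torch.nn.Linear(16, dim, bias=False)

def embed(entity_ids):
    q = key_proj(queries[entity_ids])                 # (batch, dim)
    attn = F.softmax(q @ reserved.T / dim ** 0.5, dim=-1)
    return attn @ reserved                            # mixture of reserved vectors

vecs = embed(torch.tensor([0, 42, 99_999]))
print(vecs.shape)   # torch.Size([3, 128])
```

Under these illustrative sizes, a full table would hold 100,000 × 128 ≈ 12.8M parameters, while the factored version holds roughly 100,000 × 16 + 64 × 128 + 16 × 128 ≈ 1.6M, which is the kind of saving the abstract's parameter-efficiency claim refers to.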