Search Results

95 result(s) for "answer selection"
Chinese Medical Question Answer Matching Using End-to-End Character-Level Multi-Scale CNNs
This paper focuses mainly on the problem of Chinese medical question answer matching, which is arguably more challenging than open-domain question answer matching in English due to the combination of its domain-restricted nature and the language-specific features of Chinese. We present an end-to-end character-level multi-scale convolutional neural framework in which character embeddings instead of word embeddings are used to avoid Chinese word segmentation in text preprocessing, and multi-scale convolutional neural networks (CNNs) are then introduced to extract contextual information from either question or answer sentences over different scales. The proposed framework can be trained with minimal human supervision and does not require any handcrafted features, rule-based patterns, or external resources. To validate our framework, we create a new text corpus, named cMedQA, by harvesting questions and answers from an online Chinese health and wellness community. The experimental results on the cMedQA dataset show that our framework significantly outperforms several strong baselines, and achieves an improvement of top-1 accuracy by up to 19%.
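The multi-scale convolution described above can be sketched in miniature: filters of several window sizes slide over character embeddings, and max-over-time pooling yields one feature per scale. This is an illustrative toy (trivial sum filters, hand-picked 2-dimensional embeddings), not the paper's trained model.

```python
def conv1d_max(embs, window):
    """Slide a trivial sum 'filter' of the given window size over a
    sequence of character-embedding vectors, then max-pool over time."""
    feats = []
    for i in range(len(embs) - window + 1):
        feats.append(sum(sum(v) for v in embs[i:i + window]))
    return max(feats)

def multi_scale_features(embs, windows=(1, 2, 3)):
    """One pooled feature per convolution scale (window size)."""
    return [conv1d_max(embs, w) for w in windows]

# toy 2-dim character embeddings for a 4-character sentence
chars = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.4], [0.2, 0.2]]
print(multi_scale_features(chars))
```

Working directly on characters avoids the word-segmentation step that a word-level Chinese pipeline would need.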
Chinese medical question answer selection via hybrid models based on CNN and GRU
Question answer selection in the Chinese medical field is very challenging since it requires effective text representations to capture the complex semantic relationships between Chinese questions and answers. Recent deep learning approaches, e.g., CNN and RNN, have shown their potential in improving the selection quality. However, these existing methods can only capture a part or one side of the semantic relationships while ignoring the other rich and sophisticated ones, leading to limited performance improvement. In this paper, a series of neural network models are proposed to address the Chinese medical question answer selection issue. In order to model the complex relationships between questions and answers, we develop both single and hybrid models with CNN and GRU to combine the merits of different neural network architectures. This differs from existing works, which can only capture partial relationships by utilizing a single network structure. Extensive experimental results on the cMedQA dataset demonstrate that the proposed hybrid models, especially BiGRU-CNN, significantly outperform the state-of-the-art methods. The source code of our models is available on GitHub (https://github.com/zhangyuteng/MedicalQA-CNN-BiGRU).
Multi-view pre-trained transformer via hierarchical capsule network for answer sentence selection
Answer selection requires technology that effectively captures in-depth semantic information between the question and the corresponding answer. Most existing studies focus on using linear or pooling operations to directly classify the output representation, resulting in the absence of critical information and the emergence of single-label predictions. To address these issues, we propose a novel Multi-view Pre-trained Transformer with Hierarchical Capsule Network (MPT-HCN). Specifically, we propose a Hierarchical Capsule Network composed of three capsule networks to independently process high-dimensional sparse information of words, semantic information of similar expressions, and feature classification information so that multiple attributes can be fully considered and accurately clustered. Moreover, we consider the impact of the intermediate encoder layer output information on the overall sequence semantic representation and propose a Multi-view Information Fusion that obtains the final semantic representation information by weighted fusion of the output information of all encoder layers, thereby avoiding the appearance of a single prediction label. Extensive experiments on five typical representative datasets, especially on the WikiQA dataset, show that our model MPT-HCN (RL) achieves an excellent performance of 0.939 on MAP and 0.942 on MRR, which is a significant improvement of 3.9% and 2.7% respectively, compared to the state-of-the-art baseline model.
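MAP and MRR, the two ranking metrics quoted in these abstracts, can be computed as follows. This is a generic sketch of the standard definitions (labels are 1 for a correct answer, 0 otherwise, in the model's ranked order), not code from any of the cited papers.

```python
def average_precision(labels):
    """Precision at each relevant rank, averaged over relevant items."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            score += hits / rank
    return score / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    return sum(average_precision(l) for l in ranked_lists) / len(ranked_lists)

def mean_reciprocal_rank(ranked_lists):
    """Reciprocal rank of the first correct answer, averaged over queries."""
    total = 0.0
    for labels in ranked_lists:
        for rank, rel in enumerate(labels, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

queries = [[1, 0, 1], [0, 1, 0]]  # two queries, three ranked candidates each
print(mean_average_precision(queries), mean_reciprocal_rank(queries))
```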
Co-attention fusion based deep neural network for Chinese medical answer selection
Answer selection is one of the most important subtasks in Chinese medical question-answering systems. To obtain the representations of question and answer, an attractive method is to use an attentive-pooling-based deep neural network. However, this method suffers from the over-pooling problem: it generates attentive information by using only the related medical keywords and neglects the local semantic information of sentences. In this paper, a novel co-attention fusion based deep neural network method is proposed. Our method solves the over-pooling problem by fusing local semantic information with attentive information. Because of the fusion mechanism, the proposed method tends to obtain more useful information for pooling and produce better representations for question and answer. For comparison, we create a new Chinese medical answer selection dataset on the epilepsy theme (i.e., cEpilepsyQA), on which our method performs much better than the state-of-the-art methods. The proposed method also achieves competitive results on the public Chinese medical answer selection datasets cMedQA v1.0 and v2.0.
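The co-attention idea underlying this family of models can be sketched as a score matrix between question and answer token vectors, with a softmax turning each row into attention weights over the answer. The vectors below are toy values, and the dot-product scoring is a simplification of the paper's trained attention.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def co_attention(q_vecs, a_vecs):
    """M[i][j] = q_i . a_j; row-wise softmax gives, for each question
    token, attention weights over the answer tokens."""
    M = [[sum(qc * ac for qc, ac in zip(q, a)) for a in a_vecs]
         for q in q_vecs]
    return [softmax(row) for row in M]

q = [[1.0, 0.0], [0.0, 1.0]]          # two question token vectors
a = [[1.0, 0.0], [0.5, 0.5]]          # two answer token vectors
weights = co_attention(q, a)
```

Fusing these attentive weights with local (e.g., convolutional) features is what lets the method avoid pooling away everything except the keywords.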
Testing the List Order Response Effect Among Respondents With Cognitive Sophistication: Experimental Evidence in Management Information Systems Research
Questionnaires constitute a valuable data-collection tool in Management Information Systems (MIS) research. However, MIS researchers have identified various biases in the design and implementation of questionnaires. This paper focuses on the bias resulting from the order of items in the answer choices, called list order bias. Such bias is described through a framework of cognitive theories, including the cognitive elaboration model, memory limitation hypothesis, and satisficing theory. Previous literature has shown that satisficing theory is superior in explaining list order bias; therefore, that theory is adopted for this study. Satisficing theory posits that respondents provide a satisfactory rather than an optimal answer when a survey question requires cognitive effort. Previous research has shown that satisficing is triggered by respondents' cognitive abilities to complete the questionnaire and, therefore, it is predominant among less educated respondents. However, the extent to which satisficing behaviors could occur, even among respondents with higher education and cognitive abilities, still needs to be ascertained. This is particularly important for MIS studies that investigate information systems' adoption at the organizational level because they rely mostly on respondents who are information technology (IT) managers. Therefore, this study adopts the satisficing theory to examine the list order response effect among cognitively sophisticated respondents in the MIS field. The authors selected and manipulated a question from the Society for Information Management's (SIM) IT Trends Study web-based questionnaire to conduct such an analysis. The SIM IT Trends Study survey questions offer a lengthy list of answer options to SIM members who are IT managers inside organizations that operate in various business sectors.
The authors created two versions of the same list question: one presented the options in alphabetical order and the other in reverse-alphabetical order. The findings show statistically significant empirical evidence of list order bias by revealing that, despite their cognitive sophistication, respondents were more likely to choose the first available answer, especially in the reverse-alphabetical condition. In light of these findings, the authors propose remedies to reduce such respondents' satisficing behaviors. In particular, researchers could break questions with long lists into several questions with short lists and then combine those responses into the answer selection list of a final question. Researchers could also present the answer selection lists to half of the sample alphabetically and to the other half in reverse order and then combine the two subsamples into the final possible responses. Alternatively, researchers could use "trigger" or "priming" statements before displaying the question and its answer selection list to reduce the questionnaire's difficulty. In summary, this study addresses the list order response bias among respondents with cognitive sophistication in MIS research, explains why this bias occurs by employing satisficing theory, and provides remedies for reducing its occurrence. Hence, this manuscript contributes to MIS research by providing insights to improve the quality of questionnaires by minimizing satisficing behaviors that lead to list order bias, and it makes MIS practitioners aware of the possible influence of question design when they respond to questionnaires.
Refined Answer Selection Method with Attentive Bidirectional Long Short-Term Memory Network and Self-Attention Mechanism for Intelligent Medical Service Robot
Answer selection, as a crucial method for intelligent medical service robots, has become more and more important in natural language processing (NLP). However, there are still some critical issues in the answer selection model. On the one hand, the model lacks semantic understanding of long questions because of noise information in a question–answer (QA) pair. On the other hand, some researchers combine two or more neural network models to improve the quality of answer selection. However, these models focus on the similarity between questions and answers without considering background information. To this end, this paper proposes a novel refined answer selection method, which uses an attentive bidirectional long short-term memory (Bi-LSTM) network and a self-attention mechanism to solve these issues. First of all, this paper constructs the required knowledge-based text as background information and converts the questions and answers from words to vectors, respectively. Furthermore, the self-attention mechanism is adopted to extract the global features from the vectors. Finally, an attentive Bi-LSTM network is designed to address long-distance dependent learning problems and calculate the similarity between the question and answer with consideration of the background knowledge information. To verify the effectiveness of the proposed method, this paper constructs a knowledge-based QA dataset including multiple medical QA pairs and conducts a series of experiments on it. The experimental results reveal that the proposed approach could achieve impressive performance on the answer selection task and reach an accuracy of 71.4%, MAP of 68.8%, and decrease the BLEU indicator to 3.10.
Interactive knowledge-enhanced attention network for answer selection
Answer selection, which aims to select the most appropriate answers from a set of candidate answers, plays a crucial role in various applications such as question answering (QA) and information retrieval. Recently, remarkable progress has been achieved on matching sequence pairs by deep neural networks. However, most of them focus on learning semantic representations for the contexts of QA pairs while the background information and facts beyond the context are neglected. In this paper, we propose an interactive knowledge-enhanced attention network for answer selection (IKAAS), which interactively learns the sentence representations of query–answer pairs by simultaneously considering the external knowledge from knowledge graphs and textual information of QA pairs. In this way, we can exploit the semantic compositionality of the input sequences and capture more comprehensive knowledge-enriched intra-document features within the question and answer. Specifically, we first propose a context-aware attentive mechanism to learn the knowledge representations guided by the corresponding context. The relations between the question and answer are then captured by computing the question–answer alignment matrix. We further employ self-attention to capture the global features of the input sequences, which are then used to calculate the relevance score of the question and answer. Experimental results on four real-life datasets demonstrate that IKAAS outperforms the compared methods. In addition, a series of analyses shows the robust superiority and the extensive applicability of the proposed method.
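The self-attention step several of these abstracts mention can be sketched as scaled dot-product attention in which a sequence attends to itself. This toy version uses the raw token vectors as queries, keys, and values (no learned projections), so it only illustrates the mechanism, not any cited model.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Q = K = V = X; out_i = sum_j softmax_j(q_i.k_j / sqrt(d)) * v_j,
    so each output is a weighted mix of the whole sequence."""
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(qc * kc for qc, kc in zip(q, k)) / math.sqrt(d)
                  for k in X]
        w = softmax(scores)
        out.append([sum(w[j] * X[j][c] for j in range(len(X)))
                    for c in range(d)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy token vectors
attended = self_attention(tokens)
```

Because every output position mixes information from every input position, this is how such models capture the "global features" of a sentence.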
Educational QA System-Oriented Answer Selection Model Based on Focus Fusion of Multi-Perspective Word Matching
Question-answering systems have become an important tool for learning and knowledge acquisition. However, current answer selection models often rely on representing features using whole sentences, which neglects individual words and loses important information. To address this challenge, this paper proposes a novel answer selection model based on focus fusion of multi-perspective word matching. First, according to the different combination relationships between sentences, a word-level focus distribution is obtained from the serial, parallel, and transfer matching perspectives. Then, the sentence's key position information is inferred from its focus distribution. Finally, a method of aligning key information points is designed to fuse the focus distribution for each perspective, which yields a match score for each candidate answer to the question. Experimental results show that the proposed model significantly outperforms the fine-tuned Transformer encoder model based on contextual embeddings, achieving 4.07% and 5.51% increases in MAP and 1.63% and 4.86% increases in MRR, respectively.
Bi-directional LSTM Model with Symptoms-Frequency Position Attention for Question Answering System in Medical Domain
Online medical intelligent question answering systems play an increasingly important role as a supplement to traditional medical service systems. Their purpose is to provide quick and concise feedback on users' questions through natural language. The technical challenges mainly lie in symptom semantic understanding and the representation of users' descriptions. Although the performance of phrase-level and numerous attention models has improved, the lexical gap and position information are not emphasized enough. This paper combines word2vec and the Chinese Ci-Lin (a dictionary that plays an auxiliary role alongside word2vec when processing Chinese; https://www.ltp-cloud.com/download) to propose a synonyms-subject replacement mechanism (i.e., mapping common words to kernel words) and realize the normalization of the semantic representation. Meanwhile, based on the bi-directional LSTM model, this paper introduces a method combining adaptive weight assignment techniques and positional context, enhancing attention to the typical symptoms of the disease. More attention weight is given to neighboring words, and we propose the Bi-directional Long Short Term Memory Model with Symptoms-Frequency Position Attention (BLSTM-SFPA). The good performance of the BLSTM-SFPA model has been demonstrated in comparative experiments on the medical-field datasets MED-QA and GD-QA.
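The "more attention weight to neighboring words" idea can be illustrated with a position-based weighting scheme: the weight of each word decays with its distance from a focus position (e.g., a symptom keyword) and the weights are normalized to sum to one. The exponential decay and its rate below are illustrative choices, not the paper's learned attention.

```python
import math

def position_weights(seq_len, focus, decay=0.5):
    """Attention weight for each position: exponential decay with
    distance from the focus word, normalized to a distribution."""
    raw = [math.exp(-decay * abs(i - focus)) for i in range(seq_len)]
    s = sum(raw)
    return [r / s for r in raw]

# five-word sentence, symptom keyword at position 2
w = position_weights(5, focus=2)
print(w)
```

Weights like these can then scale the hidden states of a Bi-LSTM so that context near the symptom dominates the pooled representation.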
A Novel Bidirectional LSTM and Attention Mechanism based Neural Network for Answer Selection in Community Question Answering
Deep learning models have been shown to have great advantages in answer selection tasks. The existing models, which employ an encoder-decoder recurrent neural network (RNN), have been demonstrated to be effective. However, the traditional RNN-based models still suffer from limitations such as 1) high-dimensional data representation in natural language processing and 2) biased attentive weights for subsequent words in traditional time-series models. In this study, a new answer selection model is proposed based on the Bidirectional Long Short-Term Memory (Bi-LSTM) network and an attention mechanism. The proposed model is able to generate more effective question-answer pair representations. Experiments on a question answering dataset that includes information from multiple fields show the great advantages of our proposed model. Specifically, we achieve a maximum improvement of 3.8% over the classical LSTM model in terms of mean average precision.
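The final selection step shared by most of the models above can be sketched independently of the encoder: once question and candidate answers are pooled into vectors, score each pair by cosine similarity and pick the argmax. The vectors below are toy pooled representations, not outputs of any cited network.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_answer(q_vec, candidates):
    """Index of the candidate whose vector best matches the question."""
    scores = [cosine(q_vec, c) for c in candidates]
    return max(range(len(scores)), key=scores.__getitem__)

q = [0.9, 0.1]                                   # pooled question vector
answers = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]   # pooled answer vectors
print(select_answer(q, answers))                 # → 1
```

Training such models usually pushes the correct answer's vector toward the question's (e.g., with a margin ranking loss), so that this simple argmax works at inference time.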