7,532 result(s) for "Inappropriateness"
Intercoder Reliability in Qualitative Research: Debates and Practical Guidelines
Evaluating the intercoder reliability (ICR) of a coding frame is frequently recommended as good practice in qualitative analysis. ICR is a somewhat controversial topic in the qualitative research community, with some arguing that it is an inappropriate or unnecessary step within the goals of qualitative analysis. Yet ICR assessment can yield numerous benefits for qualitative studies, which include improving the systematicity, communicability, and transparency of the coding process; promoting reflexivity and dialogue within research teams; and helping convince diverse audiences of the trustworthiness of the analysis. Few guidelines exist to help researchers negotiate the assessment of ICR in qualitative analysis. The current article explains what ICR is, reviews common arguments for and against its incorporation in qualitative analysis and offers guidance on the practical elements of performing an ICR assessment.
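The ICR assessment described above is commonly operationalised with a chance-corrected agreement statistic such as Cohen's kappa. A minimal pure-Python sketch (the coder labels below are invented for illustration, not from the article):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical codes on the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of segments where the coders agree.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independent coding, from marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders applying a three-theme coding frame to six segments.
a = ["theme1", "theme1", "theme2", "theme2", "theme3", "theme1"]
b = ["theme1", "theme2", "theme2", "theme2", "theme3", "theme1"]
kappa = cohens_kappa(a, b)
```

Kappa near 1 indicates agreement well beyond chance; values around 0.7 are often (though contentiously) treated as acceptable for a qualitative coding frame.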
How Can We Know What Language Models Know?
Recent work has presented intriguing results examining the knowledge contained in language models (LMs) by having the LM fill in the blanks of prompts such as “ ”. These prompts are usually manually created, and quite possibly sub-optimal; another prompt such as “ __ ” may result in more accurately predicting the correct profession. Because of this, given an inappropriate prompt, we might fail to retrieve facts that the LM knows, and thus any given prompt only provides a lower bound estimate of the knowledge contained in an LM. In this paper, we attempt to more accurately estimate the knowledge contained in LMs by automatically discovering better prompts to use in this querying process. Specifically, we propose mining-based and paraphrasing-based methods to automatically generate high-quality and diverse prompts, as well as ensemble methods to combine answers from different prompts. Extensive experiments on the LAMA benchmark for extracting relational knowledge from LMs demonstrate that our methods can improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what LMs know. We have released the code and the resulting LM Prompt And Query Archive (LPAQA) at .
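The mining- and paraphrasing-based methods produce several prompts per relation; the ensembling step that combines their answers can be as simple as weighted voting. A toy sketch of that combination idea, with invented prompts and answers (this is not the LPAQA code):

```python
from collections import Counter

def ensemble_answer(prompt_answers, prompt_weights=None):
    """Combine answers elicited from several paraphrased prompts by
    (optionally weighted) voting; the top-voted answer wins."""
    votes = Counter()
    weights = prompt_weights or {}
    for prompt, answer in prompt_answers.items():
        votes[answer] += weights.get(prompt, 1.0)
    return votes.most_common(1)[0][0]

# Hypothetical answers from three paraphrases of the same relational query.
answers = {
    "X works as a __": "lawyer",
    "X's profession is __": "lawyer",
    "By trade, X is a __": "doctor",
}
best = ensemble_answer(answers)
```

Weights would in practice be learned from how reliably each prompt retrieves known facts on held-out data.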
An Equivalence Approach to Balance and Placebo Tests
Recent emphasis on credible causal designs has led to the expectation that scholars justify their research designs by testing the plausibility of their causal identification assumptions, often through balance and placebo tests. Yet current practice is to use statistical tests with an inappropriate null hypothesis of no difference, which can result in equating nonsignificant differences with significant homogeneity. Instead, we argue that researchers should begin with the initial hypothesis that the data are inconsistent with a valid research design, and provide sufficient statistical evidence in favor of a valid design. When tests are correctly specified so that difference is the null and equivalence is the alternative, the problems afflicting traditional tests are alleviated. We argue that equivalence tests are better able to incorporate substantive considerations about what constitutes good balance on covariates and placebo outcomes than traditional tests. We demonstrate these advantages with applications to natural experiments.
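The equivalence-testing logic the abstract advocates is usually implemented as two one-sided tests (TOST): the null is a difference at least as large as an equivalence bound, and both one-sided tests must reject before balance is claimed. A minimal sketch under a normal approximation, with all numbers hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tost_p(diff, se, delta):
    """Two one-sided tests of equivalence: H0 |diff| >= delta vs H1 |diff| < delta.
    Returns the TOST p-value; a small p supports equivalence (good balance)."""
    p_lower = 1.0 - norm_cdf((diff + delta) / se)  # one-sided test of diff <= -delta
    p_upper = norm_cdf((diff - delta) / se)        # one-sided test of diff >= +delta
    return max(p_lower, p_upper)

# Hypothetical covariate balance: tiny difference, precisely estimated.
p_precise = tost_p(diff=0.02, se=0.05, delta=0.2)
# Same difference, noisily estimated: a traditional difference test would be
# "nonsignificant", yet equivalence cannot be claimed.
p_noisy = tost_p(diff=0.02, se=0.30, delta=0.2)
```

The contrast between `p_precise` and `p_noisy` illustrates the abstract's core point: a nonsignificant difference is not, by itself, evidence of a valid design.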
Misconceptions about multicollinearity in international business research
Collinearity between independent variables is a recurrent problem in quantitative empirical research in International Business (IB). We explore insufficient and inappropriate treatment of collinearity and use simulations to illustrate the potential impact on results. We also show how IB researchers doing quantitative work can avoid collinearity issues that lead to spurious and unstable results. Our six principal insights are the following: first, multicollinearity does not introduce bias. It is not an econometric problem in the sense that it would violate assumptions necessary for regression models to work. Second, variance inflation factors are indicators of standard errors that are too large, not too small. Third, coefficient instability is not a consequence of multicollinearity. Fourth, in the presence of a higher partial correlation between the variables, it can paradoxically become more problematic to omit one of these variables. Fifth, ignoring clusters in data can lead to spurious results. Sixth, accounting for country clusters does not pick up all country-level variation.
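The abstract's second insight, that variance inflation factors diagnose inflated standard errors rather than bias, can be illustrated by computing VIFs directly. A sketch assuming NumPy is available; the variables and data below are simulated for illustration, not taken from the paper:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (predictors only).
    VIF_j = 1 / (1 - R^2_j), regressing column j on the other columns."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.95 * x1 + 0.3 * rng.normal(size=500)  # nearly collinear with x1
x3 = rng.normal(size=500)                    # independent predictor
vifs = vif(np.column_stack([x1, x2, x3]))
```

Here the collinear pair gets large VIFs while the independent predictor's VIF stays near 1; the coefficients remain unbiased either way, as the abstract notes.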
Dealing with dynamic endogeneity in international business research
Dynamic endogeneity occurs when the current values of a study’s independent variables are affected by the past values of the dependent variables, which can lead to biased estimates. Our analysis of 80 empirical papers published in the Journal of International Business Studies uncovers cases of inappropriate treatment of dynamic endogeneity. Our simulations reveal factors that lead to bias in a fixed effects estimator and highlight the advantages and disadvantages of the system generalized method of moments estimator. We demonstrate our points with an empirical study of the impact of the international experience of managers and board members on firm internationalization. We show how using a fixed effects estimator rather than a generalized method of moments estimator can lead to differences in regression results. We also show the proper use of a generalized method of moments estimator.
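The fixed-effects bias the abstract's simulations explore (often called Nickell bias) is easy to reproduce: with a lagged dependent variable and unit fixed effects, the within estimator is biased downward in short panels. A toy simulation, with all parameter values invented:

```python
import random
import statistics

def within_estimate(rho=0.5, n_units=300, n_periods=5, seed=1):
    """Simulate a dynamic panel y_it = rho*y_{i,t-1} + alpha_i + eps_it and
    estimate rho with the within (fixed-effects) estimator."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_units):
        alpha = rng.gauss(0, 1)
        y = 0.0
        for _ in range(20):  # burn-in toward the unit's stationary distribution
            y = rho * y + alpha + rng.gauss(0, 1)
        series = [y]
        for _ in range(n_periods):
            y = rho * y + alpha + rng.gauss(0, 1)
            series.append(y)
        lagged, current = series[:-1], series[1:]
        # Within transformation: demean each unit's series to remove alpha_i.
        ml, mc = statistics.mean(lagged), statistics.mean(current)
        xs.extend(v - ml for v in lagged)
        ys.extend(v - mc for v in current)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

fe_short = within_estimate(n_periods=5)   # short panel: strong downward bias
fe_long = within_estimate(n_periods=30)   # longer panel: bias shrinks
```

The short-panel estimate falls well below the true rho of 0.5, which is the kind of distortion a system GMM estimator is designed to avoid.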
COVID-19 and the 5G Conspiracy Theory: Social Network Analysis of Twitter Data
Since the beginning of December 2019, the coronavirus disease (COVID-19) has spread rapidly around the world, which has led to increased discussions across online platforms. These conversations have also included various conspiracies shared by social media users. Amongst them, a popular theory has linked 5G to the spread of COVID-19, leading to misinformation and the burning of 5G towers in the United Kingdom. The understanding of the drivers of fake news and quick policies oriented to isolate and rebut misinformation are key to combating it. The aim of this study is to develop an understanding of the drivers of the 5G COVID-19 conspiracy theory and strategies to deal with such misinformation. This paper performs a social network analysis and content analysis of Twitter data from a 7-day period (Friday, March 27, 2020, to Saturday, April 4, 2020) in which the #5GCoronavirus hashtag was trending on Twitter in the United Kingdom. Influential users were analyzed through social network graph clusters. The size of the nodes was ranked by their betweenness centrality score, and the graph's vertices were grouped by cluster using the Clauset-Newman-Moore algorithm. The topics and web sources used were also examined. Social network analysis identified that the two largest network structures consisted of an isolates group and a broadcast group. The analysis also revealed that there was a lack of an authority figure who was actively combating such misinformation. Content analysis revealed that, of 233 sample tweets, 34.8% (n=81) contained views that 5G and COVID-19 were linked, 32.2% (n=75) denounced the conspiracy theory, and 33.0% (n=77) were general tweets not expressing any personal views or opinions. Thus, 65.2% (n=152) of tweets derived from nonconspiracy theory supporters, which suggests that, although the topic attracted high volume, only a handful of users genuinely believed the conspiracy.
This paper also shows that fake news websites were the most popular web source shared by users, although YouTube videos were also shared. The study also identified an account whose sole aim was to spread the conspiracy theory on Twitter. The combination of quick and targeted interventions oriented to delegitimize the sources of fake information is key to reducing their impact. Those users voicing their views against the conspiracy theory, link baiting, or sharing humorous tweets inadvertently raised the profile of the topic, suggesting that policymakers should persist in efforts to isolate opinions that are based on fake news. Many social media platforms provide users with the ability to report inappropriate content, which should be used. This study is the first to analyze the 5G conspiracy theory in the context of COVID-19 on Twitter, offering practical guidance to health authorities on how, in the context of a pandemic, rumors may be combated in the future.
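The graph metrics the study relies on are typically computed with a library such as networkx (whose `greedy_modularity_communities` implements the Clauset-Newman-Moore algorithm). As a self-contained illustration of the betweenness-centrality ranking, here is a compact pure-Python version of Brandes' algorithm run on an invented broadcast-style toy graph (the node names are hypothetical, not from the study's data):

```python
from collections import deque

def betweenness(graph):
    """Brandes' betweenness centrality for an unweighted, undirected graph
    given as {node: set(neighbours)}."""
    cb = {v: 0.0 for v in graph}
    for s in graph:
        stack, preds = [], {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in graph}; dist[s] = 0
        q = deque([s])
        while q:                                      # BFS from s
            v = q.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in graph}
        while stack:                                  # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    # Undirected graph: each pair of endpoints was counted twice.
    return {v: c / 2 for v, c in cb.items()}

# A broadcast group in miniature: one account retweeted by three others.
g = {
    "broadcaster": {"u1", "u2", "u3"},
    "u1": {"broadcaster"},
    "u2": {"broadcaster"},
    "u3": {"broadcaster"},
}
bc = betweenness(g)
```

The broadcaster sits on every shortest path between the peripheral accounts, so it dominates the betweenness ranking, exactly the pattern used to size nodes in the study's graphs.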
Unveiling the Everyday: Ethnographic observation of persons living with dementia in a long‐term care facility in Dubai, United Arab Emirates
Background The behaviours of four residents living with dementia were analysed using ethnographic observation techniques while they were receiving routine care in a long‐term care facility in Dubai, United Arab Emirates (UAE). The aim was to investigate if dementia care staff in Dubai are equipped with the right skills and resources to provide person‐centred dementia care to people living with dementia. Method The residents were observed during their waking hours from 7:00 to 21:00, for a time‐block of five minutes, followed by 10 minutes for note calibration, and another five minutes used for breaks before resuming observation for another block of five minutes. In total, 840 to 870 minutes of data from each resident were recorded and analysed using outputs from ©ATracker and Python on Microsoft Excel. Result Three of the residents spent the majority of their waking hours in a neutral state, showing minimal indicators of sensory stimulation, social engagement, or supervised independence. Self‐care practices were low or absent for all but one resident. While inappropriate or antisocial behaviours were rarely observed, the lack of meaningful engagement and proactive care highlights significant gaps in dementia care practices. Conclusion This study lays the groundwork for designing tailored interventions aimed at enhancing current dementia care practices by providing person‐centred dementia care and effectively responding to the needs and preferences of people living with dementia.
Explainability in music recommender systems
The most common way to listen to recorded music nowadays is via streaming platforms, which provide access to tens of millions of tracks. To assist users in effectively browsing these large catalogs, the integration of music recommender systems (MRSs) has become essential. Current real‐world MRSs are often quite complex and optimized for recommendation accuracy. They combine several building blocks based on collaborative filtering and content‐based recommendation. This complexity can hinder the ability to explain recommendations to end users, which is particularly important for recommendations perceived as unexpected or inappropriate. While pure recommendation performance often correlates with user satisfaction, explainability has a positive impact on other factors such as trust and forgiveness, which are ultimately essential to maintain user loyalty. In this article, we discuss how explainability can be addressed in the context of MRSs. We provide perspectives on how explainability could improve music recommendation algorithms and enhance user experience. First, we review common dimensions and goals of recommender explainability and, more generally, of eXplainable Artificial Intelligence (XAI), and elaborate on the extent to which these apply—or need to be adapted—to the specific characteristics of music consumption and recommendation. Then, we show how explainability components can be integrated within an MRS and in what form explanations can be provided. Since the evaluation of explanation quality is decoupled from pure accuracy‐based evaluation criteria, we also discuss requirements and strategies for evaluating explanations of music recommendations. Finally, we describe the current challenges for introducing explainability within a large‐scale industrial MRS and provide research perspectives.
On the stress potential of videoconferencing: definition and root causes of Zoom fatigue
As a consequence of lockdowns due to the coronavirus disease (COVID-19) and the resulting restricted social mobility, several billion people worldwide have recently had to replace physical face-to-face communication with computer-mediated interaction. Notably, the adoption rates of videoconferencing increased significantly in 2020, predominantly because videoconferencing resembles face-to-face interaction. Tools such as Zoom, Microsoft Teams, and Cisco Webex are used by hundreds of millions of people today. Videoconferencing may bring benefits (e.g., saving of travel costs, preservation of environment). However, prolonged and inappropriate use of videoconferencing may also have an enormous stress potential. A new phenomenon and term emerged, Zoom fatigue, a synonym for videoconference fatigue. This paper develops a definition for Zoom fatigue and presents a conceptual framework that explores the major root causes of videoconferencing fatigue and stress. The development of the framework draws upon media naturalness theory and its underlying theorizing is based on research published across various scientific fields, including the disciplines of both behavioral science and neuroscience. Based on this theoretical foundation, hypotheses are outlined. Moreover, implications for research and practice are discussed.
The New Statistics: Why and How
We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
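The estimation-based workflow the article recommends centres on reporting effect sizes with confidence intervals rather than p-values. A minimal sketch of Cohen's d with an approximate normal-theory CI (the two groups of scores below are invented sample data):

```python
import math
from statistics import mean, stdev

def cohens_d_with_ci(group1, group2, z=1.96):
    """Cohen's d for two independent groups, with an approximate 95% CI
    based on a normal approximation to the sampling distribution of d."""
    n1, n2 = len(group1), len(group2)
    s_pooled = math.sqrt(((n1 - 1) * stdev(group1) ** 2 +
                          (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2))
    d = (mean(group1) - mean(group2)) / s_pooled
    # Approximate standard error of d.
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical scores from two experimental conditions.
d, (lo, hi) = cohens_d_with_ci([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
```

Reporting the full interval (here one that spans zero despite a sizeable point estimate) communicates the precision of the estimate, which is precisely the information NHST discards.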