1,779 results for "RESEARCH EXCELLENCE"
The Strategic Development for Research Excellence in Thailand 4.0 of Postgraduate students Under Council of the Graduate Studies Administrations of Thailand (CGAT)
This paper aimed to conduct an exploratory factor analysis of research excellence in Thailand 4.0 and to develop a research strategy towards research excellence in Thailand 4.0 for postgraduate students under the Council of the Graduate Studies Administrations of Thailand in southernmost Thailand. A mixed-methods research design was applied: the quantitative phase began with exploratory factor analysis, and the qualitative phase used a focus group to develop the strategy. The quantitative sample comprised 530 students, set following the 1:10 item-to-respondent guideline for the 35 questionnaire items; the qualitative sample comprised 25 students. A questionnaire with a five-point rating scale was used to collect the quantitative data, while SWOT analysis, a position matrix, and a TOWS matrix were used in the focus group. Frequencies and exploratory factor analysis were computed using R version 3.6.2. The factor analysis yielded a seven-factor solution with eigenvalues greater than 1.0; the factors were named 1) research skill, 2) innovative thinking skill, 3) learning style skill, 4) communication skill, 5) digital skill, 6) academic writing skill, and 7) social and life skill. The SWOT analysis assessed strengths, weaknesses, opportunities, and threats, and the resulting data were cross-analysed in a TOWS matrix to derive the strategy types.
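The seven-factor solution "with eigenvalues greater than 1.0" is an application of the Kaiser retention criterion. A minimal sketch of that rule, using simulated Likert responses (the data below are invented, not the study's):

```python
import numpy as np

def kaiser_factor_count(responses: np.ndarray) -> int:
    """Count factors to retain under the Kaiser criterion:
    eigenvalues of the item correlation matrix greater than 1.0."""
    corr = np.corrcoef(responses, rowvar=False)   # items as columns
    eigenvalues = np.linalg.eigvalsh(corr)
    return int((eigenvalues > 1.0).sum())

# Hypothetical data: 530 respondents x 35 five-point rating items,
# matching the sample described in the abstract only in shape.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(530, 35)).astype(float)
print(kaiser_factor_count(responses))
```

With purely random responses the retained count is arbitrary; on real questionnaire data the criterion picks out the dominant common factors, as in the study's seven-factor result.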
The Role of Insurance in Reducing Losses from Extreme Events: The Need for Public–Private Partnerships
This paper describes the challenges that consumers, insurers and insurance regulators face in dealing with insurance for low-probability, high-consequence events. Given their limited experience with catastrophes, there is a tendency for all three parties often to engage in short-term intuitive thinking rather than long-term deliberative thinking when making these insurance-related decisions. Public–private partnerships can encourage investment in protective measures prior to a disaster, deal with affordability problems and provide coverage for catastrophic risks. Insurance premiums based on risk provide signals to residents and businesses as to the hazards they face and enable insurers to lower premiums for properties where steps have been taken to reduce risk. To address issues of equity and fairness, homeowners who cannot afford insurance could be given vouchers tied to loans for investing in loss reduction measures. The National Flood Insurance Program provides an opportunity to implement a public–private partnership that could eventually be extended to other extreme events.
Overcoming Barriers to Microinsurance Adoption: Evidence from the Field
This paper provides an overview of the academic literature on microinsurance adoption in emerging markets, with a particular emphasis on randomised controlled trials. I discuss what we know, what we can reasonably hope to know using the extensive work on microcredit as a comparator, and what the available evidence implies for public policy. Particular attention is paid to the case for a greater role for the government in supporting the development of microinsurance.
Can ChatGPT evaluate research quality?
This study assesses whether ChatGPT 4.0 is accurate enough to perform research evaluations on journal articles and so automate this time-consuming task. It tests the extent to which ChatGPT-4 can assess the quality of journal articles, using as a case study the published scoring guidelines of the UK Research Excellence Framework (REF) 2021 to create a research-evaluation ChatGPT. This was applied to 51 of my own articles and compared against my own quality judgements. ChatGPT-4 can produce plausible document summaries and quality-evaluation rationales that match the REF criteria. Its overall scores have weak correlations with my self-evaluation scores of the same documents (averaging r=0.281 over 15 iterations, with 8 being statistically significantly different from 0). In contrast, the average scores from the 15 iterations produced a statistically significant positive correlation of 0.509. Thus, averaging scores from multiple ChatGPT-4 rounds seems more effective than using individual scores. The positive correlation may be due to ChatGPT being able to extract the author's significance, rigour, and originality claims from inside each paper. If my weakest articles are removed, the correlation with average scores (r=0.200) falls below statistical significance, suggesting that ChatGPT struggles to make fine-grained evaluations. The data are self-evaluations of a convenience sample of articles from one academic in one field. Overall, ChatGPT does not yet seem accurate enough to be trusted for any formal or informal research quality evaluation task, and research evaluators, including journal editors, should take steps to control its use. This is the first published attempt at post-publication expert-review accuracy testing for ChatGPT.
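The finding that averaged scores correlate better than single iterations (r=0.509 vs an average of r=0.281) is the familiar noise-reduction effect of averaging repeated noisy ratings. A sketch with simulated scores (the numbers below are invented, not the paper's data) showing why the averaged vector correlates more strongly with the reference judgements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length score vectors."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Hypothetical setup: 51 articles with reference scores on a 1-4 scale,
# and 15 noisy "rater" passes simulated as truth plus Gaussian noise.
rng = np.random.default_rng(42)
self_scores = rng.integers(1, 5, size=51).astype(float)
iterations = [self_scores + rng.normal(0, 2.0, size=51) for _ in range(15)]

single_rs = [pearson_r(it, self_scores) for it in iterations]
avg_r = pearson_r(np.mean(iterations, axis=0), self_scores)
print(f"mean single-iteration r = {np.mean(single_rs):.2f}")
print(f"averaged-scores r      = {avg_r:.2f}")
```

Averaging 15 passes shrinks the noise variance by a factor of 15, so the averaged scores track the underlying signal far more closely than any single pass.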
Hyping the REF: promotional elements in impact submissions
The evaluation of research to allocate government funding to universities is now common across the globe. The Research Excellence Framework, introduced in the UK in 2014, marked a major change by extending assessment beyond the 'quality' of published research to include its real-world 'impact'. Impact submissions were a key determinant of the £4 billion allocated to universities following the exercise. The case studies supporting claims for impact are therefore a high-stakes genre, with writers keen to make the most persuasive argument for their work. In this paper we examine 800 of these 'impact case studies' from disciplines across the academic spectrum to explore the rhetorical presentation of impact. We do this by analysing authors' use of hyperbolic and promotional language to embroider their presentations, discovering substantial hyping with a strong preference for boosting the novelty and certainty of the claims made. Chemistry and physics, the most abstract and theoretical disciplines of our selection, contained the most hyping items, with fewer appearing as we move along the hard/pure to soft/applied continuum and the real-world value of the work becomes more apparent. We also show that hyping varies with the type of impact, with items targeting technological, economic and cultural areas the most prolific.
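Counting "hyping items" across a corpus reduces to matching a lexicon of promotional terms against each case study. A minimal sketch of that step, with an invented lexicon (the study's actual list of promotional items is not reproduced here):

```python
import re
from collections import Counter

# Hypothetical hype lexicon, for illustration only.
HYPE_TERMS = ["novel", "unique", "unprecedented", "groundbreaking",
              "first", "transformative", "world-leading"]

def hype_counts(text: str) -> Counter:
    """Count occurrences of each hype term (whole words, case-insensitive)."""
    words = Counter(re.findall(r"[a-z\-]+", text.lower()))
    return Counter({t: words[t] for t in HYPE_TERMS if words[t]})

case_study = ("Our novel, world-leading method is the first "
              "transformative approach to this unique problem.")
print(hype_counts(case_study))
```

Summing these counts per discipline, normalised by corpus size, is one simple way to compare hyping along the hard/pure to soft/applied continuum described above.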
Neoliberal competition in higher education today: research, accountability and impact
Drawing on Foucault's elaboration of neoliberalism as a positive form of state power, the ascendancy of neoliberalism in higher education in Britain is examined in terms of the displacement of public good models of governance, and their replacement with individualised incentives and performance targets, heralding new and more stringent conceptions of accountability and monitoring across the higher education sector. After surveying the defeat of the public good models, the article seeks to better understand the deployment of neoliberal strategies of accountability and then assess the role that these changes entail for the university sector in general. Impact assessment, I claim, represents a new, more sinister phase of neoliberal control. In the concluding section it is suggested that such accountability models are not incompatible with the idea of the public good and, as a consequence, a meaningful notion of accountability can be accepted and yet prized apart from its neoliberal rationale.
Learning from the UK’s research impact assessment exercise: a case study of a retrospective impact assessment exercise and questions for the future
National governments spend significant amounts of money supporting public research. However, in an era where the international economic climate has led to budget cuts, policymakers increasingly are looking to justify the returns from public investments, including in science and innovation. The so-called ‘impact agenda’ which has emerged in many countries around the world is part of this response; an attempt to understand and articulate for the public what benefits arise from the research that is funded. The United Kingdom is the most progressed in implementing this agenda and in 2014 the national research assessment exercise, the Research Excellence Framework, for the first time included the assessment of research impact as a component. For the first time within a dual funding system, funding would be awarded not only on the basis of the academic quality of research, but also on the wider impacts of that research. In this paper we outline the context and approach taken by the UK government, along with some of the core challenges that exist in implementing such an exercise. We then synthesise, together for the first time, the results of the only two national evaluations of the exercise and offer reflections for future exercises both in the UK and internationally.
Beyond Academia – Interrogating Research Impact in the Research Excellence Framework
Big changes to the way in which research funding is allocated to UK universities were brought about in the Research Excellence Framework (REF), overseen by the Higher Education Funding Council for England. Replacing the earlier Research Assessment Exercise, the purpose of the REF was to assess the quality and reach of research in UK universities, and to allocate funding accordingly. For the first time, this included an assessment of research 'impact', accounting for 20% of the funding allocation. In this article we use a text mining technique to investigate the interpretations of impact put forward via impact case studies in the REF process. We find that institutions have developed a diverse interpretation of impact, ranging from commercial applications to public and cultural engagement activities. These interpretations of impact vary from discipline to discipline and between institutions, with more broad-based institutions depicting a greater variety of impacts. Comparing the interpretations with the score given by REF, we found no evidence of one particular interpretation being more highly rewarded than another. Importantly, we also found a positive correlation between impact score and [overall research] quality score, suggesting that impact is not being achieved at the expense of research excellence.
The university research assessment dilemma: a decision support system for the next evaluation campaigns
Our study examines the UK's Research Excellence Framework 2021, employing an algorithmic method to mimic its outcomes (as expressed by its panel experts) and introducing a decision support system for evaluating research outputs. Using the CrossRef and Scopus databases and the Chartered Association of Business Schools' journal classification, we assessed bibliometric features, finding the citation-based algorithm most effective at producing results close to those of the REF panellists. Simulating panellists manually adjusting algorithmic paper classifications, our results closely align with actual evaluations, demonstrating the potential of algorithms to augment human assessments. We also show that the Grade Point Average metric may lead to evaluations that are far from those of the panellists and should be avoided.
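The caution about the Grade Point Average metric can be illustrated with a toy calculation (the profiles below are invented, not from the study): GPA collapses a whole quality profile into one weighted number, so profiles that panellists would judge very differently can score identically.

```python
def gpa(profile: dict) -> float:
    """Grade Point Average of a quality profile: shares of outputs at
    each star level (0-4), weighted by star value."""
    total = sum(profile.values())
    return sum(stars * share for stars, share in profile.items()) / total

# Two hypothetical institutional profiles with identical GPAs but very
# different shapes -- one reason a single GPA can hide what panels saw.
flat  = {4: 0.25, 3: 0.25, 2: 0.25, 1: 0.25}   # evenly spread quality
split = {4: 0.50, 3: 0.00, 2: 0.00, 1: 0.50}   # polarised quality
print(gpa(flat), gpa(split))
```

Both profiles return a GPA of 2.5, even though half of the second institution's outputs are world-leading and half are weak.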
Mapping scholarly books: library metadata and research assessment
This paper proposes an open-science-aligned approach that uses library metadata to evaluate individual books. I analyse the suitability of this approach for individual book assessment and visibility of national books in the library catalogues, to support responsible research evaluation. WorldCat metadata offers valuable insights for the evaluation of books, but the completeness of this metadata varies. Author, contributor, and publisher data require cleaning, while languages, years, formats, editions, and translations provide rich information. Open access data is currently lacking, and national book visibility in WorldCat depends heavily on contributions from national libraries and metadata suppliers. Encouraging national library engagement could boost the global visibility of domestic research. Further exploration is needed regarding long-term preservation, metadata ownership, and technical integration for effective standardisation and improved book evaluation.