Catalogue Search | MBRL
Explore the vast range of titles available.
4,800 results for "ACADEMIC RANKING"
Losing objectivity: The questionable use of surveys in the Global Ranking of Academic Subjects
The Academic Ranking of World Universities (ARWU) is one of the best-known university rankings, recognized for its objective and reproducible methodology. In contrast, the Global Ranking of Academic Subjects (GRAS), which ranks institutions by scientific subject and is also produced by Shanghai Ranking Consultancy (SRC), introduces methodological differences that depart from the ARWU's objectivity. This is due to the use of SRC's Academic Excellence Survey to define two of the GRAS's five indicators. Specifically, the Top indicator counts publications in journals that respondents identify as top tier in their field, and the Award indicator does the same for prizes. An examination of this survey suggests potential biases, especially in participant selection and journal identification, among which an Anglo-Saxon bias is prominent. Likewise, the selection of journals may in some cases be swayed by undisclosed conflicts of interest, such as involvement in the editorial committees that influence this selection. As a result, relying on surveys instead of adhering to established bibliometric standards can introduce inconsistencies and subjectivity, especially if the surveys are not rigorously conducted. Such methodologies put at risk the trustworthiness of tools that are crucial for university policymaking.
Journal Article
Aggregate ranking of the world's leading universities
by
Moskovkin, Vladimir M
,
Serkina, Olesya V
,
Peresypkin, Andrey P
in
Aggregates
,
Colleges & universities
,
Computer programs
2015
This article presents a methodology for calculating an Aggregated Global University Ranking (AGUR), which consists of an automated alignment of comparable lists of university names drawn from particular global university rankings (using machine learning and data mining algorithms) and a simple procedure for aggregating those rankings (summing each university's positions across the particular rankings and then re-ranking the sums). The second procedure makes it possible to bring lists of universities from particular rankings, which differ in length, to one size. The article includes a sample AGUR for six particular global university rankings as of 2013, as well as cross-correlation matrices and intersection matrices for the AGUR for 2011-2013, all produced with Python-based software.
Journal Article
Flawed Metrics, Damaging Outcomes: A Rebuttal to the RI2 Integrity Index Targeting Top Indonesian Universities
by
Iqhrammullah, Muhammad
,
Maula, Muhammad Fadhlal
,
Rampengan, Derren D. C. H
in
Analysis
,
Ethical aspects
,
Forecasts and trends
2025
The Research Integrity Risk Index (RI2), introduced as a tool to identify universities at risk of compromised research integrity, adopts an overly reductive methodology by combining retraction rates and delisted journal proportions into a single, equally weighted composite score. While its stated aim is to promote accountability, this commentary critiques the RI2 index for its flawed assumptions, lack of empirical validation, and disproportionate penalization of institutions in low- and middle-income countries. We examine how RI2 misinterprets retractions, misuses delisting data, and fails to account for diverse academic publishing environments, particularly in Indonesia, where many high-performing universities are unfairly categorized as “high risk” or “red flag.” The index’s uncritical reliance on opaque delisting decisions, combined with its fixed equal-weighting formula, produces volatile and context-insensitive scores that do not accurately reflect the presence or severity of research misconduct. Moreover, RI2 has gained significant media attention and policy influence despite being based on an unreviewed preprint, with no transparent mechanism for institutional rebuttal or contextual adjustment. By comparing RI2 classifications with established benchmarks such as the Scimago Institution Rankings and drawing from lessons in global development metrics, we argue that RI2, although conceptually innovative, should remain an exploratory framework. It requires rigorous scientific validation before being adopted as a global standard. We also propose flexible weighting schemes, regional calibration, and transparent engagement processes to improve the fairness and reliability of institutional research integrity assessments.
Journal Article
Applying quantified indicators in Central Asian science: can metrics improve the regional research performance?
Quantified indicators are increasingly used for performance evaluations in science sectors worldwide. However, relatively little information is available on the expanding use of research metrics in certain transition countries. Central Asia is a post-Soviet region whose newly independent states achieved lower research performance than comparators on key indicators of productivity and integrity. Most countries in the region showed an overall declining or stagnating research impact in the decade after 2008. This study discusses the implications of research metrics as applied to transition countries, based on the framework of the ten principles of the Leiden Manifesto. These principles can guide Central Asian policymakers in creating systems for a more objective evaluation of research performance based on globally recognized indicators. Given the local conditions of authoritarianism and corruption, broader use of transparent indicators in decision-making can help improve the standing of Central Asian science in international rankings.
Journal Article
Impact of scholarly output on university ranking
by
Mathew K., Susan
,
N.K., Sheeja
,
Cherukodan, Surendran
in
Academic libraries
,
Author productivity
,
Colleges & universities
2018
Purpose
This study aims to examine whether a relation exists between scholarly output and institutional ranking under the National Institutional Ranking Framework (NIRF) of India. The paper also aims to analyze and compare the parameters of NIRF with those of leading world university rankings.
Design/methodology/approach
The data for the study were collected through Web content analysis. Most of the data were collected from the official websites of NIRF, the Times Higher Education World University Rankings and the QS World University Rankings.
Findings
The study found that the parameters fixed for the assessment of Indian institutions under NIRF are on a par with those of other world university ranking agencies. Scholarly output is one of the major parameters of university ranking schemes. Indian universities that scored high for research productivity came top in NIRF and also figured in world university rankings. Universities from South India excel in NIRF, and there is a close relationship between scholarly productivity and institutional ranking.
Originality/value
Correlation between h-index and scholarly productivity has been dealt with in several studies. This paper is the first attempt to find the relationship between scholarly productivity and ranking of universities in India based on NIRF.
Journal Article
Ranking Web of Universities: Is Webometrics a Reliable Academic Ranking?
by
Ibrahim Shehatta
,
Khalid Mahmood
,
Abdullah M. Al-Rubaish
in
academic rankings
,
bibliometrics
,
correlation
2021
Global university rankings continue to attract growing interest and have high visibility among all stakeholders. Of these, the Webometrics Ranking (WR) faces much criticism of its function: some believe WR evaluates only universities' websites rather than their global performance and impact, as its authors claim. This prompted us to examine whether WR can serve as a reliable academic ranking of world universities. To test this hypothesis, we compared the WR results with two widely accepted references, namely global university rankings and bibliometrics. The WR ranks of the Top 100 institutions were therefore correlated with the corresponding values from the 2015 editions of six world ranking systems (ARWU, USNWR, QS, THE, NTU and URAP), which are commonly accepted for evaluating the academic performance of universities, as well as with objective bibliometric indicators gathered from Web of Science (WOS) InCites (Thomson Reuters). The findings revealed that the WR results correlate well with both the ranking systems' results and with 12 bibliometric variables, namely: WOS Documents, Times Cited, Citation Impact (CI), Citation Impact: Category Normalized (CNCI), Citation Impact: Journal Normalized (JNCI), Impact Relative to World, % of Top 1% Documents, % of Top 10% Documents, Highly Cited Papers, h-index, International Collaborations, and % Industry Collaborations. The consistency between WR and the six rankings studied increases with the weight of the research or bibliometric indicators in those rankings. Moreover, the consistency between WR and the survey-based rankings (USNWR, THE and QS) increases as the weight of the subjective reputation-survey indicators decreases. North American, and especially US, universities show extremely high visibility in WR as well as in the seven global rankings studied.
Thus, the web-based indicators ranking (WR) offers results of quality comparable to those of the six major global university rankings and can accordingly rank institutional academic performance. Its reliability could be further enhanced if each university maintained a single web domain that accurately reflects its actual performance and activity. We recommend that institutions apply all ranking systems together, since their criteria and indicators complement each other and can form a comprehensive index covering the various activities and functions of HEIs worldwide.
Journal Article
Are the best higher education institutions also more sustainable?
by
dos Santos, Celso Bilynkievycz
,
Wilhelm, Elizane Maria Siqueira
,
Pilatti, Luiz Alberto
in
Academic Rank (Professional)
,
Access to education
,
Addition
2025
Purpose
The purpose of this study is to analyze the integration of sustainable practices in the strategies and operations of world-class higher education institutions (HEIs) under the theoretical guidance of Max Weber's instrumental and value rationalities.
Design/methodology/approach
The results of the Quacquarelli Symonds World University Rankings, Times Higher Education World University Rankings, THE Impact Rankings and GreenMetric World University Ranking from 2019 to 2022 were paired, and the correlation between them was verified. Institutions that appeared in all four rankings in at least one of those years were also classified. A quantitative and qualitative methodology was used to explore how elite HEIs integrate sustainable practices into their operations and strategies, under the theoretical guidance of Max Weber's instrumental and value rationalities. Furthermore, multivariate regression models with supervised data-mining techniques were applied, using the SMOReg algorithm on 368 instances with multiple attributes, to predict the numerical value of sustainability in the rankings. Coefficients were assigned to variables to determine their relative importance in predicting rankings.
Findings
The results of this study suggest that although many HEIs demonstrate a commitment to sustainability, this rarely translates into improvements in traditional rankings, indicating a disconnect between sustainable practices and global academic recognition.
Research limitations/implications
The research has limitations, including the analysis being restricted to data from specific rankings between 2019 and 2022, which may limit generalization to future editions or rankings. The predictive models used selected data and, therefore, cannot cover the full complexity of metrics from other rankings. Furthermore, internal factors of HEIs were not considered, and the correlations identified do not imply direct causality. The limited sample and potential methodological biases, together with the heterogeneity of the rankings, restrict the generalization of the results. These limitations should be considered in future studies.
Practical implications
The theoretical contributions of this study include an in-depth understanding of the intersection between academic excellence and environmental and social responsibility. From a management perspective, guidance is provided on integrating sustainability into HEI strategies to enhance visibility and classification in global rankings, while maintaining academic integrity and commitment to sustainability.
Social implications
This highlights the importance of reassessing academic rankings criteria to include sustainability assessments, thereby encouraging institutions to adopt practices that genuinely contribute to global sustainable development.
Originality/value
The originality lies in the predictive analysis between these rankings, examining the link between the level of sustainability of an HEI and its classification as a World Class University. Furthermore, it combines theories of rationality with the analysis of sustainability integration in elite HEIs, introducing new analytical perspectives that can influence future educational policies and institutional practices.
Journal Article
Role of citation and non-citation metrics in predicting the educational impact of textbooks
by
Sotudeh, Hajar
,
Abbaspour, Javad
,
Maleki, Ashraf
in
Academic disciplines
,
Authority
,
Bibliometrics
2024
Purpose
The main objective of the present study is to determine the role of citation-based metrics (PageRank and HITS authority and hub scores) and non-citation metrics (Goodreads readers, reviews and ratings, and textbook edition counts) in predicting the educational ranks of textbooks.
Design/methodology/approach
The rankings of 1,869 academic textbooks of various disciplines indexed in Scopus were extracted from the Open Syllabus Project (OSP) and compared with normalized counts of Scopus citations; PageRank, authority and hub (HITS) scores in the Scopus book-to-book citation network; Goodreads ratings and reviews; review sentiment scores; and WorldCat book editions.
Findings
Prediction of the educational rank of scholarly syllabus books ranged from 32% in technology to 68% in philosophy, psychology and religion. WorldCat editions in social sciences, medicine and technology, Goodreads ratings in humanities, and book-citation-network authority scores in law and political science accounted for the strongest predictions of the educational score. Thus, each of the indicators of editions, Goodreads ratings and book-citation authority score alone can indicate the rank of academic textbooks, and, used in combination, they explain the educational uptake of books even better.
Originality/value
This is the first study examining the role of citation indicators and of Goodreads readers, reviews and ratings in predicting the OSP rank of academic books.
Journal Article