Catalogue Search | MBRL
Explore the vast range of titles available.
6,740 result(s) for "Marginalized groups"
The challenges of institutionalizing community-level social accountability mechanisms for health and nutrition: a qualitative study in Odisha, India
2018
Background
India has been at the forefront of innovations in social accountability mechanisms for improving the delivery of public services, including health and nutrition. Yet little is known about how such initiatives are faring now that they have been incorporated formally into government programmes and implemented at scale, which gives greater impetus to understanding their effectiveness. This formative qualitative study focuses on how such mechanisms have sought to strengthen community-level nutrition and health services (the Integrated Child Development Services and the National Rural Health Mission) in the state of Odisha. It fills a gap in the literature by examining how such initiatives operate once institutionalised at scale. The primary research questions were ‘what kinds of community-level mechanisms are functioning in randomly selected villages in three districts of the state of Odisha?’ and ‘how are they perceived to function by their members and frontline workers?’
Methods
The study is based on focus group discussions with pregnant women and mothers of children below the age of 2 (n = 12) and with women’s self-help groups (n = 12), and on interviews with frontline health workers (n = 24) and with members of community committees (n = 36). Interviews were analysed thematically using a priori coding derived from the wider literature on key accountability themes.
Results
Four main types of community-based mechanisms were examined: Mothers’ committees, Jaanch committees, Village Health and Sanitation Committees and self-help groups. Their effectiveness varied with their ability to offer meaningful avenues for members’ participation and to empower women for autonomous action. Notably, community participation in most of these mechanisms is very weak, with committees largely controlled by the very frontline workers they are supposed to hold to account. Self-help groups, by contrast, showed genuine autonomy and collective power. Despite having no explicit accountability role, these groups were effective in advocating for better service delivery and for the broader needs of their members, to a degree not seen in the institutional committees.
Conclusions
The study indicates that community-level mechanisms in India must adequately address the participation and empowerment of community members if they are to contribute successfully to service improvements in health and nutrition.
Journal Article
Foundation models for generalist medical artificial intelligence
by Topol, Eric J.; Abad, Zahra Shakeri Hossein; Leskovec, Jure
in Artificial Intelligence
2023
The exceptionally rapid development of highly flexible, reusable artificial intelligence (AI) models is likely to usher in newfound capabilities in medicine. We propose a new paradigm for medical AI, which we refer to as generalist medical AI (GMAI). GMAI models will be capable of carrying out a diverse set of tasks using very little or no task-specific labelled data. Built through self-supervision on large, diverse datasets, GMAI will flexibly interpret different combinations of medical modalities, including data from imaging, electronic health records, laboratory results, genomics, graphs or medical text. Models will in turn produce expressive outputs such as free-text explanations, spoken recommendations or image annotations that demonstrate advanced medical reasoning abilities. Here we identify a set of high-impact potential applications for GMAI and lay out specific technical capabilities and training datasets necessary to enable them. We expect that GMAI-enabled applications will challenge current strategies for regulating and validating AI devices for medicine and will shift practices associated with the collection of large medical datasets.
This review discusses generalist medical artificial intelligence, identifying potential applications and setting out specific technical capabilities and training datasets necessary to enable them, as well as highlighting challenges to its implementation.
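To make the proposed paradigm concrete, the sketch below shows the kind of interface such a generalist model might expose: arbitrary combinations of modalities in, a natural-language task specification instead of task-specific labels, and free-text output. Every name and signature here (MedicalCase, GeneralistMedicalAI, run) is an illustrative assumption, not an existing library or the authors' implementation.

```python
# Hypothetical sketch of a GMAI-style contract. All names are
# illustrative assumptions; a real model would fuse the modalities
# and generate an expressive free-text answer.
from dataclasses import dataclass, field

@dataclass
class MedicalCase:
    notes: str = ""                                    # free-text clinical notes
    images: list[bytes] = field(default_factory=list)  # e.g. X-ray pixel data
    ehr: dict = field(default_factory=dict)            # structured record fields
    labs: dict = field(default_factory=dict)           # lab name -> value

class GeneralistMedicalAI:
    """One self-supervised model; tasks are specified by prompting."""

    def run(self, case: MedicalCase, task: str) -> str:
        # Stub: report which modalities are present instead of reasoning.
        present = [name for name, value in vars(case).items() if value]
        return f"[stub] task={task!r}; modalities present: {present}"

model = GeneralistMedicalAI()
# No task-specific labelled data: the task itself is natural language.
print(model.run(MedicalCase(notes="55M with dyspnea", labs={"CRP": 42.0}),
                task="Explain the most likely diagnosis."))
```

The point of the stub is the shape of the contract: one model, any subset of modalities, and the task carried in the prompt rather than in a per-task training set.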
Journal Article
Which is the fairest electoral system?
2024
[...] a second- or third-place finish in a district will still result in some representation, and districts can in effect be represented by candidates from multiple parties. Proportional-representation systems seem to have more policy congruence than do majoritarian ones, which are responsive to public preferences but not always congruent, according to research based on data from the Organisation for Economic Co-operation and Development [2]. For decades, researchers have also been exploring the crucial question of whether voters are satisfied with their own democracies and electoral systems, using national election study surveys in dozens of countries (see 'Election systems around the world'). New Zealand is often cited as a case study: it saw higher turnout after switching from a UK-style majoritarian system to a German-style mixed-member one.
Journal Article
Long COVID: major findings, mechanisms and recommendations
by Davis, Hannah E.; McCorkell, Lisa; Vogel, Julia Moore
in Chronic fatigue syndrome; Clinical trials; Coronaviruses
2023
Long COVID is an often debilitating illness that occurs in at least 10% of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections. More than 200 symptoms have been identified, with impacts on multiple organ systems. At least 65 million individuals worldwide are estimated to have long COVID, with cases increasing daily. Biomedical research has made substantial progress in identifying various pathophysiological changes and risk factors and in characterizing the illness; further, similarities with other viral-onset illnesses such as myalgic encephalomyelitis/chronic fatigue syndrome and postural orthostatic tachycardia syndrome have laid the groundwork for research in the field. In this Review, we explore the current literature and highlight key findings, the overlap with other conditions, the variable onset of symptoms, long COVID in children and the impact of vaccinations. Although these key findings are critical to understanding long COVID, current diagnostic and treatment options are insufficient, and clinical trials that address leading hypotheses must be prioritized. Additionally, to strengthen long COVID research, future studies must account for biases and SARS-CoV-2 testing issues, build on viral-onset research, be inclusive of marginalized populations and meaningfully engage patients throughout the research process.
Long COVID is an often debilitating illness with severe symptoms that can develop during or following COVID-19. In this Review, Davis, McCorkell, Vogel and Topol explore our knowledge of long COVID and highlight key findings, including potential mechanisms, the overlap with other conditions and potential treatments. They also discuss challenges and recommendations for long COVID research and care.
Journal Article
AI models collapse when trained on recursively generated data
2024
Stable Diffusion revolutionized image creation from descriptive text. GPT-2 (ref. 1), GPT-3(.5) (ref. 2) and GPT-4 (ref. 3) demonstrated high performance across a variety of language tasks. ChatGPT introduced such language models to the public. It is now clear that generative artificial intelligence (AI) such as large language models (LLMs) is here to stay and will substantially change the ecosystem of online text and images. Here we consider what may happen to GPT-{n} once LLMs contribute much of the text found online. We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’ and show that it can occur in LLMs as well as in variational autoencoders (VAEs) and Gaussian mixture models (GMMs). We build theoretical intuition behind the phenomenon and portray its ubiquity among all learned generative models. We demonstrate that it must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected from genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
Analysis shows that indiscriminately training generative artificial intelligence on real and generated content, usually done by scraping data from the Internet, can lead to a collapse in the ability of the models to generate diverse high-quality output.
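The tail-loss mechanism behind model collapse can be illustrated with a toy simulation. The sketch below is an assumption-laden illustration, not the paper's experiment: a single one-dimensional Gaussian stands in for a generative model, and each "generation" is refit only on samples drawn from the previous generation's fit.

```python
# Minimal model-collapse sketch: refit a Gaussian on its own samples,
# generation after generation. The fitted spread shrinks, so rare tail
# events of the original distribution progressively vanish.
import numpy as np

rng = np.random.default_rng(0)
n = 50                    # small per-generation samples make the effect visible
mu, sigma = 0.0, 1.0      # generation 0: the "real" data distribution

for gen in range(1, 31):
    synthetic = rng.normal(mu, sigma, size=n)      # data from previous model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on generated data only
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
# sigma decays toward zero across generations: low-probability tail events
# are sampled rarely, then lost entirely once the model is refit on them.
```

Running this shows the fitted sigma drifting well below 1.0 within a few dozen generations, a one-dimensional analogue of the distributional tails disappearing in the abstract's account.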
Journal Article
Women are credited less in science than men
by Ross, Matthew B.; Glennon, Britta M.; Weinberg, Bruce A.
in Authorship
2022
There is a well-documented gap between the observed number of works produced by women and by men in science, with clear consequences for the retention and promotion of women [1]. The gap might be a result of productivity differences [2–5], or it might be owing to women’s contributions not being acknowledged [6,7]. Here we find that at least part of this gap is the result of unacknowledged contributions: women in research teams are significantly less likely than men to be credited with authorship. The findings are consistent across three very different sources of data. Analysis of the first source (large-scale administrative data on research teams, team scientific output and attribution of credit) shows that women are significantly less likely than their male peers to be named on a given article or patent produced by their team. The gender gap in attribution is present across most scientific fields and almost all career stages. The second source (an extensive survey of authors) similarly shows that women’s scientific contributions are systematically less likely to be recognized. The third source (qualitative responses) suggests that women are less likely to be credited because their work is often not known, not appreciated or ignored. At least some of the observed gender gap in scientific output may be owing not to differences in scientific contribution but to differences in attribution.
The difference between the number of men and women listed as authors on scientific papers and inventors on patents is at least partly attributable to unacknowledged contributions by women scientists.
Journal Article
Safe and just Earth system boundaries
by Prodani, Klaudia; Kanie, Norichika; Stewart-Koster, Ben
2023
The stability and resilience of the Earth system and human well-being are inseparably linked [1–3], yet their interdependencies are generally under-recognized; consequently, they are often treated independently [4,5]. Here, we use modelling and literature assessment to quantify safe and just Earth system boundaries (ESBs) for climate, the biosphere, water and nutrient cycles, and aerosols at global and subglobal scales. We propose ESBs for maintaining the resilience and stability of the Earth system (safe ESBs) and minimizing exposure to significant harm to humans from Earth system change (a necessary but not sufficient condition for justice) [4]. The stricter of the safe or just boundaries sets the integrated safe and just ESB. Our findings show that justice considerations constrain the integrated ESBs more than safety considerations for climate and atmospheric aerosol loading. Seven of eight globally quantified safe and just ESBs, and at least two regional safe and just ESBs in over half of the global land area, are already exceeded. We propose that our assessment provides a quantitative foundation for safeguarding the global commons for all people now and into the future.
We find that justice considerations constrain the integrated Earth system boundaries more than safety considerations for climate and atmospheric aerosol loading, and our assessment provides a foundation for safeguarding the global commons for all people.
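The combination rule in the abstract reduces to taking, for each domain, the stricter of the two boundaries. A minimal sketch of that rule follows; all numeric values are illustrative placeholders chosen so that the just boundary binds, as the paper reports for climate and aerosol loading, not the paper's quantifications.

```python
# Integrated safe-and-just boundary = the stricter (here, lower) of the
# safe and just boundaries, taken per domain. Values are placeholders.
safe = {"warming_degC": 1.5, "aerosol_loading": 0.25}
just = {"warming_degC": 1.0, "aerosol_loading": 0.15}

integrated = {domain: min(safe[domain], just[domain]) for domain in safe}
print(integrated)
# {'warming_degC': 1.0, 'aerosol_loading': 0.15} -- the just boundary is
# the binding constraint in both domains.
```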
Journal Article
Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care
2023
Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations such as individuals with lower income.
The goal is to provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms, in order to promote health and health care equity.
The Agency for Healthcare Research and Quality and the National Institute for Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback.
The panel developed a conceptual framework to apply guiding principles across an algorithm's life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms.
Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.
Journal Article