Catalogue Search | MBRL
18,012 result(s) for "learning bias"
Detection and Evaluation of Machine Learning Bias
2021
Machine learning models are built from training data, which is collected from human experience and is prone to bias. Humans demonstrate cognitive bias in their thinking and behavior, which is ultimately reflected in the collected data. From Amazon's hiring system, which was built using ten years of human hiring experience, to judicial systems trained on human judging practices, such systems all include some element of bias. The best machine learning models are said to mimic humans' cognitive ability, and thus such models are also inclined towards bias. Detecting and evaluating bias is therefore an important step toward more explainable models. In this work, we explain bias in learning models in relation to humans' cognitive bias and propose a wrapper technique to detect and evaluate bias in machine learning models, using an openly accessible dataset from the UCI Machine Learning Repository. In this dataset, the potentially biased attributes (PBAs) are gender and race. The study introduces the concept of alternation functions to swap the values of PBAs and evaluates the impact on prediction using KL divergence. Results show that females and Asians are associated with lower wages, raising open research questions for the research community.
Journal Article
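The alternation-function idea described in the abstract above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: the function names, the toy classifiers, and the two-class setup are invented here. The value of a potentially biased attribute is swapped for each record, and the shift in the model's average prediction distribution is scored with KL divergence.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(P || Q) between two discrete probability distributions
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def alternation_bias(predict_proba, rows, pba, swap):
    # Apply an "alternation function": replace each value of the
    # potentially biased attribute (PBA) with its counterpart, then
    # compare average prediction distributions before and after.
    swapped = [{**r, pba: swap[r[pba]]} for r in rows]
    p = np.mean([predict_proba(r) for r in rows], axis=0)
    q = np.mean([predict_proba(r) for r in swapped], axis=0)
    return kl_divergence(p, q)

# Toy example: a classifier whose output depends on gender yields a
# positive divergence, while a gender-blind classifier yields ~zero.
rows = [{"gender": "F"}, {"gender": "F"}, {"gender": "M"}]
biased = lambda r: [0.8, 0.2] if r["gender"] == "F" else [0.2, 0.8]
blind = lambda r: [0.5, 0.5]
```

A larger divergence indicates that predictions depend more strongly on the swapped attribute, which is the signal the wrapper technique uses to flag bias.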
The cultural evolution of cultural evolution
by
Birch, Jonathan
,
Heyes, Cecilia
in
Opinion Piece
,
Part II: Unravelling the Mechanisms Underlying Cultural Evolution
2021
What makes fast, cumulative cultural evolution work? Where did it come from? Why is it the sole preserve of humans? We set out a self-assembly hypothesis: cultural evolution evolved culturally. We present an evolutionary account that shows this hypothesis to be coherent, plausible, and worthy of further investigation. It has the following steps: (0) in common with other animals, early hominins had significant capacity for social learning; (1) knowledge and skills learned by offspring from their parents began to spread because bearers had more offspring, a process we call CS1 (or Cultural Selection 1); (2) CS1 shaped attentional learning biases; (3) these attentional biases were augmented by explicit learning biases (judgements about what should be copied from whom). Explicit learning biases enabled (4) the high-fidelity, exclusive copying required for fast cultural accumulation of knowledge and skills by a process we call CS2 (or Cultural Selection 2) and (5) the emergence of cognitive processes such as imitation, mindreading and metacognition—'cognitive gadgets' specialized for cultural learning. This self-assembly hypothesis is consistent with archaeological evidence that the stone tools used by early hominins were not dependent on fast, cumulative cultural evolution, and suggests new priorities for research on 'animal culture'.
This article is part of the theme issue 'Foundations of cultural evolution'.
Journal Article
Challenging the negative learning bias hypothesis of depression: reversal learning in a naturalistic psychiatric sample
by
van Eijndhoven, Philip F.
,
Cools, Roshan
,
Collard, Rose M.
in
Addictions
,
Anxiety
,
Attention deficit hyperactivity disorder
2022
Classic theories posit that depression is driven by a negative learning bias. Most studies supporting this proposition used small and selected samples, excluding patients with comorbidities. However, comorbidity between psychiatric disorders occurs in up to 70% of the population. Therefore, the generalizability of the negative bias hypothesis to a naturalistic psychiatric sample, as well as the specificity of the bias to depression, remains unclear. In the present study, we tested the negative learning bias hypothesis in a large naturalistic sample of psychiatric patients, including depression, anxiety, addiction, attention-deficit/hyperactivity disorder, and/or autism. First, we assessed whether the negative bias hypothesis of depression generalized to a heterogeneous (and hence more naturalistic) depression sample compared with controls. Second, we assessed whether the negative bias extends to other psychiatric disorders. Third, we adopted a dimensional approach, using symptom severity to assess associations across the sample.
We administered a probabilistic reversal learning task to 217 patients and 81 healthy controls. According to the negative bias hypothesis, participants with depression should exhibit enhanced learning and flexibility based on punishment v. reward. We combined analyses of traditional measures with more sensitive computational modeling.
In contrast to previous findings, this sample of depressed patients with psychiatric comorbidities did not show a negative learning bias.
These results speak against the generalizability of the negative learning bias hypothesis to depressed patients with comorbidities. This study highlights the importance of investigating unselected samples of psychiatric patients, which represent the vast majority of the psychiatric population.
Journal Article
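The probabilistic reversal learning paradigm and the asymmetric-learning-rate idea behind the negative bias hypothesis can be illustrated with a small simulation. This is a generic sketch, not the authors' actual task or computational model; all parameter values and names here are illustrative assumptions. A learner with a larger punishment learning rate than reward learning rate is what a "negative learning bias" would look like in this framework.

```python
import math
import random

def simulate(alpha_reward, alpha_punish, beta=5.0,
             n_trials=400, p_correct=0.8, reversal_every=100, seed=0):
    # Two-armed probabilistic reversal task: the currently "good" option
    # is rewarded with probability p_correct; contingencies reverse
    # every reversal_every trials.
    rng = random.Random(seed)
    q = [0.0, 0.0]   # learned action values
    good = 0         # index of the currently better option
    n_correct = 0
    for t in range(n_trials):
        if t > 0 and t % reversal_every == 0:
            good = 1 - good                                     # reversal
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))      # softmax choice
        choice = 1 if rng.random() < p1 else 0
        p_rew = p_correct if choice == good else 1 - p_correct
        reward = 1 if rng.random() < p_rew else 0
        n_correct += (choice == good)
        # Asymmetric update: separate learning rates for reward vs punishment,
        # the quantity a "negative learning bias" would inflate.
        alpha = alpha_reward if reward else alpha_punish
        q[choice] += alpha * (reward - q[choice])
    return n_correct / n_trials
```

Comparing accuracy and post-reversal adaptation across (alpha_reward, alpha_punish) settings is the kind of analysis such computational modeling enables.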
Advancing Algorithmic Adaptability in Hyperspectral Anomaly Detection with Stacking-Based Ensemble Learning
2024
Anomaly detection in hyperspectral imaging is crucial for remote sensing, driving the development of numerous algorithms. However, systematic studies reveal a dichotomy: algorithms generally excel either at detecting anomalies in specific datasets or at generalizing across heterogeneous datasets (i.e., they lack adaptability). A key source of this dichotomy may be the single, similar biases frequently employed by existing algorithms. Current research has not examined how integrating insights from diverse biases might counteract the problems of singly biased approaches. Addressing this gap, we propose stacking-based ensemble learning for hyperspectral anomaly detection (SELHAD). SELHAD integrates hyperspectral anomaly detection algorithms with diverse biases (e.g., Gaussian, density, partition) into a single ensemble learning model and learns how much each bias should contribute to optimize anomaly detection performance. Additionally, it introduces bootstrapping strategies into hyperspectral anomaly detection to further increase robustness. We focused on five representative algorithms embodying common biases in hyperspectral anomaly detection and demonstrated how they give rise to the aforementioned dichotomy. We then demonstrated how SELHAD learns the interplay between these biases, enabling their collaborative use. In doing so, SELHAD transcends the limitations of individual biases, alleviating the dichotomy and advancing toward more adaptable solutions.
Journal Article
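The core of stacking-based ensembling, learning how much each base detector's bias should contribute, can be sketched as a logistic-regression meta-learner over base anomaly scores. This is a minimal illustration under assumed names and synthetic data, not the SELHAD implementation (which also bootstraps the base detectors):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_stacking_weights(scores, labels, lr=0.5, epochs=2000):
    # scores: (n_detectors, n_samples) anomaly scores from base detectors
    # labels: (n_samples,) 1 = anomaly, 0 = background
    # Learn per-detector contribution weights via logistic regression.
    n_det, n = scores.shape
    w = np.zeros(n_det)
    b = 0.0
    for _ in range(epochs):
        pred = sigmoid(w @ scores + b)
        err = pred - labels
        w -= lr * (scores @ err) / n   # gradient of the logistic loss
        b -= lr * err.mean()
    return w, b

# Synthetic check: one informative detector, one pure-noise detector.
rng = np.random.default_rng(0)
labels = (rng.random(300) < 0.1).astype(float)
scores = np.vstack([
    labels * 2 - 1 + 0.1 * rng.standard_normal(300),  # informative detector
    rng.standard_normal(300),                          # pure-noise detector
])
w, b = fit_stacking_weights(scores, labels)
```

The learned weights concentrate on the informative detector, which is the mechanism by which a stacking ensemble can down-weight a base algorithm whose bias does not fit a given dataset.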
Drug Target Identification with Machine Learning: How to Choose Negative Examples
2021
Identification of the protein targets of hit molecules is essential in the drug discovery process. Target prediction with machine learning algorithms can help accelerate this search, limiting the number of required experiments. However, the drug-target interaction databases used for training exhibit a strong statistical bias, leading to a high number of false positives and thus increasing the time and cost of experimental validation campaigns. To minimize the number of false positives among predicted targets, we propose a new scheme for choosing negative examples, such that each protein and each drug appears an equal number of times in positive and negative examples. We artificially reproduce the process of target identification for three specific drugs, and more globally for 200 approved drugs. For both the detailed three-drug examples and the larger set of 200 drugs, training with the proposed scheme for the choice of negative examples improved target prediction: the average number of false positives among the top-ranked predicted targets decreased, and the rank of the true targets improved overall. Our method corrects the databases' statistical bias and reduces the number of false positive predictions, and therefore the number of useless experiments potentially undertaken.
Journal Article
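One way to realize the balanced negative-sampling scheme described above is a greedy draw: for each drug, sample as many non-interacting proteins as it has known targets, preferring proteins that are still under-represented among negatives. This is an illustrative sketch under assumed names and toy data, not the authors' code:

```python
import random
from collections import Counter

def balanced_negatives(positives, drugs, proteins, seed=0):
    # For each drug, draw as many negatives as it has positives,
    # greedily preferring proteins whose negative count still lags
    # their positive count, so proteins also balance across classes.
    rng = random.Random(seed)
    pos_set = set(positives)
    drug_pos = Counter(d for d, _ in positives)
    protein_pos = Counter(p for _, p in positives)
    protein_neg = Counter()
    negatives = []
    for drug in drugs:
        candidates = [p for p in proteins if (drug, p) not in pos_set]
        rng.shuffle(candidates)  # random tie-breaking
        # proteins with the largest positive-minus-negative deficit first
        candidates.sort(key=lambda p: protein_neg[p] - protein_pos[p])
        for p in candidates[:drug_pos[drug]]:
            negatives.append((drug, p))
            protein_neg[p] += 1
    return negatives

# Toy example: known interactions between 2 drugs and 4 proteins.
positives = [("d1", "p1"), ("d1", "p2"), ("d2", "p1")]
negatives = balanced_negatives(positives, ["d1", "d2"],
                               ["p1", "p2", "p3", "p4"])
```

Each drug then contributes equally to both classes, and frequently studied proteins no longer dominate the positive class alone, which is the statistical correction the abstract describes.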
Surveying Racial Bias in Facial Recognition: Balancing Datasets and Algorithmic Enhancements
2024
Facial recognition systems frequently exhibit high accuracy when evaluated on standard test datasets. However, their performance tends to degrade significantly on more challenging tests, particularly those involving specific racial categories. To measure this inconsistency, many researchers have created racially aware datasets for evaluating facial recognition algorithms. This paper analyzes facial recognition datasets, categorizing them as racially balanced or unbalanced, where a racially balanced dataset is one in which each race's representation is within five percentage points of every other represented race's. We investigate methods to address concerns about racial bias due to uneven datasets, using generative adversarial networks and latent diffusion models to balance the data, and we assess the impact of these techniques. To mitigate accuracy discrepancies across racial groups, we investigate a range of network enhancements that improve facial recognition performance across human races. These encompass architectural changes, loss functions, training methods, data modifications, and the incorporation of additional data. Additionally, we discuss the interrelation of racial and gender bias. Lastly, we outline avenues for future research in this domain.
Journal Article
Concern for Others Leads to Vicarious Optimism
2018
An optimistic learning bias leads people to update their beliefs in response to better-than-expected good news but neglect worse-than-expected bad news. Because evidence suggests that this bias arises from self-concern, we hypothesized that a similar bias may affect beliefs about other people’s futures, to the extent that people care about others. Here, we demonstrated the phenomenon of vicarious optimism and showed that it arises from concern for others. Participants predicted the likelihood of unpleasant future events that could happen to either themselves or others. In addition to showing an optimistic learning bias for events affecting themselves, people showed vicarious optimism when learning about events affecting friends and strangers. Vicarious optimism for strangers correlated with generosity toward strangers, and experimentally increasing concern for strangers amplified vicarious optimism for them. These findings suggest that concern for others can bias beliefs about their future welfare and that optimism in learning is not restricted to oneself.
Journal Article
Learning biases in proper nouns
2023
It has been proposed that there are cognitive biases in language learning that favour certain patterns over others. This study examines the effects of such bias factors on the learning of the phonology of proper nouns. I take up the phenomenon of compound voicing in Japanese surnames. The results of two judgment experiments show that, while Japanese speakers replicate various kinds of statistical regularities in existing names, they tend to extend only phonologically motivated patterns to novel names. This suggests that phonological naturalness plays a role even in the learning of a highly faithful category of words, namely proper nouns, and provides evidence for the relevance of learning biases in synchronic grammar.
Journal Article
The Obligatory Contour Principle as a substantive bias in phonological learning
by
Gong, Shuxiao
,
Zhang, Jie
in
Artificial grammar learning
,
Learning bias
,
Obligatory Contour Principle
2025
Understanding how native speakers acquire the phonological patterns in their language is a key task for the field of phonology. Numerous studies have suggested that phonological learning is a biased process: certain phonological patterns are more easily accessed and learned by speakers and are thus more likely to appear in languages, while others show acquisition difficulties and may occur less frequently. Therefore, an important aspect of understanding phonological learning and typology is to understand the nature of these learning biases. The Obligatory Contour Principle (OCP), i.e., the avoidance of adjacent similar units in the lexicon, is one of the typologically well-attested phenomena that may originate from phonological learning biases. Using an artificial grammar learning (AGL) experiment testing the learnability of several phonotactic patterns, we present evidence that the OCP can directly modulate phonological learning, in that similarity avoidance is easier to learn than other phonotactic patterns. Specifically, an OCP-based phonotactic pattern was better learned than a complexity-matched consonant major place harmony pattern as well as an arbitrary control pattern. Based on the AGL experiment results and the phonetic foundation of similarity avoidance, we argue that the OCP can serve as a substantive bias that influences phonological learning and, eventually, linguistic typology.
Journal Article
Characterizing Veteran suicide decedents that were not classified as high-suicide-risk
2024
Although the Department of Veterans Affairs (VA) has made important suicide prevention advances, efforts primarily target high-risk patients with documented suicide risk, such as suicidal ideation, prior suicide attempts, and recent psychiatric hospitalization. Approximately 90% of VA patients who go on to die by suicide do not meet these high-risk criteria and therefore do not receive targeted suicide prevention services. In this study, we used national VA data to focus on patients who were not classified as high-risk but died by suicide.
Our sample included all VA patients who died by suicide in 2017 or 2018. We determined whether patients were classified as high-risk using the VA's machine learning risk prediction algorithm. After excluding these patients, we used principal component analysis to identify moderate-risk and low-risk patients and investigated demographics, service-usage, diagnoses, and social determinants of health differences across high-, moderate-, and low-risk subgroups.
High-risk (n = 452) patients tended to be younger, White, unmarried, homeless, and have more mental health diagnoses compared to moderate-risk (n = 2149) and low-risk (n = 2209) patients. Moderate- and low-risk patients tended to be older, married, Black, and Native American or Pacific Islander, and have more physical health diagnoses compared to high-risk patients. Low-risk patients had more missing data than higher-risk patients.
This study expands epidemiological understanding of non-high-risk suicide decedents, a historically understudied and underserved population. The findings raise concerns about reliance on machine learning risk prediction models, which may be biased by the relative underrepresentation of racial/ethnic minorities within the health system.
Journal Article