Catalogue Search | MBRL
387 results for "Computer algorithms Psychological aspects."
Rhetorical code studies : discovering arguments in and around code
"In Rhetorical Code Studies, Kevin Brock explores how software code serves as a means of meaningful communication through which amateur and professional software developers construct arguments--arguments that are not only made up of logical procedures but also of implicit and explicit claims about how a given program works (or should work). These claims appear as procedures and as conventional discourse in the form of code comments and in email messages, forum posts, and other venues for conversation with other developers. To investigate the rhetorical qualities of code, Brock extends ongoing conversations in rhetoric and composition on software by turning to a number of case examples ranging from large, well-known projects like Mozilla Firefox to small-scale programs like the "FizzBuzz" test common in many programming job interviews. These examples, which involve specific examination of code texts as well as the contexts surrounding their composition, demonstrate the variety and depth of rhetorical activity taking place in and around code, from individual differences in style to changes in large-scale community norms"-- Provided by publisher.
Algorithms of Oppression
2018
A revealing look at how negative biases against women of color are embedded in search engine results and algorithms
Run a Google search for "black girls"—what will you find? "Big Booty" and other sexually explicit terms are likely to come up as top search terms. But, if you type in "white girls," the results are radically different. The suggested porn sites and un-moderated discussions about "why black women are so sassy" or "why black women are so angry" present a disturbing portrait of black womanhood in modern society.
In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.
Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.
An original, surprising and, at times, disturbing account of bias on the internet, Algorithms of Oppression contributes to our understanding of how racism is created, maintained, and disseminated in the 21st century.
Human-level control through deep reinforcement learning
by Fidjeland, Andreas K.; Veness, Joel; Sadik, Amir
in: 639/705/117; Algorithms; Artificial Intelligence
2015
An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.
Self-taught AI agent masters Atari arcade games
For an artificial agent to be considered truly intelligent it needs to excel at a variety of tasks considered challenging for humans. To date, it has only been possible to create individual algorithms able to master a single discipline — for example, IBM's Deep Blue beat the human world champion at chess but was not able to do anything else. Now a team working at Google's DeepMind subsidiary has developed an artificial agent — dubbed a deep Q-network — that learns to play 49 classic Atari 2600 'arcade' games directly from sensory experience, achieving performance on a par with that of an expert human player. By combining reinforcement learning (selecting actions that maximize reward — in this case the game score) with deep learning (multilayered feature extraction from high-dimensional data — in this case the pixels), the game-playing agent takes artificial intelligence a step nearer the goal of systems capable of learning a diversity of challenging tasks from scratch.
The theory of reinforcement learning provides a normative account [1], deeply rooted in psychological [2] and neuroscientific [3] perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems [4, 5], the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms [3]. While reinforcement learning agents have achieved some successes in a variety of domains [6, 7, 8], their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks [9, 10, 11] to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games [12]. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Journal Article
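The temporal-difference update at the heart of the abstract above can be shown in miniature. The sketch below is a plain tabular Q-learning agent on a toy corridor environment, an illustration of the underlying learning rule only, not the paper's deep Q-network (which replaces the table with a deep convolutional network and adds experience replay and a target network); all names and parameters here are assumptions for the demo.

```python
import random

N_STATES = 5          # a simple corridor: state 4 holds the reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic environment: reward 1 only on reaching the last state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                       # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection (random tie-breaking)
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: (Q[(s, x)], random.random()))
        s2, r, done = step(s, a)
        # temporal-difference update: the rule the abstract links to
        # phasic dopamine signals
        target = r + (0.0 if done else GAMMA * max(Q[(s2, x)] for x in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# After training, moving right should dominate in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The learned values decay geometrically with distance from the reward (roughly GAMMA to the power of the remaining steps), which is why the greedy policy points right everywhere once training converges.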
Algorithmic bias amplifies opinion fragmentation and polarization: A bounded confidence model
by Sîrbu, Alina; Pedreschi, Dino; Giannotti, Fosca
in: Algorithms; Bias; Biology and Life Sciences
2019
The flow of information reaching us via online media platforms is optimized not for information content or relevance but for popularity and proximity to the target. This is typically done to maximise platform usage. As a side effect, it introduces an algorithmic bias that is believed to enhance fragmentation and polarization of the societal debate. To study this phenomenon, we modify the well-known continuous opinion dynamics model of bounded confidence in order to account for the algorithmic bias and investigate its consequences. In the simplest version of the original model the pairs of discussion participants are chosen at random and their opinions get closer to each other if they are within a fixed tolerance level. We modify the selection rule of the discussion partners: there is an enhanced probability to choose individuals whose opinions are already close to each other, thus mimicking the behavior of online media which suggest interaction with similar peers. As a result we observe: a) an increased tendency towards opinion fragmentation, which emerges even in conditions where the original model would predict consensus, b) increased polarisation of opinions, and c) a dramatic slowing down of the speed at which convergence to the asymptotic state is reached, which makes the system highly unstable. Fragmentation and polarization are augmented by a fragmented initial population.
Journal Article
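The modified selection rule described above is concrete enough to simulate. The sketch below is a minimal bounded-confidence model with a biased partner choice; the parameter names and the exact form of the bias weight are assumptions for illustration, not the authors' implementation. With `gamma = 0` the partner is chosen uniformly, recovering the classic random-pair dynamics; larger `gamma` up-weights peers with nearby opinions, mimicking the platform bias.

```python
import random

def simulate(n=100, tol=0.2, mu=0.5, gamma=0.0, steps=20000, seed=1):
    """Bounded-confidence opinion dynamics with an assumed bias rule."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]          # opinions in [0, 1]
    for _ in range(steps):
        i = rng.randrange(n)
        # biased partner choice: weight peers by closeness of opinion
        weights = [(abs(x[i] - x[j]) + 1e-6) ** (-gamma) if j != i else 0.0
                   for j in range(n)]
        j = rng.choices(range(n), weights=weights)[0]
        if abs(x[i] - x[j]) < tol:                # within confidence bound
            xi = x[i] + mu * (x[j] - x[i])
            xj = x[j] + mu * (x[i] - x[j])
            x[i], x[j] = xi, xj
    return x

def n_clusters(x, gap=0.05):
    """Count opinion clusters separated by gaps larger than `gap`."""
    xs = sorted(x)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > gap)

unbiased = simulate(gamma=0.0)
biased = simulate(gamma=2.0)
print(n_clusters(unbiased), n_clusters(biased))
```

Running the two settings side by side exposes the paper's qualitative findings: the biased run tends to fragment into more clusters and converges much more slowly than the unbiased baseline.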
Evolution and impact of bias in human and machine learning algorithm interaction
2020
Traditionally, machine learning algorithms relied on reliable labels from experts to build predictions. More recently, however, algorithms have been receiving data from the general population in the form of labeling, annotations, etc. The result is that algorithms are subject to bias born from ingesting unchecked information, such as biased samples and biased labels. Furthermore, people and algorithms are increasingly engaged in interactive processes wherein neither the human nor the algorithms receive unbiased data. Algorithms can also make biased predictions, leading to what is now known as algorithmic bias. On the other hand, humans' reactions to the output of machine learning methods with algorithmic bias worsen the situation, since decisions are made based on biased information, which will probably be consumed by algorithms later. Some recent research has focused on the ethical and moral implications of machine learning algorithmic bias on society. However, most research has so far treated algorithmic bias as a static factor, which fails to capture its dynamic and iterative properties. We argue that algorithmic bias interacts with humans in an iterative manner, which has a long-term effect on algorithms' performance. For this purpose, we present an iterated-learning framework, inspired by human language evolution, to study the interaction between machine learning algorithms and humans. Our goal is to study two sources of bias that interact: the process by which people select information to label (human action), and the process by which an algorithm selects the subset of information to present to people (iterated algorithmic bias mode). We investigate three forms of iterated algorithmic bias (personalization filter, active learning, and random) and how they affect the performance of machine learning algorithms by formulating research questions about the impact of each type of bias.
Based on statistical analyses of the results of several controlled experiments, we found that the three different iterated bias modes, as well as initial training data class imbalance and human action, do affect the models learned by machine learning algorithms. We also found that iterated filter bias, which is prominent in personalized user interfaces, can lead to more inequality in estimated relevance and to a limited human ability to discover relevant data. Our findings indicate that the relevance blind spot (items from the testing set whose predicted relevance probability is less than 0.5 and which thus risk being hidden from humans) amounted to 4% of all relevant items when using a content-based filter that predicts relevant items. A similar simulation using a real-life rating data set found that the same filter resulted in a blind spot size of 75% of the relevant testing set.
Journal Article
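The "relevance blind spot" mechanism above can be demonstrated with a toy feedback loop. The sketch below is an assumed, heavily simplified stand-in for the authors' framework: a one-number "model" only shows items it already scores above 0.5, the simulated human labels only what is shown, and the score is nudged toward the true label. Relevant items that start below the threshold are never surfaced and therefore never corrected.

```python
import random

random.seed(7)
# 200 items, each with a model score and a hidden ground-truth relevance
items = [{"score": random.random(), "relevant": random.random() < 0.5}
         for _ in range(200)]

threshold = 0.5
for _ in range(10):                      # iterations of the human/model loop
    shown = [it for it in items if it["score"] > threshold]
    for it in shown:                     # the "human" labels only shown items
        # feedback nudges the score toward the true label
        it["score"] += 0.1 * ((1.0 if it["relevant"] else 0.0) - it["score"])

# Blind spot: relevant items whose score stayed below the threshold,
# so they were never presented to the human and never updated.
blind = [it for it in items if it["relevant"] and it["score"] <= threshold]
relevant = [it for it in items if it["relevant"]]
print(f"blind spot: {len(blind)}/{len(relevant)} relevant items")
```

Because hidden items receive no feedback at all, the blind spot is frozen at whatever the initial scoring missed, which is the static core of the iterated filter bias the paper studies dynamically.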
Medial prefrontal cortex as an action-outcome predictor
2011
The authors present a computational model based on standard learning rules that can simulate and account for a large range of known effects in the medial prefrontal cortex (mPFC), including dorsal anterior cingulate cortex (dACC). Their model suggests that this region is involved in learning and predicting the likely outcomes of actions and detecting when those predicted outcomes fail to occur.
The medial prefrontal cortex (mPFC) and especially anterior cingulate cortex is central to higher cognitive function and many clinical disorders, yet its basic function remains in dispute. Various competing theories of mPFC have treated effects of errors, conflict, error likelihood, volatility and reward, using findings from neuroimaging and neurophysiology in humans and monkeys. No single theory has been able to reconcile and account for the variety of findings. Here we show that a simple model based on standard learning rules can simulate and unify an unprecedented range of known effects in mPFC. The model reinterprets many known effects and suggests a new view of mPFC, as a region concerned with learning and predicting the likely outcomes of actions, whether good or bad. Cognitive control at the neural level is then seen as a result of evaluating the probable and actual outcomes of one's actions.
Journal Article
Multidimensional correlates of psychological stress: Insights from traditional statistical approaches and machine learning using a nationally representative Canadian sample
2025
Approximately one-fifth of Canadians report high levels of psychological stress. This is alarming, as chronic stress is associated with non-communicable diseases and premature mortality. In order to create effective interventions and public policy for stress reduction, factors associated with stress must be identified and understood. We analyzed data from the 2012 'Canadian Community Health Survey - Mental Health' (CCHS-MH), including 66 potential correlates drawn from a range of domains (e.g., psychological, physical, social, demographic factors). First, we used a random forest algorithm to determine the most important predictors of psychological stress; then we used linear regressions to quantify the linear associations between the important predictors and psychological stress. In total, 23,089 Canadian adults responded to the 2012 CCHS-MH, which was weighted to be nationally representative. Random forest analyses found that, after accounting for variance from other factors and considering complex interactions, life satisfaction (relative importance = 1.00), negative social interactions (0.99), primary stress source (0.85), and age (0.77) were the most important correlates of psychological stress. To a lesser extent, employment status (0.36) was also an important variable. Univariable linear regression suggested that these variables had effects ranging from small to medium-to-large. Multiple linear regression showed that lower life satisfaction, being younger, greater negative social interaction, reporting a primary stressor, and being employed were all associated with greater psychological stress (beta range = 0.03 to 0.84, all p < 0.001, R² = 0.264). Further, these factors accounted for 26% of the variance of psychological stress. This study highlights that the most important correlates of stress reflect diverse psychological, social, and demographic factors.
These findings highlight that stress reduction interventions may require a multidisciplinary approach. However, further longitudinal and experimental studies are required.
Journal Article
On-line anxiety level detection from biosignals: Machine learning based on a randomized controlled trial with spider-fearful individuals
2020
We present performance results concerning the validation of anxiety level detection based on trained mathematical models using supervised machine learning techniques. The model training is based on biosignals acquired in a randomized controlled trial. Wearable sensors were used to collect electrocardiogram, electrodermal activity, and respiration from spider-fearful individuals. We designed and applied ten approaches for data labeling considering individual biosignals as well as subjective ratings. Performance results revealed a selection of trained models adapted for two-level (low and high) and three-level (low, medium and high) classification of anxiety using a minimal set of six features. We obtained a remarkable accuracy of 89.8% for the two-level classification and of 74.4% for the three-level classification using a short time window length of ten seconds when applying the approach that uses subjective ratings for data labeling. Bagged Trees proved to be the most suitable classifier type among the classification models studied. The trained models will have a practical impact on the feasibility study of an augmented reality exposure therapy based on a therapeutic game for the treatment of arachnophobia.
Journal Article