Catalogue Search | MBRL
Explore the vast range of titles available.
6,106 result(s) for "Visual representation"
Surprise me with the visual representation of the brand in social commerce! An eye-tracking study based on user characteristics
by Liébana-Cabanillas, Francisco; Muñoz-Leiva, Francisco; Sánchez-Borrego, Ismael Ramón
in Advertisements; Advertising; Advertising campaigns
2025
Purpose: This study examines the role of logotypes in advertising effectiveness on s-commerce platforms by analyzing the visual attention paid by the consumer to fashion branding – wordmarks or combination marks – and their subsequent recall.
Design/methodology/approach: The study examines the main areas of visual representation of the brand (VRB) on the Instagram network and the user’s corresponding areas of interest on a mobile-device screen. Attention and recall of the VRB are assessed in light of different classification variables (users’ gender, age and level of experience in s-commerce tools) to better understand how VRB may be leveraged by fashion retailers to encourage purchasing behavior. To achieve this objective, a mixed experiment design based on the eye-tracking methodology and a self-administered questionnaire is carried out.
Findings: The results indicate that visual attention, gender, age and s-commerce experience all contribute to determining users’ recall of the brand logo to which they are exposed on-screen. By considering the different s-commerce user profiles that exhibit different visualization behaviors, fashion retailers will be better placed to improve their online advertising campaigns and, ultimately, increase brand sales. The findings also point to promising future research directions on the effectiveness of branding strategies.
Originality/value: This highly innovative study provides in-depth insights into advertising effectiveness in terms of attention and recall, according to the main types of VRB for two specific s-commerce tools used by a high-street fashion brand, namely, its profile on Instagram Shop and its profile on Instagram Stories.
Journal Article
Unwatchable
\"We all have images that we find unwatchable, whether for ethical, political, or sensory-affective reasons. From news coverage of terror attacks to viral videos of police brutality, and from graphic horror films to incendiary artworks that provoke mass boycotts, many of the images in our media culture strike as beyond the pale of consumption. Yet what does it mean to proclaim a media object \"unwatchable\": disturbing, revolting, poor, tedious, or literally inaccessible? Appealing to a broad academic and general readership, Unwatchable offers multidisciplinary approaches to the vast array of troubling images that circulate in our global visual culture, from cinema, television, and video games through museums and classrooms to laptops, smart phones, and social media platforms. This anthology assembles 60 original essays by scholars, theorists, critics, archivists, curators, artists, and filmmakers who offer their own responses to the broadly suggestive question: What do you find unwatchable? The diverse answers include iconoclastic artworks that have been hidden from view, dystopian images from the political sphere, horror movies, TV advertisements, classic films, and recent award-winners\"-- Provided by publisher.
End-to-End Learning of Deep Visual Representations for Image Retrieval
by Larlus, Diane; Revaud, Jerome; Gordo, Albert
in Artificial Intelligence; Computer architecture; Computer Imaging
2017
While deep learning has become a key ingredient in the top performing methods for many computer vision tasks, it has failed so far to bring similar improvements to instance-level image retrieval. In this article, we argue that reasons for the underwhelming results of deep methods on image retrieval are threefold: (1) noisy training data, (2) inappropriate deep architecture, and (3) suboptimal training procedure. We address all three issues. First, we leverage a large-scale but noisy landmark dataset and develop an automatic cleaning method that produces a suitable training set for deep retrieval. Second, we build on the recent R-MAC descriptor, show that it can be interpreted as a deep and differentiable architecture, and present improvements to enhance it. Last, we train this network with a siamese architecture that combines three streams with a triplet loss. At the end of the training process, the proposed architecture produces a global image representation in a single forward pass that is well suited for image retrieval. Extensive experiments show that our approach significantly outperforms previous retrieval approaches, including state-of-the-art methods based on costly local descriptor indexing and spatial verification. On Oxford 5k, Paris 6k and Holidays, we respectively report 94.7, 96.6, and 94.8 mean average precision. Our representations can also be heavily compressed using product quantization with little loss in accuracy.
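The triplet ranking objective the abstract describes (three streams combined by one loss) can be sketched as follows. This is a minimal illustration only: the function names, the margin value, and the toy descriptors are assumptions for demonstration, not taken from the article.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Triplet ranking loss over global image descriptors.
    Encourages the positive (matching image) to sit closer to the
    anchor than the negative (non-matching image) by at least
    `margin` (value here is illustrative)."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to matching image
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to non-matching image
    return max(0.0, margin + d_pos - d_neg)

def normalize(v):
    """L2-normalize a descriptor, as is typical for global representations."""
    return v / np.linalg.norm(v)

# Toy 4-D descriptors standing in for the network's global representations.
a = normalize(np.array([1.0, 0.2, 0.0, 0.1]))  # anchor image
p = normalize(np.array([0.9, 0.3, 0.1, 0.1]))  # same landmark: similar descriptor
n = normalize(np.array([0.0, 0.1, 1.0, 0.8]))  # different landmark
loss = triplet_loss(a, p, n)  # 0.0 here: the triplet is already well separated
```

A hard triplet (positive farther than negative) yields a positive loss, which is what drives the siamese network's weight updates during training.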
Journal Article
Using visual representations to enhance isiXhosa home language learners’ mathematical understanding
by Livingston, Candice; Coetzer, Tanja; Barnard, Elna
in Academic achievement; African languages; Classrooms
2023
Background: Many isiXhosa home language (HL) learners are excluded from meaningful mathematics learning because they are taught in English. Not only do teachers lack epistemological and pedagogical confidence in using multiple languages when teaching mathematics, but there are no mathematical registers for African languages that allow for adequate mathematical teaching and learning. There is a scarcity of research on what constitutes effective mathematics instruction for isiXhosa HL learners in South African language of learning and teaching (LoLT) Grade 1 classrooms.
Aim: The purpose of this study was to explore the experiences of Grade 1 teachers using visual representations to enhance isiXhosa HL learners' understanding of mathematics in the English LoLT in Grade 1 classrooms.
Setting: This study was conducted at four primary schools in the Western Cape's Metro East Education District.
Methods: This study employs a qualitative research approach in conjunction with an adapted interactive qualitative analysis (IQA) systems method to collect in-depth data about current mathematics practices in the English LoLT in Grade 1 classrooms. The data were analysed using John Stuart Mill's analytical comparison technique.
Results: This study found that semiotics such as visual (and concrete) representations assist isiXhosa HL learners to grasp and understand mathematical concepts easily.
Conclusion: This study emphasises the significance of using sufficient visual representation strategies to enhance isiXhosa HL learners' mathematical understanding in the English LoLT in Grade 1 classrooms.
Contribution: The outcomes of this study can make a positive contribution to current mathematics practice in terms of supporting isiXhosa HL learners in the English LoLT in Grade 1 classrooms.
Journal Article
Mapping definitions of co‐production and co‐design in health and social care: A systematic scoping review providing lessons for the future
by
Nylander, Elisabeth
,
Robert, Glenn
,
Masterson, Daniel
in
Analysis
,
Applied research
,
Citations
2022
Objectives: This study aimed to explore how the concepts of co‐production and co‐design have been defined and applied in the context of health and social care and to identify the temporal adoption of the terms.
Methods: A systematic scoping review of CINAHL with Full Text, Cochrane Central Register of Controlled Trials, MEDLINE, PsycINFO, PubMed and Scopus was conducted to identify studies exploring co‐production or co‐design in health and social care. Data regarding date and conceptual definitions were extracted. From the 2933 studies retrieved, 979 articles were included in this review.
Results: A network map of the sixty most common definitions and—through exploration of citations—eight definition clusters and a visual representation of how they interconnect and have informed each other over time are presented. Additional findings were as follows: (i) an increase in research exploring co‐production and co‐design in health and social care contexts; (ii) an increase in the number of new definitions during the last decade, despite just over a third of included articles providing no definition or explanation for their chosen concept; and (iii) an increase in the number of publications using the terms co‐production or co‐design while not involving citizens/patients/service users.
Conclusions: Co‐production and co‐design are conceptualized in a wide range of ways. Rather than seeking universal definitions of these terms, future applied research should focus on articulating the underlying principles and values that need to be translated and explored in practice.
Patient and Public Contribution: The search strategy and pilot results were presented at a workshop in May 2019 with patient and public contributors and researchers. Discussion here informed our next steps. During the analysis phase of the review, informal discussions were held once a month with a patient who has experience in patient and public involvement. As this involvement was conducted towards the end of the review, we agreed together that inclusion as an author would risk being tokenistic. Instead, acknowledgements were preferred. The next phase involves working as equal contributors to explore the values and principles of co‐production reported within the most common definitions.
Journal Article
Visual Presentation Effects on Identification of Multiple Environmental Sounds
by Masakura, Yuko; Shimono, Koichi; Nakatsuka, Reio
in Animal behavior; Auditory stimuli; Environmental effects
2016
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily-life environment. For the experiments, we presented four environmental sounds as auditory stimuli for 5 s, along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s, simultaneously with the sound; (b) for 5 s, 1 s before the sound (the SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (an SOA of 1033 ms). Participants reported all identifiable sounds for these audio-visual stimuli. To characterize the effects of visual stimuli on sound identification, the following measures were used: the identification rate of the sound whose source the visual stimulus denoted, the rates of the other sounds, whose sources the visual stimulus did not denote, and the frequency of false hearing of a sound that was not presented in each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presenting a picture denoting a sound simultaneously with that sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds whose sources the visual stimulus did not denote. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds whose sources the visual stimulus did not denote. Third, processing of the concurrent visual representation suppresses false hearing.
Journal Article
Atoms of recognition in human and computer vision
by Ullman, Shimon; Assif, Liav; Harari, Daniel
in Biological Sciences; Brain; Brain - physiology
2016
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found through psychophysical studies that, at the level of minimal recognizable images, a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Journal Article
More than meets the eye: a longitudinal analysis of climate change imagery in the print media
2020
Images are ubiquitous in everyday life. They are a key part of the communication process, shaping peoples’ attitudes and policy preferences on climate change. Images which have come to dominate visual portrayals of climate change (and conversely, those that are marginalised or excluded) influence how we interact with climate change in our everyday lives. This paper presents the first in-depth, cross-cultural and longitudinal study of climate change visual discourse. It examines over a thousand images associated with articles about climate change in UK and US newspapers between 2001 and 2009, a pivotal decade for climate change engagement. Content, frame and iconographic analyses reveal a remarkably consistent visual discourse in the UK and US newspapers. The longitudinal analysis shows how the visual representation of climate changed mid-decade. Before 2005, a distancing frame was common. Imagery of polar landscapes acted as a visual synecdoche for distant climate risk. After 2005, there was a rapid increase in visual coverage, an increase in use of the contested visual frame, alongside an increase in climate cartoons, protest imagery and visual synecdoches. These synecdoches began to be subverted and parodied, particularly in the right-leaning press. These results illustrate the rise of climate change scepticism during the mid-2000s. This study has implications for public engagement with climate change. It shows that the contested and distancing visual frames are deeply and historically embedded in the meaning-making of climate change. Additionally, it showcases the importance of visual synecdoches, used by newspapers in particular circumstances to engage particular audiences. Knowing and understanding visual use is imperative to enable an evidence-based approach to climate engagement endeavours.
Journal Article
Causal Reasoning Meets Visual Representation Learning: A Prospective Study
2022
Visual representation learning is ubiquitous in various real-world applications, including visual comprehension, video understanding, multi-modal analysis, human-computer interaction, and urban computing. With the emergence of huge amounts of multimodal, heterogeneous spatial/temporal/spatial-temporal data in the big data era, a lack of interpretability, robustness, and out-of-distribution generalization has become a key challenge for existing visual models. The majority of existing methods tend to fit the original data/variable distributions and ignore the essential causal relations behind the multi-modal knowledge, and the field lacks unified guidance and analysis of why modern visual representation learning methods easily collapse into data bias and have limited generalization and cognitive abilities. Inspired by the strong inference ability of human-level agents, recent years have therefore witnessed great effort in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed. Moreover, we propose some prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods, publicly available benchmarks, and consensus-building standards for reliable visual representation learning and related real-world applications.
Journal Article