33 result(s) for "Avarguès-Weber, Aurore"
Numerical ordering of zero in honey bees
It has been said that the development of an understanding of zero by society initiated a major intellectual advance in humans, and our species was long thought to be unique in this understanding. Although recent research has shown that some other vertebrates understand the concept of the “empty set,” Howard et al. now show that an understanding of this concept is present in untrained honey bees (see the Perspective by Nieder). This finding suggests that such an understanding has evolved independently in distantly related species that deal with complexity in their environments, and that it may be more widespread than previously appreciated. Science, this issue p. 1124; see also p. 1069. Honey bees display an understanding that zero is an empty set at the base of the number line. Some vertebrates demonstrate complex numerosity concepts—including addition, sequential ordering of numbers, or even the concept of zero—but whether an insect can develop an understanding of such concepts remains unknown. We trained individual honey bees on the numerical concepts of “greater than” and “less than” using stimuli containing one to six elemental features. Bees could subsequently extrapolate the concept of “less than” to order zero numerosity at the lower end of the numerical continuum. Bees demonstrated an understanding that parallels animals such as the African grey parrot, nonhuman primates, and even preschool children.
Perception of contextual size illusions by honeybees in restricted and unrestricted viewing conditions
How different visual systems process images and make perceptual errors can inform us about cognitive and visual processes. One of the strongest geometric errors in perception is a misperception of size depending on the size of surrounding objects, known as the Ebbinghaus or Titchener illusion. The ability to perceive the Ebbinghaus illusion appears to vary dramatically among vertebrate species, and even populations, but this may depend on whether the viewing distance is restricted. We tested whether honeybees perceive contextual size illusions, and whether errors in perception of size differed under restricted and unrestricted viewing conditions. When the viewing distance was unrestricted, there was an effect of context on size perception and thus, similar to humans, honeybees perceived contrast size illusions. However, when the viewing distance was restricted, bees were able to judge absolute size accurately and did not succumb to visual illusions, despite differing contextual information. Our results show that accurate size perception depends on viewing conditions, and thus may explain the wide variation in previously reported findings across species. These results provide insight into the evolution of visual mechanisms across vertebrate and invertebrate taxa, and suggest convergent evolution of a visual processing solution.
Motion cues from the background influence associative color learning of honey bees in a virtual-reality scenario
Honey bees exhibit remarkable visual learning capacities, which can be studied using virtual reality (VR) landscapes in laboratory conditions. Existing VR environments for bees are imperfect as they provide either open-loop conditions or 2D displays. Here we achieved a true 3D environment in which walking bees learned to discriminate a rewarded from a punished virtual stimulus based on color differences. We included ventral or frontal background cues, which were also subjected to 3D updating based on the bee movements. We thus studied if and how the presence of such motion cues affected visual discrimination in our VR landscape. Our results showed that the presence of frontal, and to a lesser extent, of ventral background motion cues impaired the bees’ performance. Whenever these cues were suppressed, color discrimination learning became possible. We analyzed the specific contribution of foreground and background cues and discussed the role of attentional interference and differences in stimulus salience in the VR environment to account for these results. Overall, we show how background and target cues may interact at the perceptual level and influence associative learning in bees. In addition, we identify issues that may affect decision-making in VR landscapes, which require specific control by experimenters.
Visual learning in a virtual reality environment upregulates immediate early gene expression in the mushroom bodies of honey bees
Free-flying bees learn efficiently to solve numerous visual tasks. Yet, the neural underpinnings of this capacity remain unexplored. We used a 3D virtual reality (VR) environment to study visual learning and determine if it leads to changes in immediate early gene (IEG) expression in specific areas of the bee brain. We focused on kakusei, Hr38 and Egr1, three IEGs that have been related to bee foraging and orientation, and compared their relative expression in the calyces of the mushroom bodies, the optic lobes and the rest of the brain after color discrimination learning. Bees learned to discriminate virtual stimuli displaying different colors and retained the information learned. Successful learners exhibited Egr1 upregulation only in the calyces of the mushroom bodies, thus uncovering a privileged involvement of these brain regions in associative color learning and the usefulness of Egr1 as a marker of neural activity induced by this phenomenon. The neural bases of visual learning in bees have been understudied, relative to the olfactory learning process. Using a 3D virtual reality environment and gene expression analyses, the neural underpinnings of visual learning are explored here.
Aversive reinforcement improves visual discrimination learning in free-flying honeybees
Learning and perception of visual stimuli by free-flying honeybees has been shown to vary dramatically depending on the way insects are trained. Fine color discrimination is achieved when both a target and a distractor are present during training (differential conditioning), whilst if the same target is learnt in isolation (absolute conditioning), discrimination is coarse and limited to perceptually dissimilar alternatives. Another way to potentially enhance discrimination is to increase the penalty associated with the distractor. Here we studied whether coupling the distractor with a highly concentrated quinine solution improves color discrimination of both similar and dissimilar colors by free-flying honeybees. As we assumed that quinine acts as an aversive stimulus, we analyzed whether aversion, if any, is based on an aversive sensory input at the gustatory level or on a post-ingestional malaise following quinine feeding. We show that the presence of a highly concentrated quinine solution (60 mM) acts as an aversive reinforcer promoting rejection of the target associated with it, and improving discrimination of perceptually similar stimuli but not of dissimilar stimuli. Free-flying bees did not use remote cues to detect the presence of quinine solution; the aversive effect exerted by this substance was mediated via a gustatory input, i.e. via a distasteful sensory experience, rather than via a post-ingestional malaise. The present study supports the hypothesis that aversion conditioning is important for understanding how and what animals perceive and learn. By using this form of conditioning coupled with appetitive conditioning in the framework of a differential conditioning procedure, it is possible to uncover discrimination capabilities that may remain otherwise unsuspected. We show, therefore, that visual discrimination is not an absolute phenomenon but can be modulated by experience.
The Neural Signature of Visual Learning Under Restrictive Virtual-Reality Conditions
Honey bees are reputed for their remarkable visual learning and navigation capabilities. These capacities can be studied in virtual reality (VR) environments, which allow studying the performance of tethered animals in stationary flight or walking under full control of the sensory environment. Here we used a 2D VR setup in which a tethered bee, walking stationary under restrictive closed-loop conditions, learned to discriminate vertical rectangles differing in color and reinforcing outcome. Closed-loop conditions restricted stimulus control to lateral displacements. Consistent with prior VR analyses, bees learned to discriminate the trained stimuli. Ex vivo analyses on the brains of learners and non-learners showed that successful learning led to a downregulation of three immediate early genes in the main regions of the visual circuit, the optic lobes (OLs) and the calyces of the mushroom bodies (MBs). While Egr1 was downregulated in the OLs, Hr38 and kakusei were coincidently downregulated in the calyces of the MBs. Our work thus reveals that color discrimination learning induced a neural signature distributed along the sequential pathway of color processing that is consistent with an inhibitory trace. This trace may relate to the motor patterns required to solve the discrimination task, which are different from those underlying pathfinding in 3D VR scenarios allowing for navigation and exploratory learning and which lead to IEG upregulation.
Associative visual learning by tethered bees in a controlled visual environment
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS−). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS− after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS− also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Conceptual learning by miniature brains
Concepts act as a cornerstone of human cognition. Humans and non-human primates learn conceptual relationships such as ‘same’, ‘different’, ‘larger than’, ‘better than’, among others. In all cases, the relationships have to be encoded by the brain independently of the physical nature of the objects linked by the relation. Consequently, concepts are associated with high levels of cognitive sophistication and are not expected in an insect brain. Yet, various works have shown that the miniature brain of honeybees rapidly learns conceptual relationships involving visual stimuli. Concepts such as ‘same’, ‘different’, ‘above/below of’ or ‘left/right of’ are well mastered by bees. We review here evidence about concept learning in honeybees and discuss both its potential adaptive advantage and its possible neural substrates. The results reviewed here challenge the traditional view attributing supremacy to larger brains when it comes to the elaboration of concepts, and have wide implications for understanding how brains can form conceptual relations.
Symbolic representation of numerosity by honeybees (Apis mellifera)
The assignment of a symbolic representation to a specific numerosity is a fundamental requirement for humans solving complex mathematical calculations used in diverse applications such as algebra, accounting, physics and everyday commerce. Here we show that honeybees are able to learn to match a sign to a numerosity, or a numerosity to a sign, and subsequently transfer this knowledge to novel numerosity stimuli changed in colour properties, shape and configuration. While honeybees learned the associations between two quantities (two; three) and two signs (N-shape; inverted T-shape), they failed at reversing their specific task of sign-to-numerosity matching to numerosity-to-sign matching and vice versa (i.e. a honeybee that learned to match a sign to a number of elements was not able to invert this learning to match the numerosity of elements to a sign). Thus, while bees could learn the association between a symbol and numerosity, it was linked to the specific task and bees could not spontaneously extrapolate the association to a novel, reversed task. Our study therefore reveals that the basic requirement for numerical symbolic representation can be fulfilled by an insect brain, suggesting that the absence of its spontaneous emergence in animals is not due to cognitive limitation.
Different mechanisms underlie implicit visual statistical learning in honey bees and humans
Significance: Do animals encode statistical information about visual patterns the same way as humans do? If so, humans’ superior visual cognitive skills must depend on some other factors; if not, the nature of the differences can provide hints about what makes human learning so versatile. We provide a systematic comparison of automatic visual learning in humans and honey bees, showing that while bees do extract statistical information about co-occurrence contingencies of visual scenes, in contrast to humans, they do not automatically encode conditional information. Thus, acquiring implicit knowledge about the statistical properties of the visual environment may be a general mechanism in animals, but the richer representation developed automatically by humans might require specific probabilistic computational faculties. The ability to develop complex internal representations of the environment is considered a crucial antecedent to the emergence of humans’ higher cognitive functions. Yet it is an open question whether there is any fundamental difference in how humans and other good visual learner species naturally encode aspects of novel visual scenes. Using the same modified visual statistical learning paradigm and multielement stimuli, we investigated how human adults and honey bees (Apis mellifera) encode spontaneously, without dedicated training, various statistical properties of novel visual scenes. We found that, similarly to humans, honey bees automatically develop a complex internal representation of their visual environment that evolves with accumulation of new evidence even without a targeted reinforcement. In particular, with more experience, they shift from being sensitive to statistics of only elemental features of the scenes to relying on co-occurrence frequencies of elements while losing their sensitivity to elemental frequencies, but they never automatically encode the predictivity of elements.
In contrast, humans involuntarily develop an internal representation that includes single-element and co-occurrence statistics, as well as information about the predictivity between elements. Importantly, capturing human visual learning results requires a probabilistic chunk-learning model, whereas a simple fragment-based memory-trace model that counts occurrence summary statistics is sufficient to replicate honey bees’ learning behavior. Thus, humans’ sophisticated encoding of sensory stimuli that provides intrinsic sensitivity to predictive information might be one of the fundamental prerequisites of developing higher cognitive abilities.
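The distinction the last abstract draws, between raw co-occurrence frequencies (which bees tracked) and the predictivity of one element for another (which only humans encoded automatically), can be illustrated with a toy sketch. This is not the authors' model; the scenes, names, and `predictivity` function here are illustrative assumptions only:

```python
from collections import Counter
from itertools import combinations

# Toy "scenes": each scene is a set of elements shown together.
# A and B always appear together; D appears only with C, but C also appears alone.
scenes = [{"A", "B"}] * 3 + [{"C", "D"}] * 3 + [{"C"}] * 3

# Fragment-based summary statistics: raw element and pair (co-occurrence) counts.
element_counts = Counter(e for scene in scenes for e in scene)
pair_counts = Counter(
    frozenset(pair)
    for scene in scenes
    for pair in combinations(sorted(scene), 2)
)

def predictivity(element, other):
    """Conditional probability P(other | element) - the statistic the
    abstract says bees never encode automatically, unlike humans."""
    return pair_counts[frozenset({element, other})] / element_counts[element]
```

By co-occurrence counts alone, the pairs (A, B) and (C, D) look identical (each co-occurs 3 times), so a fragment-based counting model cannot tell them apart; only the conditional statistic reveals that A fully predicts B (P(B|A) = 1.0) while C only half-predicts D (P(D|C) = 0.5).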