1,754 result(s) for "Feature integration"
How feature integration theory integrated cognitive psychology, neurophysiology, and psychophysics
Anne Treisman’s Feature Integration Theory (FIT) is a landmark in cognitive psychology and vision research. While many have discussed how Treisman’s theory has fared since it was first proposed, it is less common to approach FIT from the other side in time: to examine what experimental findings, theoretical concepts, and ideas inspired it. The theory did not enter into a theoretical vacuum. Treisman’s ideas were inspired by a large literature on a number of topics within visual psychophysics, cognitive psychology, and visual neurophysiology. Several key ideas developed contemporaneously within these fields that inspired FIT, and the theory involved an attempt at integrating them. Our aim here was to highlight the conceptual problems, experimental findings, and theoretical positions that Treisman was responding to with her theory and that the theory was intended to explain. We review a large number of findings from the decades preceding the proposal of feature integration theory showing how the theory integrated many ideas that developed in parallel within neurophysiology, visual psychophysics, and cognitive psychology. Our conclusion is that FIT made sense of many preceding findings, integrating them in an elegant way within a single theoretical account.
Multisensory feature integration in (and out) of the focus of spatial attention
Anne Treisman transformed the way in which we think about visual feature integration. However, that does not mean that she was necessarily right, nor that she looked much beyond vision when considering how features might be bound together into perceptual objects. While such a narrow focus undoubtedly makes sense, given the complexity of human multisensory information processing, it is nevertheless somewhat surprising to find that Treisman herself never extended her feature integration theory outside of the visual modality. After all, she first cut her ‘attentional teeth’ thinking about problems of auditory and audiovisual selective attention. In this article, we review the literature concerning feature integration beyond the visual modality, concentrating, in particular, on the integration of features from different sensory modalities. We highlight a number of challenges facing any straightforward attempt to extend feature integration to the non-visual (i.e. auditory and tactile) and cross-modal (or multisensory) cases. These challenges include the problem of how basic features should be defined, the question of whether it even makes sense to talk of objects of perception in the auditory and olfactory modalities, the possibility of integration outside of the focus of spatial attention, and the integration of features from different sensory modalities in the control of action. Nevertheless, despite such limitations, Treisman’s feature integration theory still stands as the standard approach against which alternatives are assessed, be it in the visual case or, increasingly, beyond.
The structure of illusory conjunctions reveals hierarchical binding of multipart objects
The world around us is filled with complex objects, full of color, motion, shape, and texture, and these features seem to be represented separately in the early visual system. Anne Treisman pointed out that binding these separate features together into coherent conscious percepts is a serious challenge, and she argued that selective attention plays a critical role in this process. Treisman also showed that, consistent with this view, outside the focus of attention we suffer from illusory conjunctions: misperceived pairings of features into objects. Here we used Treisman’s logic to study the structure of pre-attentive representations of multipart, multicolor objects, by exploring the patterns of illusory conjunctions that arise outside the focus of attention. We found consistent evidence of some pre-attentive binding of colors to their parts, and weaker evidence of binding multiple colors of the same object. The extent to which such hierarchical binding occurs seems to depend on the geometric structure of multipart objects: Objects whose parts are easier to separate seem to exhibit greater pre-attentive binding. Together, these results suggest that representations outside the focus of attention are not entirely “shapeless bundles of features,” but preserve some meaningful object structure.
Feature integration theory in non-humans: Spotlight on the archerfish
The ability to visually search, quickly and accurately, for designated items in cluttered environments is crucial for many species to ensure survival. Feature integration theory, one of the most influential theories of attention, suggests that certain visual features that facilitate this search are extracted pre-attentively in a parallel fashion across the visual field during early visual processing. Hence, if some objects of interest possess such a feature uniquely, it will pop out from the background during the integration stage and draw visual attention immediately and effortlessly. For years, visual search research has explored these ideas by investigating the conditions (and visual features) that characterize efficient versus inefficient visual searches. The bulk of research has focused on human vision, though ecologically there are many reasons to believe that feature integration theory is applicable to other species as well. Here we review the main findings regarding the relevance of feature integration theory to non-human species and extend it with new research on one particular animal model – the archerfish. Specifically, we study both archerfish and humans in an extensive and comparative set of visual-search experiments. The findings indicate that both species exhibit similar behavior in basic feature searches and in conjunction search tasks. In contrast, performance differed in searches defined by shape. These results suggest that evolution pressured many visual features to pop out for both species despite cardinal differences in brain anatomy and living environment, and strengthen the argument that aspects of feature integration theory may be generalizable across the animal kingdom.
Medium versus difficult visual search: How a quantitative change in the functional visual field leads to a qualitative difference in performance
The dominant theories of visual search assume that search is a process involving comparisons of individual items against a target description that is based on the properties of the target in isolation. Here, we present four experiments that demonstrate that this holds true only in difficult search. In medium search it seems that the relation between the target and neighbouring items is also part of the target description. We used two sets of oriented lines to construct the search items. The cardinal set contained horizontal and vertical lines, the diagonal set contained left diagonal and right diagonal lines. In all experiments, participants knew the identity of the target and the line set used to construct it. In difficult search this knowledge allowed performance to improve in displays where only half of the search items came from the same line set as the target (50% eligibility), relative to displays where all items did (100% eligibility). However, in medium search, performance was actually poorer for 50% eligibility, especially on target-absent trials. This opposite effect of ineligible items in medium search and difficult search is hard to reconcile with theories based on individual items. It is more in line with theories that conceive search as a sequence of fixations where the number of items processed during a fixation depends on the difficulty of the search task: When search is medium, multiple items are processed per fixation. But when search is difficult, only a single item is processed.
Intrusions into the shadow of attention: A new take on illusory conjunctions
We present new evidence about illusory conjunctions (ICs) suggesting that their current explanation requires revision. According to Feature Integration Theory (FIT; Treisman & Gelade, Cognitive Psychology, 12, 97–136, 1980), focal attention to a single stimulus is required to bind its features into an integrated percept. FIT predicts that if attention is spread over multiple stimuli, features of these different stimuli can be combined into a single percept and produce ICs. Treisman and Schmidt (Cognitive Psychology, 14, 107–141, 1982) and Cohen & Ivry (Journal of Experimental Psychology: Human Perception and Performance, 15(4), 650–663, 1989) supported this prediction. In the latter study, participants viewed brief displays containing two digits and two colored letters. Digit locations were pre-cued, and participants were instructed to prioritize the digits and to spread their attention across the region encompassed by the digits. Cohen & Ivry found that reports of one letter (the ‘target’) produced ICs when both letters appeared between the digits. Expanding on Cohen & Ivry’s paradigm, we find that both letters do not need to appear between the digits to produce ICs. While the target letter was highly susceptible to ICs if the target appeared inside the position of a nearby digit, the position of the other letter was largely irrelevant. Our experimental results also argue that these ICs were not due to mnemonic errors occurring while the digits are being reported. Based on our findings, we propose that attention to the digits casts an attentional ‘shadow’ projecting towards fixation, interfering with processing of target letters in that shadow and allowing color information from elsewhere in the display to be included in the resulting percept.
Which search are you on? Adapting to color while searching for shape
Human observers adjust their attentional control settings when searching for a target in the presence of predictable changes in the target-defining feature dimension. We investigated whether observers also adapt to changes in a nondefining target dimension. According to feature integration theory, stimuli that are unique in their environment in a single feature dimension can be detected with little effort. In two experiments, we studied how observers searching for such singletons adapt their attentional control settings to a dynamic change in a nondefining target dimension. Participants searched for a shape singleton and freely chose between two targets in each trial. The two targets differed in color, and the ratio of distractors colored like each target varied dynamically across trials. A model-based analysis with a Bayesian estimation approach showed that participants adapted their target choices to the color ratio: They tended to select the target from the smaller color subset, and switched their preference both when the color ratio changed between gray and heterogeneous colors (Exp. 1) and when it changed between red and blue (Exp. 2). Participants thus tuned their attentional control settings toward color, although the target was defined by shape. We concluded that observers spontaneously adapted their behavior to changing regularities in the environment. Because adaptation was more pronounced when color homogeneity allowed for element grouping, we suggest that observers adapt to regularities that can be registered without attentional resources. They do so even if the changes are not relevant for accomplishing the task—a process presumably based on statistical learning.
Effects of changing object identity on location working memory
It is widely accepted that features and locations are represented independently in an initial stage of visual processing. But to what degree are they represented separately at a later stage, after objects enter visual working memory (VWM)? In one of her last studies on VWM, Treisman raised an open question about how people represent locations in VWM, suggesting that locations may be remembered independently of what occupies them. Using photographs of real-world objects, we tested the independence of location memory from object identity in a location change detection task. We introduced changes to object identities between the encoding and test arrays, but instructed participants to treat the objects as placeholders. Three experiments showed that location memory was disrupted when the placeholders changed shape or orientation. The disruption was more noticeable for elongated than for round placeholders and was comparable between real-world objects and rectangles of similar aspect ratio. These findings suggest that location representation is sensitive to the placeholders’ geometric properties. Though they contradict the idea that objects are just placeholders in location working memory, the findings support Treisman’s proposal that the items in VWM are bound to the global configuration of the memory array.
Color–Texture Pattern Classification Using Global–Local Feature Extraction, an SVM Classifier, with Bagging Ensemble Post-Processing
Many applications in image analysis require the accurate classification of complex patterns including both color and texture, e.g., in content image retrieval, biometrics, and the inspection of fabrics, wood, steel, ceramics, and fruits, among others. A new method for pattern classification using both color and texture information is proposed in this paper. The proposed method includes the following steps: division of each image into global and local samples, texture and color feature extraction from samples using Haralick statistics and a binary quaternion-moment-preserving method, a classification stage using a support vector machine, and a final stage of post-processing employing a bagging ensemble. One of the main contributions of this method is the image partition, allowing image representation by both global and local features. This partition captures most of the information present in the image for colored texture classification, allowing improved results. The proposed method was tested on four databases extensively used in color–texture classification: the Brodatz, VisTex, Outex, and KTH-TIPS2b databases, yielding correct classification rates of 97.63%, 97.13%, 90.78%, and 92.90%, respectively. The use of the post-processing stage improved those results to 99.88%, 100%, 98.97%, and 95.75%, respectively. We compared our results to the best previously published results on the same databases, finding significant improvements in all cases.
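The texture-extraction step the abstract names (Haralick statistics over global and local samples) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the gray-level quantization, pixel offset, grid size, and choice of only two statistics (contrast and energy) are assumptions for clarity, and the quaternion color features, SVM, and bagging stages are omitted.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy).
    `image` must already be quantized to integer values in [0, levels)."""
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    total = m.sum()
    return m / total if total else m

def haralick_features(image, levels=8):
    """Two classic Haralick statistics computed from the GLCM:
    contrast (local intensity variation) and energy (uniformity)."""
    p = glcm(image, levels=levels)
    i, j = np.indices(p.shape)
    contrast = np.sum(((i - j) ** 2) * p)
    energy = np.sum(p ** 2)
    return np.array([contrast, energy])

def global_local_features(image, grid=2, levels=8):
    """Concatenate features of the whole image (global sample) with
    features of each tile in a grid x grid partition (local samples)."""
    feats = [haralick_features(image, levels)]
    h, w = image.shape
    for r in range(grid):
        for c in range(grid):
            tile = image[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
            feats.append(haralick_features(tile, levels))
    return np.concatenate(feats)
```

On a perfectly uniform image the GLCM concentrates all mass in one cell, so contrast is 0 and energy is 1; with `grid=2` the descriptor stacks one global and four local feature pairs into a length-10 vector that a downstream classifier could consume.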
Integrated population models
Population dynamics models have long assumed that populations are composed of a restricted number of groups, where individuals in each group have identical demographic rates and where all groups are similarly affected by density-dependent and -independent effects. However, individuals usually vary tremendously in performance and in their sensitivity to environmental conditions or resource limitation, such that individual contributions to population growth will be highly variable. Recent efforts to integrate individual processes in population models open up new opportunities for the study of eco-evolutionary processes, such as the density-dependent influence of environmental conditions on the evolution of morphological, behavioral, and life-history traits. We review recent advances that demonstrate how including individual mechanisms in models of population dynamics contributes to a better understanding of the drivers of population dynamics within the framework of integrated population models (IPMs). IPMs allow for the integration in a single inferential framework of different data types as well as variable population structure including sex, social group, or territory, all of which can be formulated to include individual-level processes. Through a series of examples, we first show how IPMs can be beneficial for getting more accurate estimates of demographic traits than classic matrix population models by including basic population structure and their influence on population dynamics. Second, the integration of individual- and population-level data allows estimating density-dependent effects along with their inherent uncertainty by directly using the population structure and size to feedback on demography. Third, we show how IPMs can be used to study the influence of the dynamics of continuous individual traits and individual quality on population dynamics. We conclude by discussing the benefits and limitations of IPMs for integrating data at different spatial, temporal, and organismal levels to build more mechanistic models of population dynamics.