1,187 results for "Robinson, Maria"
Measuring memory is harder than you think: How to avoid problematic measurement practices in memory research
We argue that critical areas of memory research rely on problematic measurement practices and provide concrete suggestions to improve the situation. In particular, we highlight the prevalence of memory studies that use tasks (like the “old/new” task: “have you seen this item before? yes/no”) where quantifying performance is deeply dependent on counterfactual reasoning that depends on the (unknowable) distribution of underlying memory signals. As a result of this difficulty, different literatures in memory research (e.g., visual working memory, eyewitness identification, picture memory, etc.) have settled on a variety of fundamentally different metrics to get performance measures from such tasks (e.g., A′, corrected hit rate, percent correct, d′, diagnosticity ratios, K values, etc.), even though these metrics make different, contradictory assumptions about the distribution of latent memory signals, and even though all of their assumptions are frequently incorrect. We suggest that in order for the psychology and neuroscience of memory to become a more cumulative, theory-driven science, more attention must be given to measurement issues. We make a concrete suggestion: The default memory task for those simply interested in performance should change from old/new (“did you see this item?”) to two-alternative forced-choice (“which of these two items did you see?”). In situations where old/new variants are preferred (e.g., eyewitness identification; theoretical investigations of the nature of memory signals), receiver operating characteristic (ROC) analysis should be performed rather than relying on a binary old/new task.
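The abstract's point that common old/new metrics embed contradictory distributional assumptions can be made concrete with a small sketch (our illustration, not the authors' code; the hit and false-alarm rates below are invented for the example):

```python
# Illustrative only: under the equal-variance Gaussian model, the same latent
# sensitivity can yield different hit/false-alarm pairs at different response
# criteria, and metrics built on different assumptions disagree about whether
# "performance" changed.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance Gaussian signal-detection measure."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def corrected_hit_rate(hit_rate, fa_rate):
    """High-threshold ('all-or-none') measure: hits minus false alarms."""
    return hit_rate - fa_rate

# Two response criteria, same latent sensitivity (d' = 2) under the Gaussian model:
neutral = (0.8413, 0.1587)       # near-neutral criterion
conservative = (0.6915, 0.0668)  # conservative criterion

print(round(d_prime(*neutral), 2), round(d_prime(*conservative), 2))  # -> 2.0 2.0
print(round(corrected_hit_rate(*neutral), 4),
      round(corrected_hit_rate(*conservative), 4))  # the two values differ
```

Under the Gaussian model the two observers are equally sensitive, yet the corrected hit rate (which assumes an all-or-none threshold process) reports different performance, which is the kind of metric disagreement the paper targets.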
Conrad Kain : letters from a wandering mountain guide, 1906-1933
"Conrad Kain is a titan amongst climbers in Canada and is well-known in mountaineering circles all over the world. His letters to Amelie Malek--a life-long friend--offer a candid view into the deepest thoughts of the Austrian mountain guide, and are a perfect complement to his autobiography, Where the Clouds Can Go. The 144 letters provide a unique and personal view of what it meant to immigrate to Canada in the early part of the twentieth century. Kain's letters are ordered chronologically with annotations, keeping the sections in English untouched, while those in German have been carefully translated. Historians and mountain culture enthusiasts worldwide will appreciate Kain's genius for description, his passion for nature, his opinions, and his musings about his life."-- Provided by publisher.
Local but not global graph theoretic measures of semantic networks generalize across tasks
“Dogs” are connected to “cats” in our minds, and “backyard” to “outdoors.” Does the structure of this semantic knowledge differ across people? Network-based approaches are a popular representational scheme for thinking about how relations between different concepts are organized. Recent research uses graph theoretic analyses to examine individual differences in semantic networks for simple concepts and how they relate to other higher-level cognitive processes, such as creativity. However, it remains ambiguous whether individual differences captured via network analyses reflect true differences in the structure of semantic knowledge, or differences in how people strategically approach semantic relatedness tasks. To test this, we examine the reliability of local and global metrics of semantic networks for simple concepts across different semantic relatedness tasks. In four experiments, we find that both weighted and unweighted graph theoretic representations reliably capture individual differences in local measures of semantic networks (e.g., how related pot is to pan versus lion). In contrast, we find that metrics of global structural properties of semantic networks, such as the average clustering coefficient and shortest path length, are less robust across tasks and may not provide reliable individual difference measures of how people represent simple concepts. We discuss the implications of these results and offer recommendations for researchers who seek to apply graph theoretic analyses in the study of individual differences in semantic memory.
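The local/global distinction the abstract draws can be sketched on a toy word graph (an invented example, not the study's data): a local measure is a property of one node or edge, while global measures such as the clustering coefficient and shortest path length summarize the whole network.

```python
# Toy semantic network: nodes are words, edges carry hypothetical
# relatedness ratings (0-1). Invented values for illustration only.
from collections import deque

edges = {
    ("pot", "pan"): 0.9, ("pot", "stove"): 0.7, ("pan", "stove"): 0.8,
    ("stove", "kitchen"): 0.8, ("kitchen", "house"): 0.6,
    ("lion", "tiger"): 0.9, ("tiger", "cat"): 0.7, ("cat", "house"): 0.5,
}

def neighbors(node):
    return {b if a == node else a for a, b in edges if node in (a, b)}

def clustering(node):
    """Local clustering coefficient: fraction of a node's neighbor
    pairs that are themselves directly connected."""
    nbrs = sorted(neighbors(node))
    pairs = [(u, v) for i, u in enumerate(nbrs) for v in nbrs[i + 1:]]
    if not pairs:
        return 0.0
    linked = sum((u, v) in edges or (v, u) in edges for u, v in pairs)
    return linked / len(pairs)

def shortest_path_length(src, dst):
    """Unweighted BFS distance between two words (a global property:
    it depends on the whole graph, not just one edge)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in neighbors(node) - seen:
            seen.add(nxt)
            frontier.append((nxt, d + 1))
    return None

# Local: the rated relatedness of one pair, e.g. pot-pan (0.9).
# Global: structure spanning the graph.
print(clustering("pot"))                    # -> 1.0 (pan and stove are linked)
print(shortest_path_length("pot", "lion"))  # -> 6
```

Averaging `clustering` over all nodes and `shortest_path_length` over all pairs gives the global summary statistics (average clustering coefficient, average shortest path length) that the study found to be less reliable across tasks.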
Immune Dysregulation in Autism Spectrum Disorder: What Do We Know about It?
Autism spectrum disorder (ASD) is a group of complex, multifactorial neurodevelopmental disorders characterized by a wide and variable set of neuropsychiatric symptoms, including deficits in social communication, narrow and restricted interests, and repetitive behavior. The immune hypothesis is considered a major factor contributing to autism pathogenesis, as well as a way to explain the differences in clinical phenotypes and the comorbidities that influence disease course and severity. Evidence from both cerebrospinal fluid and peripheral blood highlights a link between immune dysfunction and behavioral traits in autism, and shows the utility of specific immunophenotypes for identifying autistic subgroups and their underlying behavioral symptoms. This review summarizes current insights into immune dysfunction in ASD, with particular reference to immunological factors related to maternal influences on autism development; comorbidities influencing autism disease course and severity; and other factors of particular relevance, including obesity. Finally, we describe the main similarities between the immunopathology of neurodevelopmental and neurodegenerative disorders, taking autism and Parkinson's disease as respective examples.
Zooming in on what counts as core and auxiliary: A case study on recognition models of visual working memory
Research on best practices in theory assessment highlights that testing theories is challenging because they inherit a new set of assumptions as soon as they are linked to a specific methodology. In this article, we integrate and build on this work by demonstrating the breadth of these challenges. We show that tracking auxiliary assumptions is difficult because they are made at different stages of theory testing and at multiple levels of a theory. We focus on these issues in a reanalysis of a seminal study and its replications, both of which use a simple working-memory paradigm and a mainstream computational modeling approach. These studies provide the main evidence for “all-or-none” recognition models of visual working memory and are still used as the basis for how to measure performance in popular visual working-memory tasks. In our reanalysis, we find that core practical auxiliary assumptions were unchecked and violated; the original model comparison metrics and data were not diagnostic in several experiments. Furthermore, we find that models were not matched on “theory general” auxiliary assumptions, meaning that the set of tested models was restricted, and not matched in theoretical scope. After testing these auxiliary assumptions and identifying diagnostic testing conditions, we find evidence for the opposite conclusion. That is, continuous resource models outperform all-or-none models. Together, our work demonstrates why tracking and testing auxiliary assumptions remains a fundamental challenge, even in prominent studies led by careful, computationally minded researchers. Our work also serves as a conceptual guide on how to identify and test the gamut of auxiliary assumptions in theory assessment, and we discuss these ideas in the context of contemporary approaches to scientific discovery.
A quantitative model of ensemble perception as summed activation in feature space
Ensemble perception is a process by which we summarize complex scenes. Despite the importance of ensemble perception to everyday cognition, there are few computational models that provide a formal account of this process. Here we develop and test a model in which ensemble representations reflect the global sum of activation signals across all individual items. We leverage this set of minimal assumptions to formally connect a model of memory for individual items to ensembles. We compare our ensemble model against a set of alternative models in five experiments. Our approach uses performance on a visual memory task for individual items to generate zero-free-parameter predictions of interindividual and intraindividual differences in performance on an ensemble continuous-report task. Our top-down modelling approach formally unifies models of memory for individual items and ensembles and opens an avenue for building and comparing models of distinct memory processes and representations. Robinson and Brady present a computational model of ensemble perception as the global sum of feature activations of individual items.
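The core idea of a "global sum of activation signals" can be sketched in a few lines (a minimal illustration under our own simplifying assumptions: Gaussian tuning curves over a circular feature space and a peak read-out; this is not the authors' fitted model):

```python
# Minimal sketch of ensemble perception as summed activation: each item
# contributes a Gaussian activation profile over a circular feature space
# (e.g., color angle), and the ensemble estimate is the peak of the sum.
import math

FEATURES = list(range(360))  # feature space: angles in degrees

def circ_dist(a, b):
    """Shortest angular distance on the 360-degree circle."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def item_activation(item, sd=20.0):
    """Gaussian tuning curve centered on the item's feature value.
    The width sd is an arbitrary choice for this sketch."""
    return [math.exp(-0.5 * (circ_dist(f, item) / sd) ** 2) for f in FEATURES]

def ensemble_estimate(items, sd=20.0):
    """Global sum of per-item activation signals; read out the peak."""
    summed = [0.0] * len(FEATURES)
    for item in items:
        for i, a in enumerate(item_activation(item, sd)):
            summed[i] += a
    return max(range(len(FEATURES)), key=summed.__getitem__)

print(ensemble_estimate([80, 100, 120]))  # -> 100 (peak lands on the mean hue)
```

With symmetric items the summed activation peaks at their mean, which is why a summed-activation scheme behaves like averaging in simple cases while still being driven entirely by the individual-item representations.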
Noisy and hierarchical visual memory across timescales
Both in everyday life and in memory research, people tend to think that items are ‘held’ in mind, in the same way that a real-world object can be held in one’s hand. Inspired by this metaphor, traditional work on visual working memory and visual long-term memory focuses on understanding how many objects are remembered or forgotten, or held or lost, in particular circumstances. By contrast, newer computational and empirical work on visual memory focuses on the role of noise in memory representations — in which memories are thought to vary continually in ‘strength’ or ‘precision’ — as well as the role of the visual hierarchy and priors in structuring memory. In this Review, we merge these contemporary theories and evidence. We describe how fundamentally noisy memory representations are instantiated at different levels of the visual hierarchy and support both visual working memory and long-term memory. We also discuss how thinking of memory in this way can direct further research and illuminate the nature of cognitive function more broadly.

Visual memory has traditionally been thought of as all-or-none, with items remembered perfectly or completely forgotten. In this Review, Brady and colleagues synthesize work that indicates that visual memory representations in working memory and long-term memory are not all-or-none but are instead noisy and hierarchical.