Catalogue Search | MBRL
24 result(s) for "Chaigneau, Sergio E."
Using agreement probability to study differences in types of concepts and conceptualizers
by Chaigneau, Sergio E.; Moreno, Sebastián; Canessa, Enrique
in Behavioral Science and Psychology, Cognitive Psychology, Psychology
2024
Agreement probability p(a) is a homogeneity measure of the lists of properties that participants produce for a concept in a Property Listing Task (PLT). Its mathematical properties allow a rich analysis of property-based descriptions. To illustrate, we use p(a) to examine the differences between concrete and abstract concepts in sighted and blind populations. Results show that concrete concepts are more homogeneous within both the sighted and blind groups than abstract ones (i.e., they exhibit a higher p(a)), and that concrete concepts are less homogeneous in the blind group than in the sighted sample. This supports the idea that listed properties for concrete concepts should be more similar across subjects due to the influence of visual/perceptual information on the learning process. In contrast, abstract concepts are learned mainly from social and linguistic information, which varies more among people, making the listed properties more dissimilar across subjects. For abstract concepts, the difference in p(a) between sighted and blind participants is not statistically significant. Although this is a null result and should be treated with care, it is expected: abstract concepts should be learned by attending to the same social and linguistic input in both blind and sighted groups, so there is no reason to expect the respective lists of properties to differ. Finally, we used p(a) to classify concrete and abstract concepts with a good level of certainty. All these analyses suggest that p(a) can be fruitfully used to study data obtained in a PLT.
Journal Article
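The abstract above does not give the formula for p(a). As a purely illustrative sketch, not the published measure, the homogeneity of a set of property lists can be summarized by the mean pairwise Jaccard overlap between participants' lists; the function and variable names below are assumptions for the example.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two participants' property sets (0 = disjoint, 1 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def homogeneity(property_lists: list) -> float:
    """Mean pairwise overlap across participants: a simple stand-in for an
    agreement-style homogeneity measure (NOT the published p(a))."""
    pairs = list(combinations(property_lists, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Toy example: properties listed for a concrete concept (DOG) by three participants.
dog_lists = [
    {"is a pet", "has four legs", "barks"},
    {"is a pet", "barks", "has fur"},
    {"has four legs", "barks", "is loyal"},
]
print(f"homogeneity(DOG) = {homogeneity(dog_lists):.2f}")
```

On this toy measure, a more homogeneous (concrete-like) concept yields a higher score than a heterogeneous (abstract-like) one, mirroring the pattern the article reports for p(a).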
A mathematical model of semantic access in lexical and semantic decisions
2024
In this work, we use a mathematical model of the dynamics of the property listing task and test its ability to predict processing time in semantic and lexical decision tasks. The study aims to explore the temporal dynamics of semantic access in these tasks and to show that the mathematical model captures essential aspects of semantic access beyond the original task for which it was developed. In two studies using the semantic and lexical decision tasks, we used the mathematical model's coefficients to predict reaction times. Results showed that the model was able to predict processing time in both tasks, accounting for a portion of the total variance independent of that predicted by traditional psycholinguistic variables (i.e., frequency, familiarity, concreteness, imageability). Overall, this study provides evidence of the mathematical model's validity and generality, and offers insights into the characterization of concrete and abstract words.
Journal Article
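The abstract above reports that the model's coefficients explain variance in reaction times beyond traditional psycholinguistic predictors. A minimal sketch of that kind of incremental-variance check, with simulated data and hypothetical variable names rather than the study's materials, could look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200

# Simulated predictors (stand-ins, not the study's data):
# traditional psycholinguistic variables ...
freq, familiarity, concreteness = rng.normal(size=(3, n))
# ... plus a hypothetical model-derived coefficient for each word.
model_coef = rng.normal(size=n)

# Simulated reaction times influenced by both sets of predictors.
rt = 600 - 20 * freq - 10 * familiarity - 5 * concreteness - 15 * model_coef \
     + rng.normal(0, 30, n)

baseline = np.column_stack([freq, familiarity, concreteness])
augmented = np.column_stack([baseline, model_coef])

r2_base = LinearRegression().fit(baseline, rt).score(baseline, rt)
r2_full = LinearRegression().fit(augmented, rt).score(augmented, rt)
print(f"R2 baseline = {r2_base:.3f}, R2 with model coefficient = {r2_full:.3f}, "
      f"delta R2 = {r2_full - r2_base:.3f}")
```

A positive delta R2 for the augmented model is the kind of evidence the abstract describes: the model coefficient carries information about processing time that the traditional variables do not.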
CPNCoverageAnalysis: An R package for parameter estimation in conceptual properties norming studies
by Chaigneau, Sergio E.; Lagos, Rodrigo; Moreno, Sebastián
in Behavioral Science and Psychology, Cognitive Psychology, Comment
2023
In conceptual properties norming studies (CPNs), participants list properties that describe a set of concepts. From CPNs, many different parameters are calculated, such as semantic richness. A generally overlooked issue is that those values are only point estimates of the true, unknown population parameters. In the present work, we present an R package that allows us to treat those values as population parameter estimates. Relatedly, a general practice in CPNs is to use an equal number of participants to list properties for each concept (i.e., to standardize sample size). As we illustrate through examples, this procedure has negative effects on the statistical analyses of the data. Here, we argue that a better method is to standardize coverage (i.e., the proportion of sampled properties to the total number of properties that describe a concept), such that a similar coverage is achieved across concepts. When standardizing coverage rather than sample size, it is more likely that all the concepts in a CPN exhibit a similar representativeness. Moreover, by computing coverage the researcher can decide whether the CPN reached a sufficiently high coverage, so that its results might be generalizable to other studies. The R package we make available in the current work allows one to compute coverage and to estimate the number of participants needed to reach a target coverage. We demonstrate this sampling procedure by applying the R package to real and simulated CPN data.
Journal Article
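The CPNCoverageAnalysis package's own API is not shown in the record above. As a rough conceptual sketch, and not necessarily the estimator the package implements, coverage can be approximated from pooled listings with the classic Good-Turing sample-coverage estimate, where properties listed only once signal that many unseen properties remain:

```python
from collections import Counter

def sample_coverage(listed_properties: list) -> float:
    """Good-Turing sample-coverage estimate: 1 - (singletons / tokens).
    Many once-listed properties imply low coverage of the concept."""
    counts = Counter(listed_properties)
    n_tokens = sum(counts.values())
    singletons = sum(1 for c in counts.values() if c == 1)
    return 1.0 - singletons / n_tokens

# Toy pooled listing for one concept across several participants.
pooled = ["is a pet", "barks", "is a pet", "has fur", "has four legs",
          "barks", "is loyal", "is a pet", "chases cats"]
print(f"estimated coverage = {sample_coverage(pooled):.2f}")
```

Standardizing on a target value of this kind of estimate across concepts, rather than on the number of participants, is the sampling idea the abstract argues for.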
AC-PLT: An algorithm for computer-assisted coding of semantic property listing data
by Chaigneau, Sergio E.; Ramos, Diego; Moreno, Sebastián
in Algorithms, Behavioral Science and Psychology, Cognitive Psychology
2024
In this paper, we present a novel algorithm that uses machine learning and natural language processing techniques to facilitate the coding of feature listing data. Feature listing is a method in which participants are asked to provide a list of features that are typically true of a given concept or word. This method is commonly used in research studies to gain insights into people's understanding of various concepts. The standard procedure for extracting meaning from feature listings is to manually code the data, which can be time-consuming and prone to errors, leading to reliability concerns. Our algorithm addresses these challenges by automatically assigning human-created codes to feature listing data, achieving quantitatively good agreement with human coders. Our preliminary results suggest that the algorithm has the potential to improve the efficiency and accuracy of content analysis of feature listing data. It is also an important step toward a fully automated coding algorithm, which we are currently devising.
Journal Article
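The AC-PLT algorithm itself is not reproduced in the abstract above. A bare-bones sketch of the general idea, assigning each raw response to the most similar human-created code by text similarity, could use TF-IDF vectors and cosine similarity; this is an assumption about the general approach, not the published algorithm, and the example codes and responses are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical human-created codes and raw participant responses.
codes = ["has four legs", "is a pet", "makes sounds"]
responses = ["it has 4 legs", "people keep it at home as a pet",
             "barks and makes other sounds"]

vectorizer = TfidfVectorizer().fit(codes + responses)
code_vecs = vectorizer.transform(codes)
resp_vecs = vectorizer.transform(responses)

# Assign each response to its most similar code (cosine similarity).
sims = cosine_similarity(resp_vecs, code_vecs)
for response, row in zip(responses, sims):
    best = row.argmax()
    print(f"{response!r} -> {codes[best]!r} (similarity {row[best]:.2f})")
```

In a computer-assisted workflow like the one the abstract describes, a human coder would review and correct these proposed assignments rather than accept them automatically.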
Single and multiple systems in probabilistic categorization
2025
A recent review (… Rev. Psychol. 3, 536–551 (2024)) discussed the past 30 years of categorization research and concluded that a specific multiple-systems account, the competition between verbal and implicit systems (COVIS) model, is the best available account of most categorization phenomena. [...] The authors also do not address how the COVIS model would handle probabilistic categorization. Given these considerations, we are not convinced that COVIS (or multiple-systems accounts more generally) is the best explanation of the data from probabilistic categorization tasks, or that it accurately models all relevant category learning.
Journal Article
Describing and understanding the time course of the property listing task
by Chaigneau, Sergio E.; Moreno, Sebastián; Canessa, Enrique
in Artificial Intelligence, Behavioral Sciences, Biomedical and Life Sciences
2024
To study linguistically coded concepts, researchers often resort to the Property Listing Task (PLT). In a PLT, participants are asked to list properties that describe a concept (e.g., for DOG, subjects may list "is a pet", "has four legs", etc.). When PLT data are collected for many concepts, researchers obtain Conceptual Properties Norms (CPNs), which are used to study semantic content and as a source of control variables. Though the PLT and CPNs are widely used across psychology, only recently has a model that describes the listing course of a PLT been developed and validated. That original model describes the listing course using the order of production of properties. Here we go a step further and validate the model using response times (RT), i.e., the time from cue onset to property listing. Our results show that RT data exhibit the same regularities observed in the previous model, but now we can also analyze the time course, i.e., the dynamics of the PLT. As such, the RT-validated model may be applied to study several similar memory retrieval tasks, such as the Free Listing Task and the Verbal Fluency Task, and to research related cognitive processes. To illustrate those kinds of analyses, we present a brief example of the difference in the PLT's dynamics between listing properties for abstract versus concrete concepts, which shows that the model may be fruitfully applied to study concepts.
Journal Article
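The abstract above does not spell out the listing-course model, so the following is purely illustrative and not the validated model from the article: one simple way to summarize PLT temporal dynamics is to fit a saturating curve to the cumulative number of properties produced as a function of time from cue onset. The data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(t, a, tau):
    """Cumulative properties listed by time t: approaches a as retrieval slows."""
    return a * (1.0 - np.exp(-t / tau))

# Hypothetical data: seconds from cue onset at which each property was produced.
response_times = np.array([1.2, 2.0, 3.1, 4.5, 6.4, 9.0, 13.2, 19.5])
cumulative = np.arange(1, len(response_times) + 1)

(a_hat, tau_hat), _ = curve_fit(saturating, response_times, cumulative, p0=[10.0, 5.0])
print(f"estimated asymptote = {a_hat:.1f} properties, time constant = {tau_hat:.1f} s")
```

Comparing fitted parameters between abstract and concrete concepts is the kind of dynamics analysis the abstract illustrates, although the article's own model is defined differently.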
Validity of a One-Stop Automatic Algorithm for Counting Clusters and Shifts in the Semantic Fluency Task
We introduce PROXIS, a computational algorithm for the Semantic Fluency Task (SFT) that automatically counts clusters and shifts. We compared its output with that of human coders and of another cluster/shift counting algorithm (Forager), and we compared its performance in predicting executive functions (EF), intelligence, processing speed, and semantic retrieval against both human coders and Forager. Correlations with EF subdomains and other cognitive factors closely resemble those of human coders, evidencing convergent validity. We also used Naïve Bayes and Decision Tree classifiers for age classification, with PROXIS outputs successfully discriminating age groups, evidence of the meaning and interpretability of those counts. Clusters and shifts were found to be more important than word counts. PROXIS's consistency extended across semantic categories (animals, clothing, foods), suggesting its robustness and generalizability. Comparing PROXIS's convergent validity with Forager's, we found that they are on par; however, PROXIS's ability to discriminate between participants' age groups is substantially higher than Forager's. We believe that PROXIS is applicable beyond the specifics of the SFT to many tasks in which people list items from semantic memory (e.g., free associates, top-of-mind, feature listing). Practical implications of the algorithm's ease of implementation and its relevance for studying the relation of the SFT to EFs and other research problems are discussed.
Journal Article
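PROXIS is not described in enough detail in the record above to reproduce. A bare-bones sketch of the underlying idea, counting a shift whenever the semantic relatedness of consecutive items drops below a threshold and treating the resulting runs as clusters, might look like the following; the similarity function and threshold are assumptions for illustration, not the algorithm's actual components.

```python
def count_clusters_and_shifts(items, similarity, threshold=0.5):
    """Walk the fluency sequence; a shift starts a new cluster whenever
    consecutive items are less similar than the threshold."""
    shifts = 0
    for prev, curr in zip(items, items[1:]):
        if similarity(prev, curr) < threshold:
            shifts += 1
    clusters = shifts + 1 if items else 0
    return clusters, shifts

# Toy similarity: shared hand-coded subcategory tags, a stand-in for the
# automatic semantic proximity a tool like PROXIS would compute.
tags = {"dog": {"pet"}, "cat": {"pet"}, "lion": {"wild"},
        "tiger": {"wild"}, "cow": {"farm"}}
def similarity(a, b):
    return 1.0 if tags[a] & tags[b] else 0.0

sequence = ["dog", "cat", "lion", "tiger", "cow"]
print(count_clusters_and_shifts(sequence, similarity))  # (3, 2): pet, wild, farm runs
```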
Computational Cognitive models of Categorization: Predictions under Conditions of Classification Uncertainty
by Chaigneau, Sergio E.; Marchant, Nicolás
in categorization, category learning, computational cognitive models
2022
In the category learning literature, similarity models have monopolized a good deal of research. The prototype and exemplar models are both based on the idea that people represent the structure of categories and category instances in the physical world in a mental similarity space. However, evidence for these models comes mainly from paradigms that provide subjects with deterministic feedback (i.e., exemplars belong to their corresponding categories with probability = 1). There is evidence that results obtained with deterministic feedback paradigms may not generalize well under probabilistic feedback conditions (i.e., where exemplars belong to their corresponding categories with probability less than 1). We also suggest that probabilistic feedback may better reflect natural conditions, which is another important reason to pursue probabilistic feedback research. Thus, in the current work we set up a category learning experiment with probabilistic feedback and use it to evaluate different models. In addition to the two similarity models discussed above, we also use an associationist model that does not rely on the similarity construct. To compare our three models, we rely on computational modeling, a standard approach to model comparison in cognitive psychology. Our results show that our associationist model outperforms the similarity models on all our model evaluation measures. After presenting our results, we discuss why the similarity-based models fail and suggest some future lines of research made possible by probabilistic feedback procedures.
Journal Article
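The abstract above names the model families but not their equations. As a hedged illustration of the exemplar family, a generalized-context-model-style computation rather than the exact variant the authors fit, category choice probabilities can be derived from a probe's summed similarity to stored exemplars; the stimuli and labels below are invented.

```python
import numpy as np

def gcm_choice_probs(probe, exemplars, labels, c=2.0):
    """Exemplar (GCM-style) prediction: similarity decays exponentially with
    distance, and choice probability is each category's share of summed similarity."""
    exemplars = np.asarray(exemplars, dtype=float)
    distances = np.abs(exemplars - probe).sum(axis=1)      # city-block distance
    sims = np.exp(-c * distances)
    categories = sorted(set(labels))
    summed = {k: sims[[i for i, lab in enumerate(labels) if lab == k]].sum()
              for k in categories}
    total = sum(summed.values())
    return {k: v / total for k, v in summed.items()}

# Toy two-dimensional stimuli with category feedback (which, in a probabilistic
# paradigm, would only sometimes match the "true" category).
exemplars = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8), (0.15, 0.9)]
labels    = ["A",        "A",        "B",        "B",        "B"]
print(gcm_choice_probs(np.array([0.2, 0.25]), exemplars, labels))
```

A prototype model would instead compare the probe to each category's average exemplar, and an associationist model of the kind the abstract favors would learn cue-category weights incrementally without a similarity space.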
When are concepts comparable across minds?
2016
In communication, people cannot resort to direct reference (e.g., pointing) when using diffuse concepts like democracy. Given that concepts reside in individuals' minds, how can people share those concepts? We argue that concepts are comparable across a social group if they afford agreement for those who use them, and that agreement occurs whenever individuals receive evidence that others conceptualize a given situation similarly to them. Based on Conceptual Agreement Theory, we show how to compute an agreement probability from the sets of properties belonging to concepts. If that probability is sufficiently high, the concepts afford an adequate level of agreement, and one may say that they are comparable across individuals' minds. In contrast to other approaches, our method treats inter-individual variability in naturally occurring conceptual content as a fact that must be taken into account, whereas other theories treat variability as error that should be cancelled out. Given that conceptual variability will exist, our approach may establish whether concepts are comparable across individuals' minds more soundly than previous methods.
Journal Article