Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
2,912 result(s) for "Silva, Claudio T."
Few-shot genes selection: subset of PAM50 genes for breast cancer subtypes classification
by Mendonca-Neto, Rayol; Silva, Claudio T.; Nakamura, Eduardo F.
in Accuracy, Algorithms, Bioinformatics
2024
Background
In recent years, researchers have made significant strides in understanding the heterogeneity of breast cancer and its various subtypes. However, the wealth of genomic and proteomic data available today necessitates efficient frameworks, instruments, and computational tools for meaningful analysis. Despite its success as a prognostic tool, the PAM50 gene signature’s reliance on many genes presents challenges in terms of cost and complexity. Consequently, there is a need for more efficient methods to accurately classify breast cancer subtypes using a reduced gene set.
Results
This study explores the potential of achieving precise breast cancer subtype categorization using a reduced gene set derived from the PAM50 gene signature. By employing a “Few-Shot Genes Selection” method, we randomly select smaller subsets from PAM50 and evaluate their performance using standard classification metrics and a linear model, specifically the Support Vector Machine (SVM) classifier. In addition, we aim to assess whether a more compact gene set can maintain performance while simplifying the classification process. Our findings demonstrate that certain reduced gene subsets can perform comparably to, or even better than, the full PAM50 gene signature.
Conclusions
The identified gene subsets, comprising 36 genes, have the potential to contribute to the development of more cost-effective and streamlined diagnostic tools in breast cancer research and clinical settings.
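The random-subset selection loop the abstract describes can be sketched in a few lines. The sketch below is illustrative only: the gene names, expression values, and the nearest-centroid classifier standing in for the paper's SVM are all assumptions, not the authors' data or method.

```python
import random

# Toy expression data: each sample is (expression vector, subtype label).
# Gene names and values are invented for illustration, not the PAM50 signature.
GENES = ["ESR1", "ERBB2", "MKI67", "FOXA1", "KRT5", "AURKA"]
TRAIN = [
    ([0.9, 0.1, 0.2, 0.8, 0.1, 0.2], "LumA"),
    ([0.8, 0.2, 0.7, 0.7, 0.1, 0.6], "LumB"),
    ([0.1, 0.9, 0.5, 0.2, 0.1, 0.5], "HER2"),
    ([0.1, 0.1, 0.8, 0.1, 0.9, 0.7], "Basal"),
]
TEST = [
    ([0.85, 0.15, 0.25, 0.75, 0.10, 0.25], "LumA"),
    ([0.15, 0.85, 0.45, 0.25, 0.15, 0.55], "HER2"),
]

def centroid_classify(train, test, idx):
    """Nearest-centroid classifier restricted to the gene indices in `idx`."""
    groups = {}
    for vec, label in train:
        groups.setdefault(label, []).append([vec[i] for i in idx])
    centroids = {lab: [sum(col) / len(col) for col in zip(*rows)]
                 for lab, rows in groups.items()}
    correct = 0
    for vec, label in test:
        sub = [vec[i] for i in idx]
        pred = min(centroids, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(sub, centroids[c])))
        correct += pred == label
    return correct / len(test)

def few_shot_selection(n_genes, n_trials, seed=0):
    """Randomly sample gene subsets and keep the best-scoring one."""
    rng = random.Random(seed)
    best = (-1.0, ())
    for _ in range(n_trials):
        idx = sorted(rng.sample(range(len(GENES)), n_genes))
        best = max(best, (centroid_classify(TRAIN, TEST, idx), tuple(idx)))
    return best

acc, subset = few_shot_selection(n_genes=3, n_trials=20)
print(acc, [GENES[i] for i in subset])
```

The same loop scales to the real setting by swapping in actual expression matrices and an SVM; the point of the sketch is only the structure: sample a subset, score it, keep the best.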
Journal Article
Spline-based feature curves from point-sampled geometry
by Ochotta, Tilo; Ha, Linh K.; Silva, Cláudio T.
in Algorithms, Artificial Intelligence, Computer Graphics
2008
Defining sharp features in a 3D model facilitates a better understanding of the surface and aids geometric processing and graphics applications, such as reconstruction, filtering, simplification, reverse engineering, visualization, and non-photorealism. We present a robust method that identifies sharp features in a point-based model by returning a set of smooth spline curves aligned along the edges. Our feature extraction leverages the concepts of robust moving least squares to locally project points to potential features. The algorithm processes these points to construct arc-length parameterized spline curves fit using an iterative refinement method, aligning smooth and continuous curves through the feature points. We demonstrate the benefits of our method with three applications: surface segmentation, surface meshing, and point-based compression.
Journal Article
Spatial distribution of graffiti types: a complex network approach
by Comin, Cesar H.; Silva, Claudio T.; da F. Costa, Luciano
in Accessibility, Complex Systems, Condensed Matter Physics
2021
Urban art constitutes an important issue in urbanism. Previous studies on the spatial distribution of graffiti rarely consider visual categories and how the city topology can impact graffiti production. In this work, after assigning graffiti occurrences to three categories, we analyzed their spatial distribution while searching for possible biases. Concepts from complex networks were adopted. First, communities defined by the connectivity profiles of the city network were obtained and the prevalence of each type of graffiti over these regions was analyzed. Next, to study the relationship between the density of graffiti occurrences and the visibility of specific city localities, a measurement (accessibility) based on the dynamics of the network was related to the distribution of the occurrences of graffiti of each type. Our case study considered the city of São Paulo, Brazil. The results showed good agreement of the detected communities with the main city areas and no biases between the relative density of each type of graffiti. Relatively high correlations were obtained between the density of graffiti and the accessibility when calculated inside each of the identified communities, suggesting that graffiti tends to appear more frequently in certain regions of the city with high accessibility.
Journal Article
Efficient Probabilistic and Geometric Anatomical Mapping Using Particle Mesh Approximation on GPUs
2011
Deformable image registration in the presence of considerable contrast differences and large size and shape changes presents significant research challenges. First, it requires a robust registration framework that does not depend on intensity measurements and can handle large nonlinear shape variations. Second, it involves the expensive computation of nonlinear deformations with high degrees of freedom. Often it takes a significant amount of computation time and thus becomes infeasible for practical purposes. In this paper, we present a solution based on two key ideas: a new registration method that generates a mapping between anatomies represented as a multicompartment model of class posterior images and geometries, and an implementation of the algorithm using particle mesh approximation on Graphical Processing Units (GPUs) to fulfill the computational requirements. We show results on the registrations of neonatal to 2-year-old infant MRIs. Quantitative validation demonstrates that our proposed method generates registrations that better maintain the consistency of anatomical structures over time and provides transformations that better preserve structures undergoing large deformations than transformations obtained by standard intensity-only registration. We also achieve a speedup of three orders of magnitude compared to a CPU reference implementation, making it possible to use the technique in time-critical applications.
Journal Article
Template-based quadrilateral mesh generation from imaging data
by Siqueira, Marcelo F.; Silva, Claudio T.; Nonato, L. Gustavo
in Artificial Intelligence, Computer Graphics, Computer Science
2011
This paper describes a novel template-based meshing approach for generating good quality quadrilateral meshes from 2D digital images. This approach builds upon an existing image-based mesh generation technique called Imesh, which enables us to create a segmented triangle mesh from an image without the need for an image segmentation step. Our approach generates a quadrilateral mesh using an indirect scheme, which converts the segmented triangle mesh created by the initial steps of the Imesh technique into a quadrilateral one. The triangle-to-quadrilateral conversion makes use of template meshes of triangles. To ensure good element quality, the conversion step is followed by a smoothing step, which is based on a new optimization-based procedure. We show several examples of meshes generated by our approach, and present a thorough experimental evaluation of the quality of the meshes given as examples.
Journal Article
Reimagining TaxiVis through an Immersive Space-Time Cube metaphor and reflecting on potential benefits of Immersive Analytics for urban data exploration
2024
Current visualization research has identified the potential of more immersive settings for data exploration, leveraging VR and AR technologies. To explore how a traditional visualization system could be adapted into an immersive framework, and how it could benefit from this, we decided to revisit a landmark paper presented ten years ago at IEEE VIS. TaxiVis, by Ferreira et al., enabled interactive spatio-temporal querying of a large dataset of taxi trips in New York City. Here, we reimagine how TaxiVis' functionalities could be implemented and extended in a 3D immersive environment. Among the unique features we identify as being enabled by the Immersive TaxiVis prototype are alternative uses of the additional visual dimension, a fully visual 3D spatio-temporal query framework, and the opportunity to explore the data at different scales and frames of reference. By revisiting the case studies from the original paper, we demonstrate workflows that can benefit from this immersive perspective. Through reporting on our experience, and on the vision and reasoning behind our design decisions, we hope to contribute to the debate on how conventional and immersive visualization paradigms can complement each other and on how the exploration of urban datasets can be facilitated in the coming years.
T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
by Ortigossa, Evandro S.; Nonato, Luis Gustavo; Dias, Fábio F.
in Artificial intelligence, Black boxes, Complexity
2025
The development of machine learning applications has increased significantly in recent years, motivated by the remarkable ability of learning-powered systems to discover and generalize intricate patterns hidden in massive datasets. Modern learning models, while powerful, often exhibit a complexity level that renders them opaque black boxes, lacking transparency and hindering our understanding of their decision-making processes. Opacity challenges the practical application of machine learning, especially in critical domains requiring informed decisions. Explainable Artificial Intelligence (XAI) addresses that challenge, unraveling the complexity of black boxes by providing explanations. Feature attribution/importance XAI stands out for its ability to delineate the significance of input features in predictions. However, most attribution methods have limitations such as instability, where divergent explanations arise from similar or even identical instances. This work introduces T-Explainer, a novel additive attribution explainer based on the Taylor expansion that offers desirable properties such as local accuracy and consistency. We demonstrate T-Explainer's effectiveness and stability over multiple runs in quantitative benchmark experiments against well-known attribution methods. Additionally, we provide several tools to evaluate and visualize explanations, turning T-Explainer into a comprehensive XAI framework.
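As a rough illustration of the idea behind Taylor-expansion-based additive attribution (a generic first-order sketch, not the paper's actual T-Explainer implementation), each feature can be assigned the product of its partial derivative and its deviation from a baseline. For a locally linear model the attributions then sum exactly to the prediction difference, which is the "local accuracy" property the abstract mentions. The model and inputs below are invented for illustration.

```python
def finite_diff_grad(f, x, eps=1e-5):
    """Central-difference gradient of f at x (treats f as a black box)."""
    grad = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        grad.append((f(hi) - f(lo)) / (2 * eps))
    return grad

def taylor_attributions(f, x, baseline):
    """First-order Taylor attribution: phi_i = (df/dx_i) * (x_i - b_i).
    For a locally linear f, sum(phi) equals f(x) - f(baseline)."""
    g = finite_diff_grad(f, x)
    return [gi * (xi - bi) for gi, xi, bi in zip(g, x, baseline)]

# A simple linear "black box", so the attributions are exact.
model = lambda v: 2.0 * v[0] - 3.0 * v[1] + 0.5 * v[2]
x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]
phi = taylor_attributions(model, x, baseline)
print(phi)  # close to [2.0, -6.0, 2.0]
print(sum(phi), model(x) - model(baseline))  # local accuracy check
```

For nonlinear models the first-order expansion is only a local approximation, which is where the stability and consistency concerns the abstract discusses come into play.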
Software Infrastructure for exploratory visualization and data analysis: past, present, and future
2008
Future advances in science depend on our ability to comprehend the vast amounts of data being produced and acquired, and scientific visualization is a key enabling technology in this endeavor. We posit that visualization should be better integrated with the data exploration process instead of being done after the fact - when all the science is done - simply to generate presentations of the findings. An important barrier to a wider adoption of visualization is complexity: the design of effective visualizations is a complex, multistage process that requires deep understanding of existing techniques, and how they relate to human cognition. We envision visualization software tools evolving into scientific discovery environments that support the creative tasks in the discovery pipeline, from data acquisition and simulation to hypothesis testing and evaluation, and that enable the publication of results that can be reproduced and verified.
Journal Article
FlowSense: A Natural Language Interface for Visual Data Exploration within a Dataflow System
2019
Dataflow visualization systems enable flexible visual data exploration by allowing the user to construct a dataflow diagram that composes query and visualization modules to specify system functionality. However, learning to use dataflow diagrams presents overhead that often discourages the user. In this work, we design FlowSense, a natural language interface for dataflow visualization systems that utilizes state-of-the-art natural language processing techniques to assist dataflow diagram construction. FlowSense employs a semantic parser with special utterance tagging and special utterance placeholders to generalize to different datasets and dataflow diagrams. It explicitly presents recognized dataset and diagram special utterances to the user for dataflow context awareness. With FlowSense, the user can expand and adjust dataflow diagrams more conveniently via plain English. We apply FlowSense to the VisFlow subset-flow visualization system to enhance its usability. We evaluate FlowSense through a case study with domain experts on a real-world data analysis problem and a formal user study.
Exploring the Relationship Between Feature Attribution Methods and Model Performance
by Silva, Priscylla; Silva, Claudio T.; Nonato, Luis Gustavo
in Correlation, Deep learning, Machine learning
2024
Machine learning and deep learning models are pivotal in educational contexts, particularly in predicting student success. Despite their widespread application, a significant gap persists in understanding the factors influencing these models' predictions, especially regarding explainability in education. This work addresses this gap by employing nine distinct explanation methods and conducting a comprehensive analysis to explore the correlation between the agreement among these methods in generating explanations and the predictive model's performance. Applying Spearman's correlation, our findings reveal a very strong correlation between the model's performance and the agreement level observed among the explanation methods.
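Spearman's correlation, the statistic used in the study above, is the Pearson correlation computed on rank vectors rather than raw values. The sketch below computes it from scratch; the per-model accuracy and attribution-agreement numbers are invented for illustration, not the paper's results.

```python
def ranks(values):
    """1-based ranks; tied values receive the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-model numbers: predictive accuracy vs. agreement
# among explanation methods. Both increase together, so rho is 1.0.
accuracy  = [0.62, 0.70, 0.75, 0.81, 0.90]
agreement = [0.30, 0.42, 0.55, 0.61, 0.83]
print(spearman(accuracy, agreement))
```

Because Spearman's rho operates on ranks, it captures any monotone relationship between agreement and performance, not just a linear one, which suits the "very strong correlation" claim the abstract makes.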