87 result(s) for "biologically-inspired artificial intelligence"
VTSNN: a virtual temporal spiking neural network
Spiking neural networks (SNNs) have recently demonstrated outstanding performance in a variety of high-level tasks, such as image classification. However, advances in low-level tasks, such as image reconstruction, remain rare. This may be due to the lack of promising image-encoding techniques and of corresponding neuromorphic devices designed specifically for SNN-based low-level vision problems. This paper begins by proposing a simple yet effective undistorted weighted encoding-decoding technique, which consists primarily of an Undistorted Weighted-Encoding (UWE) and an Undistorted Weighted-Decoding (UWD). The former converts a gray image into spike sequences for effective SNN learning, while the latter converts spike sequences back into images. We then design a new SNN training strategy, Independent-Temporal Backpropagation (ITBP), that avoids complex loss propagation across the spatial and temporal dimensions; experiments show that ITBP is superior to Spatio-Temporal Backpropagation (STBP). Finally, the Virtual Temporal SNN (VTSNN) is formed by incorporating these components into a U-Net architecture, fully utilizing its potent multiscale representation capability. Experimental results on several commonly used datasets, including MNIST, F-MNIST, and CIFAR10, demonstrate that the proposed method achieves highly competitive noise-removal performance, superior to existing work. Compared to an ANN with the same architecture, VTSNN can achieve comparable or better results while consuming roughly 1/274 of the energy. Moreover, with the given encoding-decoding strategy, a simple neuromorphic circuit could easily be constructed to exploit this low-energy, low-carbon design.
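The abstract does not spell out how UWE/UWD work. As a rough illustration only, a lossless ("undistorted") weighted spike encoding can be sketched as a bit-plane decomposition, where time step t carries the t-th bit of an 8-bit intensity; the weighting scheme, T = 8, and the function names are assumptions, not the authors' implementation:

```python
import numpy as np

T = 8  # number of virtual time steps (one per bit plane); an assumption


def weighted_encode(img):
    """Encode a gray image (uint8) into T binary spike planes.

    Each time step t carries the t-th bit of the intensity, so the
    round trip is lossless by construction (no distortion).
    """
    img = np.asarray(img, dtype=np.uint8)
    return np.stack([(img >> t) & 1 for t in range(T)], axis=0)


def weighted_decode(spikes):
    """Invert the encoding: weighted sum of spike planes, weight 2^t."""
    weights = (2 ** np.arange(T)).reshape(T, *([1] * (spikes.ndim - 1)))
    return np.sum(spikes * weights, axis=0).astype(np.uint8)
```

Because each spike plane is binary, a decoder of this kind reduces to fixed shift-and-add hardware, which is consistent with the abstract's remark that a simple neuromorphic circuit could realize the scheme.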
Spiking Autoencoders With Temporal Coding
Spiking neural networks with temporal coding schemes process information based on the relative timing of neuronal spikes. In supervised learning tasks, temporal coding allows learning through backpropagation with exact derivatives, and achieves accuracies on par with conventional artificial neural networks. Here we introduce spiking autoencoders with temporal coding and pulses, trained using backpropagation to store and reconstruct images with high fidelity from compact representations. We show that spiking autoencoders with a single layer are able to effectively represent and reconstruct images from the neuromorphically-encoded MNIST and FMNIST datasets. We explore the effect of different spike time target latencies, data noise levels and embedding sizes, as well as the classification performance from the embeddings. The spiking autoencoders achieve results similar to or better than conventional non-spiking autoencoders. We find that inhibition is essential in the functioning of the spiking autoencoders, particularly when the input needs to be memorised for a longer time before the expected output spike times. To reconstruct images with a high target latency, the network learns to accumulate negative evidence and to use the pulses as excitatory triggers for producing the output spikes at the required times. Our results highlight the potential of spiking autoencoders as building blocks for more complex biologically-inspired architectures. We also provide open-source code for the model.
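Temporal coding of the kind described above can be illustrated with a minimal time-to-first-spike scheme, where stronger inputs spike earlier; the linear mapping and the T_MAX constant below are illustrative assumptions, not the paper's model:

```python
import numpy as np

T_MAX = 10.0  # latest allowed spike time; an assumed constant


def encode_latency(x):
    """Time-to-first-spike coding: brighter inputs spike earlier.

    x is expected in [0, 1]; intensity 1.0 maps to spike time 0,
    intensity 0.0 maps to T_MAX.
    """
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return T_MAX * (1.0 - x)


def decode_latency(t):
    """Recover the intensity encoded by a spike time."""
    return 1.0 - np.asarray(t, dtype=float) / T_MAX
```

Since information lives in relative spike timing rather than rates, a single spike per neuron suffices, which is what makes backpropagation with exact derivatives of spike times tractable in such networks.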
Swarm intelligence and bio-inspired computation : theory and applications
Swarm intelligence and bio-inspired computation have become increasingly popular in the last two decades. Bio-inspired algorithms such as ant colony algorithms, bat algorithms, bee algorithms, firefly algorithms, cuckoo search, and particle swarm optimization have been applied in almost every area of science and engineering, with their use increasing dramatically.
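Of the algorithms listed, particle swarm optimization is perhaps the simplest to sketch. The following minimal implementation minimizes a scalar objective; the inertia and acceleration parameters are conventional textbook defaults, not values taken from the book:

```python
import random


def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization for a scalar objective f."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity: inertia + pull toward personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda p: sum(xi * xi for xi in p), dim=2)` drives the swarm toward the origin, the minimum of the sphere function.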
Feedback and Surround Modulated Boundary Detection
Edges are key components of any visual scene, to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The “classical approach” assumes that these cells respond only to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections significantly influence their responses. In this work we propose a biologically-inspired edge detection model in which orientation-selective neurons are represented through the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). Our model accounts for four kinds of receptive-field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to lower ones. The results of our model on three benchmark datasets show a marked improvement over current non-learning and biologically-inspired state-of-the-art algorithms while remaining competitive with learning-based methods.
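The orientation-selective receptive field described above, modelled as the first derivative of a Gaussian, can be sketched in one dimension; the kernel width and normalization below are illustrative choices, not the paper's parameterization:

```python
import numpy as np


def gaussian_derivative_kernel(sigma, size=None):
    """1-D first derivative of a Gaussian: a simple model of an
    orientation-selective (edge-sensitive) receptive field."""
    if size is None:
        size = int(6 * sigma) | 1  # odd width covering roughly +/- 3 sigma
    x = np.arange(size) - size // 2
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    dg = -x / sigma ** 2 * g
    return dg / np.abs(dg).sum()


def edge_response(signal, sigma=1.0):
    """Convolve a 1-D signal with the derivative kernel; a large
    response magnitude marks an intensity discontinuity."""
    return np.convolve(signal, gaussian_derivative_kernel(sigma), mode="same")
```

Applied to a step signal, the response peaks exactly at the discontinuity, which is the basic behaviour the V1 stage of the model builds on before surround modulation and feedback are added.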
Attention Based Detection and Recognition of Hand Postures Against Complex Backgrounds
A system for the detection, segmentation and recognition of multi-class hand postures against complex natural backgrounds is presented. Visual attention, the cognitive process of selectively concentrating on a region of interest in the visual field, helps humans recognize objects in cluttered natural scenes. The proposed system utilizes a Bayesian model of visual attention to generate a saliency map, and to detect and identify the hand region. Feature-based visual attention is implemented using a combination of high-level (shape, texture) and low-level (color) image features. The shape and texture features are extracted from a skin similarity map, using a computational model of the ventral stream of the visual cortex. The skin similarity map, which represents the similarity of each pixel to human skin color in the HSI color space, enhances the edges and shapes within skin-colored regions. The color features used are the discretized chrominance components in the HSI and YCbCr color spaces, and the similarity-to-skin map. The hand postures are classified using the shape and texture features with a support vector machine classifier. A new 10-class complex-background hand posture dataset, the NUS hand posture dataset-II, was developed for testing the proposed algorithm (40 subjects, different ethnicities, various hand sizes, 2750 hand postures and 2000 background images). The algorithm is tested for hand detection and hand posture recognition using 10-fold cross-validation. The experimental results show that the algorithm performs in a person-independent manner and is reliable against variations in hand size and complex backgrounds. The algorithm provided a recognition rate of 94.36%. A comparison of the proposed algorithm with other existing methods demonstrates its superior performance.
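A skin similarity map of the kind described can be sketched as a Gaussian of the chrominance distance to a reference skin colour. For simplicity this sketch works in (Cb, Cr) rather than HSI, and the reference values and spread are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Reference skin chrominance (Cb, Cr) and spread: illustrative values only.
SKIN_CBCR = np.array([110.0, 150.0])
SIGMA = 15.0


def skin_similarity(cbcr_image):
    """Per-pixel similarity to skin colour: a Gaussian of the distance
    between the pixel's (Cb, Cr) and a reference skin chrominance.

    cbcr_image: array of shape (H, W, 2).
    Returns values in (0, 1]; 1 means identical to the reference.
    """
    d2 = np.sum((np.asarray(cbcr_image, float) - SKIN_CBCR) ** 2, axis=-1)
    return np.exp(-d2 / (2 * SIGMA ** 2))
```

A map like this gives high values over hands and faces and near-zero values elsewhere, so edges computed on it are concentrated inside skin-coloured regions, which is the enhancement effect the abstract describes.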
Herding stochastic autonomous agents via local control rules and online target selection strategies
We propose a simple yet effective set of local control rules to make a small group of “herder agents” collect and contain, in a desired region of the plane, a large ensemble of non-cooperative, non-flocking stochastic “target agents”. We investigate the robustness of the proposed strategies to variations in the number of target agents and in the strength of the repulsive force they feel when in proximity to the herders. The effectiveness of the proposed approach is confirmed both in simulations in ROS and in experiments on real robots.
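The herding scheme can be illustrated with a minimal version of its two ingredients: an online target-selection rule (chase the target farthest from the goal region) and a local placement rule (stand on the far side of that target, so its repulsion from the herder pushes it goalward). The specific rules and the offset gain below are assumptions for illustration, not the paper's controller:

```python
import math


def select_target(targets, goal):
    """Online target selection: chase the target farthest from the goal."""
    return max(targets, key=lambda t: math.dist(t, goal))


def herder_waypoint(target, goal, offset=0.5):
    """Local control rule: place the herder just beyond the chosen target,
    on the side opposite the goal, so repulsion drives the target goalward."""
    d = math.dist(target, goal) or 1.0  # avoid division by zero at the goal
    return tuple(t + offset * (t - g) / d for t, g in zip(target, goal))
```

Iterating these two rules, reselecting the worst target after each containment, is the basic collect-and-contain loop that the robustness experiments in the paper stress-test under stochastic target motion.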
Comparing the performance of Hebbian against backpropagation learning using convolutional neural networks
In this paper, we investigate Hebbian learning strategies applied to Convolutional Neural Network (CNN) training. We consider two unsupervised learning approaches: Hebbian Winner-Takes-All (HWTA) and Hebbian Principal Component Analysis (HPCA). The Hebbian learning rules are used to train the layers of a CNN in order to extract features that are then used for classification, without requiring backpropagation (backprop). Experimental comparisons are made with state-of-the-art unsupervised (but backprop-based) Variational Auto-Encoder (VAE) training. For completeness, we consider two supervised Hebbian learning variants (Supervised Hebbian Classifiers, SHC, and Contrastive Hebbian Learning, CHL) for training the final classification layer, which are compared to Stochastic Gradient Descent training. We also investigate hybrid learning methodologies, where some network layers are trained following the Hebbian approach and others by backprop. We tested our approaches on the MNIST, CIFAR10, and CIFAR100 datasets. Our results suggest that Hebbian learning is generally suitable for training early feature-extraction layers, or for retraining higher network layers in fewer training epochs than backprop. Moreover, our experiments show that Hebbian learning outperforms VAE training, with HPCA generally performing better than HWTA.
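A single HWTA step can be sketched as follows: only the most responsive unit updates, moving its weight vector toward the input. The instar-style rule and the learning rate are illustrative and not necessarily the paper's exact formulation:

```python
import numpy as np


def hwta_update(W, x, lr=0.01):
    """One Hebbian Winner-Takes-All step.

    W: (n_units, n_inputs) weight matrix; x: input vector.
    The unit with the strongest response wins and moves its weight
    vector toward the input; all other units are left unchanged.
    """
    y = W @ x                       # unit responses
    winner = int(np.argmax(y))      # winner-takes-all selection
    W = W.copy()
    W[winner] += lr * (x - W[winner])  # instar-style Hebbian update
    return W, winner
```

Because the update needs only the layer's own input and output, it is local and label-free, which is why such rules can train early feature-extraction layers without backpropagating a global loss.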
An overview of space-variant and active vision mechanisms for resource-constrained human inspired robotic vision
In order to explore and understand the surrounding environment efficiently, humans have developed a set of space-variant vision mechanisms that allow them to actively attend to different locations and to compensate for limitations in memory, neuronal transmission bandwidth and computation in the brain. Similarly, humanoid robots deployed in everyday environments have limited on-board resources and face increasingly complex tasks that require interaction with objects arranged in many possible spatial configurations. The main goal of this work is to describe and review the benefits of biologically inspired, space-variant human visual mechanisms when combined with state-of-the-art algorithms for different visual tasks (e.g. object detection), ranging from low-level hardwired attention (i.e. foveal vision) to high-level visual attention mechanisms. We review the state of the art in biologically plausible, space-variant, resource-constrained vision architectures, namely for active recognition and localization tasks.
DentoMorph-LDMs: diffusion models based on novel adaptive 8-connected gum tissue and deciduous teeth loss for dental image augmentation
Pediatric dental image analysis faces critical challenges in disease detection due to missing or corrupted pixel regions and the unique developmental characteristics of deciduous teeth, with current Latent Diffusion Models (LDMs) failing to preserve anatomical integrity during reconstruction of pediatric oral structures. We developed two novel biologically-inspired loss functions integrated within LDMs specifically designed for pediatric dental imaging: Gum-Adaptive Pixel Imputation (GAPI), utilizing adaptive 8-connected pixel neighborhoods that mimic the adaptive behavior of pediatric gum tissue, and Deciduous Transition-Based Reconstruction (DTBR), incorporating developmental stage awareness based on primary teeth transition patterns observed in children aged 2–12 years. These algorithms guide the diffusion process toward developmentally appropriate reconstructions through specialized loss functions that preserve structural continuity of deciduous dentition and age-specific anatomical features crucial for accurate pediatric diagnosis. Experimental validation on 2,255 pediatric dental images across six conditions (caries, calculus, gingivitis, tooth discoloration, ulcers, and hypodontia) demonstrated superior image generation performance with an Inception Score of 9.87, Fréchet Inception Distance of 4.21, Structural Similarity Index of 0.952, and Peak Signal-to-Noise Ratio of 34.76, significantly outperforming eleven competing diffusion models. Pediatric disease detection using enhanced datasets achieved statistically significant improvements across five detection models: +0.0694 in mean Average Precision [95% CI: 0.0608–0.0780], +0.0606 in Precision [0.0523–0.0689], +0.0736 in Recall [0.0651–0.0821], and +0.0678 in F1-Score [0.0597–0.0759] (all p < 0.0001), enabling pediatric dentists to detect early-stage caries, developmental anomalies, and eruption disorders with unprecedented accuracy.
This framework revolutionizes pediatric dental diagnosis by providing pediatric dentists with AI-enhanced imaging tools that account for the unique biological characteristics of developing dentition, significantly improving early detection of oral diseases in children and establishing a foundation for age-specific dental AI applications that enhance clinical decision-making in pediatric dental practice.
Training Neural Networks with Krill Herd Algorithm
In recent times, several new metaheuristic algorithms based on natural phenomena have become available to researchers. One of these is the Krill Herd Algorithm (KHA), which contains many interesting mechanisms. The purpose of this article is to compare the KHA optimization algorithm, used here for training an artificial neural network (ANN), with other heuristic methods and with more conventional procedures. The proposed ANN training method has been verified on classification tasks. For that purpose, benchmark examples drawn from the UCI Machine Learning Repository were employed, with Classification Error and Sum of Square Errors used as evaluation criteria. We conclude that the application of KHA offers promising performance, both in terms of the aforementioned metrics and of the time needed for ANN training.
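KHA's krill-motion equations are too involved to reproduce faithfully here. The following sketch shows only the general pattern of metaheuristic ANN training that the article describes, with a simple (1+1)-style random search standing in for KHA and a one-neuron "network"; all names and parameters are illustrative assumptions:

```python
import random


def net_predict(weights, x):
    """Tiny one-neuron 'network': thresholded w . x + b."""
    *w, b = weights
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0


def error(weights, data):
    """Classification error: the fitness the metaheuristic minimizes."""
    return sum(net_predict(weights, x) != y for x, y in data) / len(data)


def train_metaheuristic(data, dim, iters=300, step=0.5, seed=1):
    """(1+1)-style stochastic search as a stand-in for KHA: perturb the
    weight vector and keep the mutant if fitness does not get worse."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(dim + 1)]
    best = error(w, data)
    for _ in range(iters):
        cand = [wi + rng.gauss(0, step) for wi in w]
        e = error(cand, data)
        if e <= best:
            w, best = cand, e
    return w, best
```

The key design point, shared by KHA and simpler searches alike, is that the training loop treats the network purely as a black-box fitness function, so no gradients are needed.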