Catalogue Search | MBRL
947 result(s) for "network simulation tools"
Internet of Things: A Comprehensive Overview on Protocols, Architectures, Technologies, Simulation Tools, and Future Directions
by Herencsar, Norbert; Elbaz, Abdelmoniem; Soltan, Ahmed
in application layer protocols; Central processing units; Communication
2023
The Internet of Things (IoT) is a global network of interconnected computing, sensing, and networking devices that can exchange data and information via various network protocols. It can connect numerous smart devices thanks to recent advances in wired, wireless, and hybrid technologies. Lightweight IoT protocols can compensate for the restricted hardware of IoT devices in terms of storage, Central Processing Unit (CPU) power, energy, etc. Hence, it is critical for system architects to identify the optimal communication protocol, which necessitates an evaluation of next-generation networks with improved connectivity characteristics. This paper highlights significant wireless and wired IoT technologies and their applications, offering a new categorization for conventional IoT network protocols. It provides an in-depth analysis of IoT communication protocols with detailed technical information about their stacks, limitations, and applications. The study further compares industrial IoT-compliant devices and software simulation tools. Finally, it summarizes the current challenges in the next IoT generation, along with a broad overview of future directions to tackle them. This study aims to provide a comprehensive primer on IoT concepts, protocols, and future insights that academics and professionals can use in various contexts.
Journal Article
Modelling and simulating worm propagation in static and dynamic traffic
2014
Vehicular ad hoc networks (VANETs) have no fixed infrastructure and instead rely on the vehicles themselves to provide network functionality. An attack scenario with potentially catastrophic consequences is the outbreak of a mobile worm epidemic in these networks. This paper analyses snapshot spreading results in an urban scenario with equilibrium traffic by modelling the mobility pattern, the communication channel, the medium access control (MAC) mechanism, and the worm propagation process. Extensive Monte Carlo simulations uncover the effects on epidemic spreading of the transmission range (from a typical minimum to a maximum), the minimum and maximum velocity (from free flow to congested traffic), the vehicle density (from a sparse topology to a dense one), and the MAC mechanism (from presence to absence). Furthermore, the authors simulate wireless worm propagation in dynamic traffic, using a network simulation tool with the same scenario as the static traffic. They discuss the correlation between snapshot results and the evolutive outcome, analyse the reasons for the local differences, and finally uncover the interrelations between the affected rate and the network parameters. The results are expected to help engineers design intelligent and automatic detection and prevention strategies for VANETs.
Journal Article
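The Monte Carlo approach described in the abstract above can be illustrated with a deliberately simplified sketch: vehicles placed uniformly on a road segment, an SI-style worm that infects every susceptible vehicle within transmission range, and a snapshot of the final infected fraction. All parameter names and values here are illustrative assumptions, not taken from the paper, and the model omits the channel, MAC, and mobility details the authors actually simulate.

```python
import random

def simulate_worm_snapshot(n_vehicles=200, road_length=2000.0,
                           tx_range=100.0, steps=50, seed=1):
    """Toy SI worm spread among vehicles placed uniformly on a road.

    Each step, every infected vehicle infects all susceptible vehicles
    within its transmission range. Returns the final infected fraction.
    Illustrative only: no channel model, MAC contention, or mobility.
    """
    rng = random.Random(seed)
    pos = [rng.uniform(0, road_length) for _ in range(n_vehicles)]
    infected = [False] * n_vehicles
    infected[0] = True  # patient zero
    for _ in range(steps):
        newly = []
        for i in range(n_vehicles):
            if not infected[i]:
                continue
            for j in range(n_vehicles):
                if not infected[j] and abs(pos[i] - pos[j]) <= tx_range:
                    newly.append(j)
        for j in newly:
            infected[j] = True
    return sum(infected) / n_vehicles

rate = simulate_worm_snapshot()
```

Sweeping `tx_range` and `n_vehicles` over a grid of seeds would reproduce the qualitative effect the paper studies: spreading saturates in dense, long-range topologies and stalls in sparse ones.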
Internet of Underwater Things: A Survey on Simulation Tools and 5G-Based Underwater Networks
by Nkenyereye, Lewis; Nkenyereye, Lionel; Ndibanje, Bruce
in Acoustics; Animal health; Aquatic animals
2024
The term “Internet of Underwater Things (IoUT)” refers to a network of intelligent interconnected underwater devices designed to monitor various underwater activities. The IoUT allows a network of autonomous underwater vehicles (AUVs) to communicate with each other, sense their surroundings, collect data, and transmit them to control centers on the surface at typical Internet speeds. These data serve as a valuable resource for various tasks, including conducting crash surveys, discovering shipwrecks, detecting early signs of tsunamis, monitoring animal health, obtaining real-time aquatic information, and conducting archaeological expeditions. This paper introduces an additional set of alternative simulation tools for underwater networks. We categorize these tools into open-source and licensed options and recommend that students consider open-source simulators for monitoring underwater networks. There has not been widespread deployment of, or extensive research on, underwater 5G-based networks. However, simulation tools provide some general insights into the challenges and potential issues in evaluating such networks, based on the characteristics of underwater communication and 5G; we therefore survey 5G-based underwater networks and the key 5G aspects addressed by the research community in underwater network systems. Through an extensive review of the literature, we discuss the architecture of both Internet of Underwater application-assisted AUVs and Internet of Underwater Things communications in the 5G-based system.
Journal Article
Performance Analysis of IEEE 802.15.4 Bootstrap Process
by Alberto Gallegos Ramonet; Taku Noguchi
in Access control; association; scanning; beacon-enabled mode; bootstrap; IEEE 802.15.4; IoT; LR-WPAN; ns-3; simulations; Zigbee; WSN; network tools
2022
IEEE 802.15.4 is a popular standard used in wireless sensor networks (WSNs) and Internet of Things (IoT) applications. In these networks, devices are organized into groups formally known as personal area networks (PANs), which require a bootstrap procedure to become operational. Bootstrap plays a key role in the initialization and maintenance of these networks. For this reason, this work presents our implementation of the bootstrap process and its performance analysis for the ns-3 network simulator. Specifically, this bootstrap implementation includes support for the three types of scanning mechanisms (energy scan, passive scan, and active scan) and the complete classic association mechanism described by the standard. Both of these mechanisms can be used independently by higher-layer protocols to support network initialization, network joining, and maintenance tasks. Performance is evaluated in terms of total network association time and packet overhead. Our source code is documented and publicly available in the latest ns-3 official release.
Journal Article
Comparative analysis of network-on-chip simulation tools
2018
Network-on-chip (NoC) is a reliable and scalable communication paradigm deemed an alternative to classic bus systems in modern system-on-chip designs. Consequently, one can observe extensive multidimensional research related to the design and implementation of NoC-based systems. A basic requirement for most of these activities is the availability of NoC simulators that enable the study and comparison of different technologies. This study analyses different NoC simulators and highlights their contributions to NoC research. Various NoC tools such as NoCTweak, Noxim, Nirgam, Nostrum, BookSim, WormSim, NOCMAP and ORION are evaluated, and their strengths and weaknesses are highlighted. The comparative analysis includes methods for estimating latency, throughput and energy consumption. Further, an exemplary real-world application, a video object plane decoder, is mapped on a 2D mesh NoC using different mapping algorithms under the NOCMAP and NoCTweak simulators for comparative analysis of the simulators and their embedded mapping algorithms.
Journal Article
Brian 2, an intuitive and efficient neural simulator
by Stimberg, Marcel; Goodman, Dan FM; Brette, Romain
in Artificial neural networks; Brain; Cognitive science
2019
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write code with simple and concise high-level descriptions, and Brian transforms them into efficient low-level code that can run interleaved with their code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input. Simulating the brain starts with understanding the activity of a single neuron. From there, it quickly gets very complicated. To reconstruct the brain with computers, neuroscientists have to first understand how one brain cell communicates with another using electrical and chemical signals, and then describe these events using code. At this point, neuroscientists can begin to build digital copies of complex neural networks to learn more about how those networks interpret and process information. To do this, computational neuroscientists have developed simulators that take models for how the brain works to simulate neural networks. These simulators need to be able to express many different models, simulate these models accurately, and be relatively easy to use. 
Unfortunately, simulators that can express a wide range of models tend to require technical expertise from users or to perform poorly, while those capable of simulating models efficiently can only do so for a limited set of models. One approach to increasing the range of models simulators can express is to use so-called ‘model description languages’. These languages describe each element within a model and the relationships between them, but only from a limited set of possibilities, which does not include the environment. This is a problem when attempting to simulate the brain, because a brain is precisely supposed to interact with the outside world. Stimberg et al. set out to develop a simulator that allows neuroscientists to express a variety of neural models in a simple way, while preserving high performance, without using model description languages. Instead of describing each element within a specific model, the simulator generates code derived from the equations provided in the model. This code is then inserted into the computational experiments. This means that the simulator generates code specific to each model, allowing it to perform well across a range of models. The result, Brian 2, is a neural simulator designed to overcome the rigidity of other simulators while maintaining performance. Stimberg et al. illustrate the performance of Brian 2 with a series of computational experiments, showing how Brian 2 can test unconventional models, and demonstrating how users can extend the code to use Brian 2 beyond its built-in capabilities.
Journal Article
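The runtime code-generation idea described in the abstract above can be sketched in a few lines: take a user-written derivative expression as a string, compile it once, and wrap it in an Euler update function. This is a minimal illustration of the concept only; Brian 2's actual pipeline parses full equation systems with units and emits optimized low-level code, not a Python `eval`.

```python
def make_stepper(deriv_expr, dt=0.001):
    """Compile a user-written derivative expression, e.g. '(I - v) / tau',
    into a forward-Euler update function: v -> v + dt * dv/dt.
    A toy sketch of runtime code generation, not Brian 2's API.
    """
    code = compile(deriv_expr, "<model>", "eval")

    def step(v, **params):
        dv = eval(code, {}, dict(params, v=v))  # evaluate dv/dt at (v, params)
        return v + dt * dv

    return step

# Leaky integration dv/dt = (I - v) / tau; v relaxes toward I
step = make_stepper("(I - v) / tau", dt=0.1)
v = 0.0
for _ in range(100):
    v = step(v, I=1.0, tau=1.0)
```

The key point the abstract makes is that the model is specified as a high-level expression, yet the simulation loop runs compiled code; here `compile` stands in for the real generation of efficient low-level code.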
A Survey on LoRaWAN Technology: Recent Trends, Opportunities, Simulation Tools and Future Directions
by Almuhaya, Mukarram A. M.; Jabbar, Waheb A.; Sulaiman, Noorazliza
in Access control; Bandwidths; Communication
2022
Low-power wide-area network (LPWAN) technologies play a pivotal role in IoT applications owing to their capability to meet key IoT requirements (e.g., long range, low cost, small data volumes, massive device numbers, and low energy consumption). Among the available LPWAN technologies, long-range wide-area network (LoRaWAN) technology has attracted much interest from both industry and academia due to its autonomous networking architecture and open standard specification. This paper presents a comparative review of five selected driving LPWAN technologies: NB-IoT, SigFox, Telensa, Ingenu (RPMA), and LoRa/LoRaWAN. The comparison shows that LoRa/LoRaWAN and SigFox surpass the other technologies in terms of device lifetime, network capacity, adaptive data rate, and cost. In contrast, NB-IoT technology excels in latency and quality of service. Furthermore, we present a technical overview of LoRa/LoRaWAN technology, considering its main features, opportunities, and open issues. We also compare the most important recently developed simulation tools for investigating and analyzing LoRa/LoRaWAN network performance, and introduce a comparative evaluation of LoRa simulators to highlight their features. Furthermore, we classify recent efforts to improve LoRa/LoRaWAN performance in terms of energy consumption, pure data extraction rate, network scalability, network coverage, quality of service, and security. Finally, although we focus mainly on LoRa/LoRaWAN issues and solutions, we provide guidance and directions for future research on LPWAN technologies.
Journal Article
Catalyst: Fast and flexible modeling of reaction networks
by Ma, Yingbo; Rackauckas, Chris; Gowda, Shashi
in Algorithms; Artificial intelligence; BASIC BIOLOGICAL SCIENCES
2023
We introduce Catalyst.jl, a flexible and feature-filled Julia library for modeling and high-performance simulation of chemical reaction networks (CRNs). Catalyst supports simulating stochastic chemical kinetics (jump process), chemical Langevin equation (stochastic differential equation), and reaction rate equation (ordinary differential equation) representations for CRNs. Through comprehensive benchmarks, we demonstrate that Catalyst simulation runtimes are often one to two orders of magnitude faster than other popular tools. More broadly, Catalyst acts as both a domain-specific language and an intermediate representation for symbolically encoding CRN models as Julia-native objects. This enables a pipeline of symbolically specifying, analyzing, and modifying CRNs; converting Catalyst models to symbolic representations of concrete mathematical models; and generating compiled code for numerical solvers. Leveraging ModelingToolkit.jl and Symbolics.jl, Catalyst models can be analyzed, simplified, and compiled into optimized representations for use in numerical solvers. Finally, we demonstrate Catalyst’s broad extensibility and composability by highlighting how it can compose with a variety of Julia libraries, and how existing open-source biological modeling projects have extended its intermediate representation.
Journal Article
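The "jump process" (stochastic chemical kinetics) representation mentioned in the abstract above is classically simulated with Gillespie's stochastic simulation algorithm. The following is a standalone Python sketch of that algorithm for the single reaction A -> B; it is not Catalyst code (Catalyst is a Julia library), and the rate constant and horizon are illustrative.

```python
import random

def gillespie_decay(n_a=100, k=1.0, t_max=10.0, seed=0):
    """Gillespie SSA for the single reaction A --k--> B.

    Repeatedly: compute the total propensity, draw an exponential
    waiting time to the next reaction event, and fire it. Returns
    the number of A molecules remaining at t_max.
    """
    rng = random.Random(seed)
    t, a = 0.0, n_a
    while a > 0:
        rate = k * a                    # propensity of A -> B
        t += rng.expovariate(rate)      # exponential time to next firing
        if t > t_max:
            break
        a -= 1                          # fire the reaction: A consumed
    return a

remaining = gillespie_decay()
```

For networks with several reactions the same loop draws which reaction fires in proportion to its propensity; Catalyst's contribution, per the abstract, is generating such solvers (and SDE/ODE variants) automatically from a symbolic reaction-network description.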
Control chart pattern recognition using the convolutional neural network
by Liu, Zhihao; Wang, Min; Wang, Hui
in Advanced manufacturing technologies; Artificial neural networks; Automatic control
2020
Unnatural control chart patterns (CCPs) usually correspond to specific factors in a manufacturing process, so control charts have become an important means of statistical process control. Therefore, accurate and automatic control chart pattern recognition (CCPR) is of great significance for manufacturing enterprises. To improve CCPR accuracy, experts have designed various complex features, which undoubtedly increases the workload and difficulty of quality control. To solve these problems, a CCPR method based on a one-dimensional convolutional neural network (1D-CNN) is proposed. The proposed method does not require complex features to be extracted manually; instead, it uses a 1D-CNN to learn the optimal feature set from the raw CCP data and completes the CCPR. The dataset for training and validation, containing six typical CCPs, is generated by Monte Carlo simulation. Then, the influence of the network structural parameters and activation functions on recognition performance is analyzed and discussed, and some suggestions for parameter selection are given. Finally, the performance of the proposed method is compared with that of the traditional multi-layer perceptron method on the same dataset. The comparison shows that the proposed 1D-CNN method has obvious advantages in CCPR tasks. Compared with the related literature, the features extracted by the 1D-CNN are of higher quality. Furthermore, the 1D-CNN trained on the simulated dataset still performs well in recognizing a real dataset from the production environment.
Journal Article
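The Monte Carlo dataset-generation step mentioned in the abstract above, where labeled CCP time series are synthesized as Gaussian noise plus a pattern component, can be sketched as follows. The pattern set and magnitudes here are illustrative assumptions (the paper uses six typical CCPs with its own parameterization).

```python
import random

def make_ccp(pattern, n=60, seed=0):
    """Monte Carlo generator for simple control chart patterns:
    Gaussian in-control noise plus an optional pattern component.
    Pattern names and magnitudes are illustrative, not the paper's.
    """
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if pattern == "normal":
        return noise                                   # in-control process
    if pattern == "up_trend":
        return [x + 0.1 * i for i, x in enumerate(noise)]   # gradual drift
    if pattern == "up_shift":
        return [x + (2.0 if i >= n // 2 else 0.0)           # sudden mean shift
                for i, x in enumerate(noise)]
    raise ValueError(f"unknown pattern: {pattern}")

series = make_ccp("up_trend")
```

Generating many such series per pattern class, with randomized magnitudes, yields the kind of labeled training set a 1D-CNN can learn features from directly, which is the point the abstract makes about avoiding hand-designed features.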
Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience
by Frank, Michael J; Chen, Tony; Govindarajan, Lakshmi N
in Algorithms; Analysis; approximate bayesian computation
2021
In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training. Cognitive neuroscience studies the links between the physical brain and cognition. Computational models that attempt to describe the relationships between the brain and specific behaviours quantitatively are becoming increasingly popular in this field. This approach may help determine the causes of certain behaviours and make predictions about which behaviours will be triggered by specific changes in the brain. Many of the computational models used in cognitive neuroscience are built from experimental data. A good model can predict the results of new experiments under specific conditions with few parameters. Candidate models are often called ‘generative’: models that can simulate data. However, cognitive neuroscience studies typically require going the other way around: they need to infer models and their parameters from experimental data.
Ideally, it should also be possible to properly assess the remaining uncertainty over the parameters after having access to the experimental data. To facilitate this, the Bayesian approach to statistical analysis has become popular in cognitive neuroscience. Common software tools for Bayesian estimation require a ‘likelihood function’, which measures how well a model fits experimental data for given values of the unknown parameters. A major obstacle is that for all but the most common models, obtaining precise likelihood functions is computationally costly. In practice, this requirement limits researchers to evaluating and comparing a small subset of neurocognitive models for which a likelihood function is known. As a result, it is convenience, rather than theoretical interest, that guides this process. In order to provide one solution for this problem, Fengler et al. developed a method that allows users to perform Bayesian inference on a larger number of models without high simulation costs. This method uses likelihood approximation networks (LANs), a computational tool that can estimate likelihood functions for a broad class of models of decision making, allowing researchers to estimate parameters and to measure how well models fit the data. Additionally, Fengler et al. provide both the code needed to build networks using their approach and a tutorial for users. The new method, along with the user-friendly tool, may help to discover more realistic brain dynamics underlying cognition and behaviour as well as alterations in brain function.
Journal Article
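The core idea the abstract above builds on, estimating a likelihood from model simulations when no closed form exists, can be sketched with a histogram-style estimator: simulate the model many times at a given parameter and count how often the output lands near the observed data. The toy generative model below is a stand-in assumption (not one of the paper's cognitive models), and LANs replace this expensive per-parameter simulation step with a trained network that amortizes the cost.

```python
import math
import random

def simulate(theta, rng):
    """Toy generative model treated as a black box with no known
    likelihood: observation = theta + standard Gaussian noise."""
    return theta + rng.gauss(0.0, 1.0)

def approx_log_likelihood(theta, x_obs, n_sim=20000, bin_width=0.2, seed=0):
    """Simulation-based likelihood estimate: the fraction of simulated
    outputs falling in a small bin around x_obs, converted to a density.
    A sketch of the idea LANs amortize, not the paper's method itself.
    """
    rng = random.Random(seed)
    lo, hi = x_obs - bin_width / 2, x_obs + bin_width / 2
    hits = sum(lo <= simulate(theta, rng) <= hi for _ in range(n_sim))
    density = max(hits, 1) / (n_sim * bin_width)  # floor avoids log(0)
    return math.log(density)

# The estimate should be highest near the parameter that generated the data:
ll_true = approx_log_likelihood(theta=1.0, x_obs=1.0)
ll_far = approx_log_likelihood(theta=4.0, x_obs=1.0)
```

The drawback this sketch makes visible is the cost: every candidate `theta` in a posterior sampler requires thousands of fresh simulations, which is exactly the repeated expense a likelihood approximation network pays once during training and then amortizes.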