8,189 results for "software modeling and simulation"
Simulating the Software Development Lifecycle: The Waterfall Model
This study employs a simulation-based approach, adapting the waterfall model, to provide estimates for software project and individual phase completion times. Additionally, it pinpoints potential efficiency issues stemming from suboptimal resource levels. We implement our software development lifecycle simulation using SimPy, a Python discrete-event simulation framework. Our model is executed in the context of a software house on 100 projects of varying sizes, examining two scenarios. The first provides insight based on an initial set of resources and reveals the presence of resource bottlenecks, particularly a shortage of programmers for the implementation phase. The second uses a level of resources, identified using a stepwise algorithm, that would achieve zero wait time. The findings illustrate the advantage of simulations as a safe and effective way to experiment and plan for software development projects. Such simulations allow those managing software development projects to make accurate, evidence-based projections of phase and project completion times, and to explore their interplay with resource levels.
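The paper's calibrated model is not reproduced in the abstract, but the general approach is easy to picture. The following is a minimal SimPy sketch of that style of simulation: projects flow through waterfall phases and contend for a shared staff pool. The phase names follow the waterfall model; the staffing levels and mean durations are invented for illustration and are not the study's values.

    import random
    import simpy

    # Illustrative staffing levels and mean phase durations (days); these
    # numbers are assumptions, not the study's calibrated values.
    STAFF = {"analysis": 2, "design": 2, "implementation": 3, "testing": 2}
    MEAN_DAYS = {"analysis": 10, "design": 15, "implementation": 40, "testing": 20}

    def project(env, name, pools, log):
        start = env.now
        for phase in ["analysis", "design", "implementation", "testing"]:
            with pools[phase].request() as req:
                yield req                           # queue for free staff
                yield env.timeout(random.expovariate(1.0 / MEAN_DAYS[phase]))
        log.append((name, env.now - start))

    env = simpy.Environment()
    pools = {p: simpy.Resource(env, capacity=c) for p, c in STAFF.items()}
    log = []
    for i in range(100):                            # 100 projects, as in the study
        env.process(project(env, f"project-{i}", pools, log))
    env.run()
    print(f"mean completion time: {sum(t for _, t in log) / len(log):.1f} days")

Raising the implementation pool's capacity and re-running is exactly the kind of safe what-if experiment the authors advocate: a bottleneck shows up as queueing delay in front of the understaffed phase.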
Exploratory modeling of software performance and reliability over message-oriented middleware
Performance is an important quality attribute of a software system. Software performance engineering comprises the analysis, design, construction, measurement, and validation activities that address performance requirements throughout the software development process. In software systems that use message-based communication, performance depends to a large extent on the message-oriented middleware (MOM). Software architects need to consider its organization, configuration, and use in order to predict the behavior of a system built on such a platform. Including a MOM in a software architecture requires knowing the impact of the messaging and of the infrastructure used; omitting the influence of the MOM would lead to erroneous predictions. This article explores that influence through component-based modeling and simulation, using the Palladio Component Model (PCM) approach. In particular, an application modeled in PCM was adapted to include message-based communication. Simulations on the model, systematic measurements, and load tests on the application made it possible to determine how changes introduced in the model influence predictions of the application's performance and reliability. A bottleneck that negatively impacts the performance and reliability of the original system was identified. Introducing the MOM improved the system's reliability at the expense of performance. Component-based performance simulation revealed significant differences with respect to the experiments based on load tests and measurements.
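PCM models are built in the Palladio tooling rather than written by hand, so the article's models cannot be quoted here. Purely to illustrate the tradeoff it reports (reliability gained, performance lost), below is a toy Python Monte Carlo in which a broker buffers messages while a receiver is briefly unavailable; every rate and latency figure is invented and has no connection to the article's measurements.

    import random

    random.seed(1)

    # Toy contrast of direct synchronous calls vs broker-mediated messaging.
    # All parameters are invented; they do not come from the article's PCM
    # model or its load-test measurements.
    N = 100_000
    P_DOWN = 0.05           # probability the receiver is briefly unavailable
    DIRECT_MS = 5.0         # latency of a direct synchronous call
    BROKER_MS = 3.0         # overhead added by the message-oriented middleware
    RETRY_MS = 20.0         # extra delay when a buffered message is redelivered

    direct_lost, mom_latency = 0, 0.0
    for _ in range(N):
        receiver_down = random.random() < P_DOWN
        if receiver_down:
            direct_lost += 1    # the synchronous call fails outright
        # with a MOM, the message is buffered and delivered once the receiver returns
        mom_latency += DIRECT_MS + BROKER_MS + (RETRY_MS if receiver_down else 0.0)

    print(f"direct calls lost: {direct_lost / N:.1%}")
    print(f"mean latency via MOM: {mom_latency / N:.2f} ms (vs {DIRECT_MS} ms direct)")

The qualitative outcome mirrors the article's finding: no requests are lost through the broker, but every request pays the middleware's latency overhead.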
A simulation-based approach to study the influence of different production flows on manufacturing of customized products
Manufacturing products tailored to the individual requirements of customers is a must if companies want to compete effectively on the market. The production of customized goods poses new challenges for all areas of functioning of production systems. It is necessary to adopt rules and methods that allow a flexible response to product design changes, and to changes in their demand, in the organization of production flow (of materials and information). The article presents research carried out in the SmartFactory laboratory of the Poznań University of Technology regarding the impact of product structure (customization) on the realization of current production orders. The research was carried out using the FlexSim simulation environment. Based on simulation experiments for three forms of production-flow organization with varying degrees of flexibility of production resources, the execution times of various sets of production orders and the level of utilization of available working time were analyzed. The results indicate that, in the production of products with both low and high planned labor consumption, the use of universal production stations is the most advantageous; for such a solution, the degree of utilization of the available working time of production stations is also the highest. It was also found that the rules for scheduling production orders affect the effectiveness of the production system. The best results were obtained for the schedule in which production orders were sequenced from the lowest planned resource-loading time upward.
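FlexSim experiments are configured graphically, so the study's models are not code. As a rough analogue of the comparison it describes, the Python/SimPy sketch below contrasts dedicated stations with universal (flexible) stations on the same order mix; all processing times and order counts are invented.

    import simpy

    # Toy contrast of two production-flow organizations on one order mix.
    # Processing times and counts are invented; the study's FlexSim
    # experiments are far more detailed.
    ORDERS = [("A", 4), ("B", 6)] * 25        # (product type, processing time)

    def makespan(flexible):
        env = simpy.Environment()
        if flexible:
            shared = simpy.Resource(env, capacity=2)
            stations = {"A": shared, "B": shared}   # universal stations take any order
        else:
            stations = {"A": simpy.Resource(env, capacity=1),
                        "B": simpy.Resource(env, capacity=1)}
        def job(kind, t):
            with stations[kind].request() as req:
                yield req
                yield env.timeout(t)
        for kind, t in ORDERS:
            env.process(job(kind, t))
        env.run()
        return env.now

    print("dedicated stations, makespan:", makespan(flexible=False))
    print("universal stations, makespan:", makespan(flexible=True))

With these toy numbers the universal configuration finishes sooner, because no station sits idle while the other is backlogged; this is the same direction of effect the study reports.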
Software process simulation modelling: A survey of practice
In recent years, simulation modelling of software development processes has attracted considerable interest in software engineering. Despite the growing interest, little literature is available that reports on the state of practice in software process simulation modelling (SPSM). We report the results of a survey of SPSM practice and relate it to simulation practice in general. The results indicate that software process simulation (SPS) modellers are generally methodical, work on large complex problems, develop large models, and have a systematic simulation modelling process in place. On the other hand, the simulation modelling process and simulation model evaluation were identified as the most urgent problems to be addressed in SPSM. The results from this investigation bring many problems into focus. The paper helps in understanding the characteristics of SPSM and of SPS modellers, and highlights areas of interest for further in-depth research in SPSM.
A Dynamic Integrated Framework for Software Process Improvement
Current software process models (CMM, SPICE, etc.) strongly recommend the application of statistical control and measurement guides to define, implement, and evaluate the effects of different process improvements. However, whilst quantitative modeling has been widely used in other fields, it has received comparatively little attention in the field of software process improvement. During the last decade, software process simulation has been used to address a wide diversity of management problems, including strategic management, technology adoption, understanding, training and learning, and risk management. In this work, a dynamic integrated framework for software process improvement is presented. This framework combines traditional estimation models with intensive use of dynamic simulation models of the software process. Its aim is to support qualitative and quantitative assessment for software process improvement and decision making, in order to achieve a higher software development process capability according to the Capability Maturity Model. The concepts underlying this framework have been implemented in a software process improvement tool that has been used in a local software organization. The results obtained and the lessons learned are also presented in this paper.
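The framework itself is tool-supported rather than a code listing, but the dynamic-simulation half is easy to sketch. Below is a minimal system-dynamics-style project model in Python, Euler-integrated, in which schedule pressure erodes productivity; the structure (a pressure feedback on output) is the classic idea, while every stock, flow, and coefficient here is invented.

    # Minimal system-dynamics-style sketch of a software project, integrated
    # with Euler steps. All coefficients are invented for illustration and
    # are not the framework's actual model.
    total_tasks = 1000.0
    remaining = total_tasks
    staff = 10.0
    nominal_rate = 1.0        # tasks per person-day
    deadline = 120.0          # planned duration in days
    dt, t = 1.0, 0.0

    while remaining > 0 and t < 400:
        required = remaining / max(deadline - t, 1.0)          # rate needed to finish on time
        pressure = max(0.0, required / (staff * nominal_rate))
        productivity = nominal_rate / (1.0 + 0.3 * pressure)   # pressure erodes output
        remaining -= staff * productivity * dt
        t += dt

    print(f"project finished at day {t:.0f} (planned: {deadline:.0f} days)")

Feeding a traditional estimator's output (say, an initial size estimate) into total_tasks is the kind of combination of estimation and dynamic simulation the framework proposes.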
SLiM 3: Forward Genetic Simulations Beyond the Wright–Fisher Model
With the desire to model population genetic processes under increasingly realistic scenarios, forward genetic simulations have become a critical part of the toolbox of modern evolutionary biology. The SLiM forward genetic simulation framework is one of the most powerful and widely used tools in this area. However, its foundation in the Wright–Fisher model has been found to pose an obstacle to implementing many types of models; it is difficult to adapt the Wright–Fisher model, with its many assumptions, to modeling ecologically realistic scenarios such as explicit space, overlapping generations, individual variation in reproduction, density-dependent population regulation, individual variation in dispersal or migration, local extinction and recolonization, mating between subpopulations, age structure, fitness-based survival and hard selection, emergent sex ratios, and so forth. In response to this need, we here introduce SLiM 3, which contains two key advancements aimed at abolishing these limitations. First, the new non-Wright–Fisher or “nonWF” model type provides a much more flexible foundation that allows the easy implementation of all of the above scenarios and many more. Second, SLiM 3 adds support for continuous space, including spatial interactions and spatial maps of environmental variables. We provide a conceptual overview of these new features, and present several example models to illustrate their use.
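SLiM models are written in its Eidos scripting language, so no SLiM code is quoted here. Purely to make the term concrete, the Python toy below shows what "non-Wright–Fisher" dynamics look like: overlapping generations, individual variation in reproduction, and density-dependent, fitness-based survival, all of which the Wright–Fisher model excludes. Every parameter is invented.

    import random

    random.seed(42)

    # Toy nonWF dynamics in Python, for illustration only; real SLiM 3
    # models are scripted in Eidos.
    K = 500                   # carrying capacity
    pop = [{"age": 0, "fitness": random.gauss(1.0, 0.1)} for _ in range(K)]

    for tick in range(100):
        # reproduction: individuals vary in offspring number
        offspring = []
        for ind in pop:
            for _ in range(random.randint(0, 2)):
                offspring.append({"age": 0,
                                  "fitness": max(0.0, ind["fitness"] + random.gauss(0, 0.05))})
        pop.extend(offspring)         # parents stay alive: generations overlap
        # survival: density-dependent and scaled by individual fitness
        density = len(pop) / K
        pop = [ind for ind in pop if random.random() < min(1.0, ind["fitness"] / density)]
        for ind in pop:
            ind["age"] += 1

    print(f"final population size: {len(pop)}")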
Brian 2, an intuitive and efficient neural simulator
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation: scientists write concise, high-level model descriptions, and Brian transforms them into efficient low-level code that runs interleaved with their own code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.

Simulating the brain starts with understanding the activity of a single neuron. From there, it quickly gets very complicated. To reconstruct the brain with computers, neuroscientists first have to understand how one brain cell communicates with another using electrical and chemical signals, and then describe these events in code. At that point, they can begin to build digital copies of complex neural networks, using simulators that turn models of how the brain works into simulated neural networks. These simulators need to be able to express many different models, simulate them accurately, and be relatively easy to use. Unfortunately, simulators that can express a wide range of models tend to require technical expertise from users or to perform poorly, while those that simulate efficiently can only do so for a limited set of models. One approach to increasing the range of models a simulator can express is to use so-called "model description languages". These languages describe each element within a model and the relationships between them, but only among a limited set of possibilities, which does not include the environment. This is a problem when attempting to simulate the brain, because a brain is precisely supposed to interact with the outside world. Stimberg et al. set out to develop a simulator that lets neuroscientists express many neural models in a simple way while preserving high performance, without using model description languages. Instead of describing each element within a specific model, the simulator generates code derived from the equations provided in the model; this code is then inserted into the computational experiment. Because the generated code is specific to each model, the simulator performs well across a range of models. The result, Brian 2, is a neural simulator designed to overcome the rigidity of other simulators while maintaining performance. Stimberg et al. illustrate the performance of Brian 2 with a series of computational experiments, showing how it can test unconventional models and demonstrating how users can extend the code to take Brian 2 beyond its built-in capabilities.
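As a concrete taste of the high-level interface described above, here is a minimal Brian 2 script in the style of the project's tutorials (the equation and parameters are illustrative, not taken from the paper): the model is a plain string, and Brian generates efficient low-level simulation code from it at runtime.

    from brian2 import NeuronGroup, SpikeMonitor, run, ms

    # A leaky unit driven toward 1.1 with a 10 ms time constant; Brian parses
    # the equation string and generates the simulation code itself.
    eqs = "dv/dt = (1.1 - v) / (10*ms) : 1"
    group = NeuronGroup(10, eqs, threshold="v > 1", reset="v = 0", method="exact")
    monitor = SpikeMonitor(group)
    run(100 * ms)
    print(f"{monitor.num_spikes} spikes recorded")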
PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems
Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell, a physics-based multicellular simulator, is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations of up to 10^5–10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net.
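PhysiCell itself is C++ with XML configuration, so its API is not shown here. Purely to illustrate the "stage and players" idea, the Python toy below couples a 1-D diffusing substrate (the stage) to agents that consume it and die when starved (the players); the grid size, rates, and thresholds are all invented.

    import random

    random.seed(0)

    # Toy stage-and-players sketch: a diffusing substrate plus consuming,
    # moving, dying agents. Illustrative only; PhysiCell's C++ solvers and
    # cell models are far richer.
    N = 50
    oxygen = [1.0] * N                      # substrate concentration per voxel
    cells = [random.randrange(N) for _ in range(30)]

    for step in range(200):
        # diffusion: explicit stencil with a source at both boundaries
        new = oxygen[:]
        for i in range(1, N - 1):
            new[i] = oxygen[i] + 0.2 * (oxygen[i - 1] - 2 * oxygen[i] + oxygen[i + 1])
        new[0] = new[-1] = 1.0
        oxygen = new
        # cells consume oxygen, then move randomly or die (necrosis) when starved
        survivors = []
        for pos in cells:
            oxygen[pos] = max(0.0, oxygen[pos] - 0.05)
            if oxygen[pos] > 0.1:
                survivors.append(max(0, min(N - 1, pos + random.choice((-1, 0, 1)))))
        cells = survivors

    print(f"{len(cells)} of 30 cells survive; center oxygen = {oxygen[N // 2]:.2f}")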
SLiM 2: Flexible, Interactive Forward Genetic Simulations
Modern population genomic datasets hold immense promise for revealing the evolutionary processes operating in natural populations, but a crucial prerequisite for this goal is the ability to model realistic evolutionary scenarios and predict their expected patterns in genomic data. To that end, we present SLiM 2: an evolutionary simulation framework that combines a powerful, fast engine for forward population genetic simulations with the capability of modeling a wide variety of complex evolutionary scenarios. SLiM achieves this flexibility through scriptability, which provides control over most aspects of the simulated evolutionary scenarios with a simple R-like scripting language called Eidos. An example SLiM simulation is presented to illustrate the power of this approach. SLiM 2 also includes a graphical user interface for simulation construction, interactive runtime control, and dynamic visualization of simulation output, facilitating easy and fast model development with quick prototyping and visual debugging. We conclude with a performance comparison between SLiM and two other popular forward genetic simulation packages.
Assessing the overall fit of composite models estimated by partial least squares path modeling
Purpose: This study examines the role of overall model fit assessment in the context of partial least squares path modeling (PLS-PM). It explains when it is important to assess overall model fit, provides ways of assessing the fit of composite models, and resolves major concerns about model fit assessment that have been raised in the PLS-PM literature. Design/methodology/approach: The paper explains when and how to assess the fit of PLS path models, discusses the concerns raised in the PLS-PM literature about overall model fit assessment, and provides concise guidelines on assessing the overall fit of composite models. Findings: Model fit assessment is as important for composite models as it is for common factor models. To assess the overall fit of composite models, researchers can use a statistical test and several fit indices known from structural equation modeling (SEM) with latent variables. Research limitations/implications: Researchers who use PLS-PM to assess composite models that aim to understand the mechanism of an underlying population and to draw statistical inferences should take the concept of overall model fit seriously. Practical implications: To facilitate the overall fit assessment of composite models, the study presents a two-step procedure adopted from the literature on SEM with latent variables. Originality/value: The paper clarifies that the necessity to assess model fit is not a question of which estimator is used (PLS-PM, maximum likelihood, etc.) but of the purpose of statistical modeling: whereas model fit assessment is paramount in explanatory modeling, it is not imperative in predictive modeling.
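For reference, one fit index commonly used in this literature (not restated in the abstract itself) is the standardized root mean square residual (SRMR), which compares the empirical correlations among the p indicators with their model-implied counterparts; in LaTeX notation, its standard SEM definition is

    \mathrm{SRMR} = \sqrt{\frac{2}{p(p+1)} \sum_{i=1}^{p} \sum_{j=1}^{i} \left( r_{ij} - \hat{\rho}_{ij} \right)^{2}}

where r_ij is the observed correlation between indicators i and j and \hat{\rho}_ij is the correlation implied by the estimated model; values near zero indicate good overall fit.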