Filters:
  • Discipline
  • Is Peer Reviewed
  • Reading Level
  • Content Type
  • Year (From / To)
  • More Filters
      • Item Type
      • Is Full-Text Available
      • Subject
      • Publisher
      • Source
      • Donor
      • Language
      • Place of Publication
      • Contributors
      • Location
14,279 result(s) for "Middleware"
Streamline Intelligent Crowd Monitoring with IoT Cloud Computing Middleware
This article introduces a novel middleware that utilizes cost-effective, low-power computing devices like Raspberry Pi to analyze data from wireless sensor networks (WSNs). It is designed for indoor settings like historical buildings and museums, tracking visitors and identifying points of interest. It serves as an evacuation aid by monitoring occupancy and gauging the popularity of specific areas, subjects, or art exhibitions. The middleware employs a basic form of the MapReduce algorithm to gather WSN data and distribute it across available compute nodes. Data collected by RFID sensors on visitor badges is stored on mini-computers placed in exhibition rooms and then transmitted to a remote database after a preset time frame. Using MapReduce for data analysis and a leader election algorithm for fault tolerance, the middleware demonstrates its viability through measured metrics, supporting applications such as swift prototyping and accurate validation of findings. Despite using simpler hardware, its performance matches resource-intensive methods involving audiovisual and AI techniques. The design’s innovation lies in its fault-tolerant, distributed setup built from budget-friendly, low-power devices rather than resource-heavy hardware or methods. Successfully tested at a historical building in Greece (M. Hatzidakis’ residence), it is tailored for indoor spaces. This paper compares its algorithmic application layer with other implementations, highlighting its technical strengths and advantages. Particularly relevant in the wake of the COVID-19 pandemic, and as general monitoring middleware for indoor locations, it holds promise for tracking visitor counts and overall building occupancy.
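The MapReduce-style aggregation described above can be pictured with a small sketch: each mini-computer maps its local RFID badge readings to per-room visit counts, and a coordinator reduces the partial counts into overall occupancy. The (room, badge, timestamp) layout and the function names are illustrative assumptions, not the paper's actual data model or code.

```python
from collections import Counter
from functools import reduce

# Hypothetical RFID readings gathered on two mini-computer nodes:
# (room, badge_id, timestamp). The layout is an assumption for illustration.
node_a = [("hall", "badge-1", 1700000000), ("hall", "badge-2", 1700000040),
          ("atrium", "badge-1", 1700000300)]
node_b = [("atrium", "badge-3", 1700000500), ("hall", "badge-3", 1700000900)]

def map_readings(readings):
    """Map step: turn raw badge readings into per-room visit counts."""
    return Counter(room for room, _badge, _ts in readings)

def merge_counts(a, b):
    """Reduce step: merge partial counts coming from different nodes."""
    return a + b

# Each node maps its own data; a coordinator node reduces the partials.
partials = [map_readings(node) for node in (node_a, node_b)]
occupancy = reduce(merge_counts, partials, Counter())
print(occupancy.most_common())  # [('hall', 3), ('atrium', 2)]
```

In the paper's setup the reduce step would run on whichever node the leader election algorithm currently designates, so a failed coordinator can be replaced without losing the ability to aggregate.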
MCPLOTS: a particle physics resource based on volunteer computing
The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers via the LHC@HOME 2.0 platform.
A Systematic Exploration on Challenges and Limitations in Middleware Programming for IoT Technology
In a distributed environment such as the IoT, the requirement for constant sensing and actuating from a diverse set of devices increases the complexity, and therefore the operational cost, of the software required to keep the system running. The article covers the conceptual and technological aspects, together with previous experiences, findings, and literature that constitute the body of knowledge on the issues and challenges of developing a middleware that supports the IoT domain's independent functionality. The article provides the foundation to understand the challenges faced in the development of IoT middleware, focusing on five sensitizing elements: IoT evolution, architecture, security, middleware, and programming. The systematic exploration of limitations in IoT software development revealed the need for programming methods and language abstractions that cope with the demands of IoT scenarios, specifically the challenges of massive communications, limited infrastructure, and multiplicity of devices.
Addressing tokens dynamic generation, propagation, storage and renewal to secure the GlideinWMS pilot based jobs and system
GlideinWMS was one of the first middleware systems in the WLCG community to transition from X.509 toward also supporting tokens. The first step was to go from the 2019 prototype to using tokens in production in 2022. This paper will present the challenges introduced by the wider adoption of tokens and the evolution plans for securing the pilot infrastructure of GlideinWMS and supporting the new requirements. In the last couple of years, the GlideinWMS team supported the migration of experiments and resources to tokens. Inadequate support in the current infrastructure, more stringent requirements, and the higher spatial and temporal granularity forced GlideinWMS to revisit once more how credentials are generated, used, and propagated. The new credential modules have been designed to be used in multiple systems (GlideinWMS, HEPCloud) and use a model in which credentials have a type, a purpose, and different flows. Credentials are dynamically generated in order to customize their duration and limit their scope to the targeted resource, which makes it possible to enforce the least-privilege principle. Finally, we also considered adding credential storage, renewal, and invalidation mechanisms within the GlideinWMS infrastructure to better serve the experiments’ needs.
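A toy model can make the credential design concrete: each credential carries an explicit type, purpose, and scope, and is minted on demand with a short lifetime so it grants no more than the targeted resource needs. The field names, lifetime, and helper below are illustrative assumptions, not the GlideinWMS credential modules or their API.

```python
import time
import secrets
from dataclasses import dataclass, field

@dataclass
class Credential:
    """Sketch of a scoped, short-lived credential with an explicit type
    and purpose (assumed fields, not the real GlideinWMS model)."""
    cred_type: str          # e.g. "token" or "x509_proxy"
    purpose: str            # e.g. "pilot_submission"
    scope: str              # limited to one target resource
    lifetime_s: int = 3600  # short duration limits exposure
    issued_at: float = field(default_factory=time.time)
    value: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def expired(self) -> bool:
        return time.time() > self.issued_at + self.lifetime_s

def credential_for(resource: str) -> Credential:
    """Mint a credential dynamically, restricting its scope to a single
    resource to approximate the least-privilege idea described above."""
    return Credential(cred_type="token",
                      purpose="pilot_submission",
                      scope=f"compute:{resource}")

cred = credential_for("site-A")
if cred.expired():
    cred = credential_for("site-A")  # a renewal step would replace it
```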
Nuevos Aportes de las Tecnologias de Informacion para el Desarrollo de Simulacion Distribuida/New Contributions of Information Technologies to Develop Distributed Simulation
Simulation is the process by which the observable behavior of a real process or system is represented, reproduced, or imitated over time and space. Distributed simulation makes it possible to accelerate the execution of a single model, to link and reuse multiple models in order to simulate larger ones, and to speed up the execution of experimentation stages. In this context, the construction of distributed simulations has improved in recent years thanks to the emergence of new information technologies. This article describes the principles, working modes, and time management approaches associated with this technique, together with the software tools that currently support its application. It also presents a literature review showing the growth (and importance) of its use as a study technique in different domains. Keywords: parallel and distributed computing; time management.
LM²K Model for Hosting an Application Based on Microservices in Multi-Cloud
Cloud computing has become a popular service delivery model, offering several advantages. However, there are still challenges that need to be addressed when applying the cloud model to specific scenarios. Two such challenges involve deploying and executing applications across multiple providers, each offering several services with similar functionalities and different capabilities. Dealing with application distribution across various providers can therefore be a complex task for a software architect because of the differing characteristics of the application components. Some works have proposed solutions to these challenges, but most of them focus on service providers. To facilitate the decision-making process of software architects, we previously presented PacificClouds, an architecture for managing the deployment and execution of applications based on microservices and distributed in a multi-cloud environment. In this work, we therefore focus on the challenges of selecting multiple clouds for PacificClouds and choosing the providers that best meet the microservices' and the software architect's requirements. We propose a selection model and three approaches to address various scenarios. We evaluate the performance of the approaches and conduct a comparative analysis of them. The results demonstrate their feasibility with regard to performance.
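To illustrate the kind of decision the selection approaches automate, here is a minimal greedy sketch that assigns each microservice to the cheapest provider satisfying its requirements. The requirement fields, the provider data, and the greedy strategy are invented for illustration and are not the paper's selection model.

```python
# Hypothetical provider catalogue and microservice requirements.
providers = {
    "cloud-A": {"regions": {"eu"}, "gpu": False, "price": 0.8},
    "cloud-B": {"regions": {"eu", "us"}, "gpu": True, "price": 1.2},
}
microservices = [
    {"name": "catalog", "region": "eu", "gpu": False},
    {"name": "ranking", "region": "us", "gpu": True},
]

def pick_provider(ms):
    """Greedily pick the cheapest provider that meets the microservice's
    region and GPU requirements."""
    candidates = [
        (p["price"], name) for name, p in providers.items()
        if ms["region"] in p["regions"] and (p["gpu"] or not ms["gpu"])
    ]
    if not candidates:
        raise ValueError(f"no provider satisfies {ms['name']}")
    return min(candidates)[1]

placement = {ms["name"]: pick_provider(ms) for ms in microservices}
print(placement)  # {'catalog': 'cloud-A', 'ranking': 'cloud-B'}
```

A real selection model would weigh more dimensions (cost, latency, provider capabilities, architect preferences) and could split a single application across several of the selected clouds.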
Application and platform decoupling strategy of substation integrated monitoring system
To address the poor data openness and sharing capability and the closed application ecology of substation integrated monitoring systems, an open substation monitoring system based on a platform + APP architecture is proposed. It decouples the APPs from the monitoring system vendor's platform through security-hardened container technology, unified data interaction technology, and data-service-oriented middleware technology. Decoupling APPs from the platform promotes the construction of an open, shared industrial ecosystem for substation integrated monitoring systems, which in turn improves the level of substation intelligence.
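The decoupling idea can be sketched as an APP written against a unified data-service interface, with each vendor platform plugged in behind it as an adapter. The class and method names below are illustrative assumptions, not the interfaces of the system described above.

```python
from abc import ABC, abstractmethod

class DataService(ABC):
    """Unified data-interaction contract exposed by the middleware
    (an assumed interface for illustration)."""
    @abstractmethod
    def read_measurement(self, point_id: str) -> float: ...

class VendorAPlatform(DataService):
    def read_measurement(self, point_id: str) -> float:
        # Adapter translating the unified call into vendor A's access path.
        return 10.5

class VendorBPlatform(DataService):
    def read_measurement(self, point_id: str) -> float:
        return 10.4

def busbar_voltage_app(service: DataService) -> None:
    """An APP coded once against the unified interface (and shipped in a
    hardened container) runs unchanged on either vendor platform."""
    print("busbar voltage:", service.read_measurement("busbar-1.U"))

busbar_voltage_app(VendorAPlatform())
busbar_voltage_app(VendorBPlatform())
```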