17 results for "Zemmouchi-Ghomari, Leila"
Selecting an appropriate supervised machine learning algorithm for predictive maintenance
Predictive maintenance refers to predicting malfunctions using data from equipment monitoring and process performance measurements. Machine learning algorithms and techniques are often used to analyze this monitoring data. Machine learning is the process by which a computer improves its precision by collecting and analyzing data. Machine learning algorithms often use supervised learning, in which labelled data is fed to the algorithm. However, many supervised machine learning algorithms are available, so choosing the best supervised machine learning algorithm for predictive maintenance problems is not trivial. This paper aims to improve the performance of predictive maintenance by selecting the most suitable supervised machine learning algorithm. Based on the criteria most commonly used in research articles, we selected three supervised machine learning algorithms from a comparative study: Random Forest, Decision Tree, and KNN. We then tested the selected algorithms on data from real-world and simulation scenarios. Finally, we conducted an experiment based on vibration analysis and reliability evaluation. We observed that Random Forest and Decision Tree achieved roughly the same performance. KNN is the better classifier for large volumes of data, whereas Random Forest performs better on small datasets.
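The comparison the abstract describes can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the paper's experiment: the paper's vibration data is not public, so a synthetic dataset from `make_classification` stands in for it, and the hyperparameters are defaults rather than the authors' choices.

```python
# Sketch: score the three classifiers the paper compares
# (Random Forest, Decision Tree, KNN) on stand-in synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary "failure / no failure" data in place of real sensor logs.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
}
# Fit each model and record its held-out accuracy.
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
print(scores)
```

On real predictive-maintenance data, the dataset size would drive the choice between KNN and the tree-based models, as the abstract's conclusion suggests.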
Artificial intelligence in intelligent transportation systems
Purpose: This article examines the contribution of artificial intelligence to augmenting Intelligent Transportation Systems (ITS) to enhance traffic flow, safety, and sustainability. Design/methodology/approach: The research investigates the use of AI technologies in ITS, including machine learning, computer vision, and deep learning. It analyzes case studies of ITS projects in Poznan, Mysore, Austin, New York City, and Beijing to identify essential components, advantages, and obstacles. Findings: Using AI in Intelligent Transportation Systems offers considerable opportunities for enhancing traffic efficiency, minimizing accidents, and fostering sustainable urban growth. Nonetheless, issues such as data quality, real-time processing, security, public acceptability, and privacy concerns need resolution. Originality/value: This article thoroughly examines AI-driven ITS, emphasizing successful applications and pinpointing significant difficulties. It underscores the need for a sustainable economic strategy for extensive adoption and enduring success.
How Industry 4.0 Can Benefit From Semantic Web Technologies and Artefacts
Industry 4.0 is a technology-driven manufacturing process that heavily relies on technologies, such as the internet of things (IoT), cloud computing, web services, and big real-time data. Industry 4.0 has significant potential if the challenges currently being faced by introducing these technologies are effectively addressed. Some of these challenges consist of deficiencies in terms of interoperability and standardization. Semantic Web technologies can provide useful solutions for several problems in this new industrial era, such as systems integration and consistency checks of data processing and equipment assemblies and connections. This paper discusses what contribution the Semantic Web can make to Industry 4.0.
Failure Case Studies and Challenges in ERP Integration
The 21st century treats information as a vital resource: it is studied, researched, and improved to meet the performance objectives an organization sets. Sustaining cohesive business reliability and growth requires information sharing and communication. Under increased competitive pressure, many organizations see enterprise resource planning (ERP) software integration as a way to manage their integrated information flow more effectively. Implementing such a project is very beneficial for a company because it revolves around performance and formality, both highly important and complex for organizations. However, given the budget, time, and resources an ERP implementation demands, this type of project is tricky. In this context, and based on the principle that people learn more from failures than from successes, in this article the authors examine the difficulties, stakes, and causes of ERP integration failures reported in the literature, in order to draw recommendations and lessons from these case studies.
Current Development of Ontology-Based Context Modeling
Any information used to characterize the situation of an entity (a person, a place, or an object) can be considered context. Indeed, context is crucial to avoiding semantic ambiguity in data interpretation. However, linking data to its context is a recognized research issue. Adopting an ontology-based approach to model context formally enables automatic interpretation and reasoning capabilities. This article discusses the main ontology-based context modeling approaches, highlighting their principles, scenarios, use cases, benefits, and challenges, in order to explore the use of ontologies to represent context.
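The "automatic interpretation and reasoning" the abstract mentions can be illustrated with a toy model: context facts as typed triples plus one hand-written rule. This is a plain-Python sketch standing in for the OWL/reasoner machinery the surveyed approaches actually use; all entity and property names here are invented for illustration.

```python
# Toy context model: (subject, property, value) triples for a person
# located in a room -- the kind of situation data the abstract describes.
triples = {
    ("alice", "type", "Person"),
    ("room42", "type", "MeetingRoom"),
    ("alice", "locatedIn", "room42"),
}

def infer_in_meeting(triples):
    """Rule: a Person locatedIn a MeetingRoom is inMeeting.

    A stand-in for the automatic interpretation a formal
    ontology-based context model enables.
    """
    inferred = set()
    for s, p, o in triples:
        if (p == "locatedIn"
                and (s, "type", "Person") in triples
                and (o, "type", "MeetingRoom") in triples):
            inferred.add((s, "status", "inMeeting"))
    return inferred

print(infer_in_meeting(triples))
```

In a real ontology-based model, the rule would be expressed in OWL or SWRL and derived by a reasoner rather than hand-coded, which is what makes the interpretation reusable across applications.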
Cohabitation of Relational Databases and Domain Ontologies in the Semantic Web Context
Although database technology is mature, it is not yet compatible with Semantic Web requirements, such as Semantic Web language concordance. Ontology technology, on the other hand, is the most promising means of concretizing the Semantic Web vision, yet it has not so far been recognized as the Semantic Web data model. This paper aims to bridge the gap between ontologies and databases in the context of the Semantic Web by highlighting their advantages and disadvantages, their complementarity, and the most popular means of converting one into the other.
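One direction of the "mutual conversion" the abstract mentions, relational to RDF, can be sketched naively: each row becomes a subject URI and each non-key column becomes a predicate. The table name, base URI, and quoting scheme below are invented for illustration; a real mapping would follow a standard such as W3C's R2RML and would type its literals properly.

```python
# Naive relational-row -> RDF-triples mapping (illustrative only).
BASE = "http://example.org/"

def row_to_triples(table, pk, row):
    """Map one relational row (a dict) to (subject, predicate, object) triples.

    The primary-key value mints the subject URI; every other column
    becomes a predicate with its value as a plainly quoted literal.
    """
    subject = f"<{BASE}{table}/{row[pk]}>"
    return [
        (subject, f"<{BASE}{table}#{col}>", f'"{val}"')
        for col, val in row.items() if col != pk
    ]

triples = row_to_triples("book", "id",
                         {"id": 7, "title": "Semantic Web", "year": 2015})
for s, p, o in triples:
    print(s, p, o, ".")
```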
Ontology assessment based on linked data principles
Purpose: The purpose of this paper is to evaluate ontologies with respect to the linked data principles. The paper presents a concrete interpretation of the four linked data principles applied to ontologies, along with an implementation that automatically detects violations of these principles and fixes them (semi-automatically). The implementation is applied to a number of state-of-the-art ontologies. Design/methodology/approach: Based on a precise and detailed interpretation of the linked data principles in the context of ontologies (to make them as reusable as possible), the authors propose a set of algorithms that assess ontologies against the four linked data principles, together with an implementation in a Java/Jena framework. All ontology elements are extracted and examined, taking particular cases such as blank nodes and literals into account. The authors also propose fixes for some of the detected anomalies. Findings: The experimental results are consistent with the proven quality of popular linked data cloud ontologies, as these ontologies obtained good scores from the linked data validator tool. Originality/value: The proposed approach and its implementation take all four linked data principles into account and propose means of correcting the anomalies detected in the assessed data sets, whereas most linked data validator tools focus on evaluating principle 2 (URI dereferenceability) and principle 3 (RDF validation) and do not tackle the issue of fixing detected errors.
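The flavor of check such a validator automates can be sketched for the simplest principle: ontology terms should be named with HTTP(S) URIs so they can be looked up. The paper's own tool is Java/Jena-based; the Python below is a stand-in, and the term list is invented for illustration.

```python
# Sketch: flag ontology terms that violate the "use HTTP URIs" principle.
from urllib.parse import urlparse

def violates_http_principle(term):
    """True if the term is not an http(s) URI (e.g. urn: names, blank nodes)."""
    return urlparse(term).scheme not in ("http", "https")

terms = [
    "http://xmlns.com/foaf/0.1/Person",  # fine: dereferenceable HTTP URI
    "urn:uuid:1234",                     # a name, but not resolvable over HTTP
    "_:b0",                              # blank node: no URI at all
]
violations = [t for t in terms if violates_http_principle(t)]
print(violations)
```

A full assessment along the paper's lines would go further for principle 2, actually dereferencing each URI and checking that useful RDF comes back, and would run an RDF validator for principle 3.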