Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
187 result(s) for "Le Traon Yves"
A Case Driven Study of the Use of Time Series Classification for Flexibility in Industry 4.0
2020
With the Industry 4.0 paradigm comes the convergence of Internet Technologies and Operational Technologies, and concepts such as the Industrial Internet of Things (IIoT), cloud manufacturing, Cyber-Physical Systems (CPS), and so on. These concepts bring industries into the big data era and allow them to access potentially useful information to optimise the Overall Equipment Effectiveness (OEE). However, most European industries still rely on the Computer-Integrated Manufacturing (CIM) model, where the production systems run as independent systems (i.e., without any communication with the upper levels). Those production systems are controlled by a Programmable Logic Controller, in which a static and rigid program is implemented: the programmed routines cannot evolve over time unless a human modifies them. To go further in terms of flexibility, we are convinced that this requires moving away from the aforementioned old-fashioned and rigid automation to ML-based automation, i.e., where the control itself is based on decisions taken by ML algorithms. To verify this, we applied a time series classification method on a scale model of a factory using real industrial controllers, and widened the variety of parts the production line has to treat. This study shows that satisfactory results can be obtained only at the expense of human expertise (i.e., in the industrial process and in the ML process).
Journal Article
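The time-series classification baseline behind the study above can be illustrated with a minimal sketch: a 1-nearest-neighbour classifier over univariate series, a standard baseline for this task. The "part signatures" (sine vs. square profiles) and noise levels below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def nearest_neighbor_classify(train_X, train_y, query):
    """Classify a univariate time series by 1-nearest-neighbour
    Euclidean distance against a labelled training set."""
    dists = [np.linalg.norm(query - x) for x in train_X]
    return train_y[int(np.argmin(dists))]

# Synthetic "sensor" series: two hypothetical part types with
# different process signatures.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
train_X = [np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size) for _ in range(5)]
train_X += [np.sign(np.sin(2 * np.pi * t)) + rng.normal(0, 0.1, t.size) for _ in range(5)]
train_y = ["sine_part"] * 5 + ["square_part"] * 5

query = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size)
print(nearest_neighbor_classify(train_X, train_y, query))  # prints sine_part
```

A production deployment would of course use a trained model on real controller data; the point here is only the shape of the pipeline (labelled series in, class decision out).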
A Systematic Approach for Evaluating Artificial Intelligence Models in Industrial Settings
2021
Artificial Intelligence (AI) is one of the hottest topics in our society, especially when it comes to solving data-analysis problems. Industries are conducting their digital shift, and AI is becoming a cornerstone technology for making decisions out of the huge amount of (sensor-based) data available on the production floor. However, such technology may be disappointing when deployed in real conditions. Despite good theoretical performance and high accuracy when trained and tested in isolation, a Machine-Learning (ML) model may deliver degraded performance in real conditions. One reason may be fragility in properly treating unexpected or perturbed data. The objective of this paper is therefore to study the robustness of seven Machine-Learning and Deep-Learning algorithms when classifying univariate time series under perturbations. A systematic approach is proposed for artificially injecting perturbations into the data and for evaluating the robustness of the models. This approach focuses on two perturbations that are likely to happen during data collection. Our experimental study, conducted on twenty sensor datasets from the public University of California Riverside (UCR) repository, shows a great disparity in the models' robustness under data-quality degradation. Those results are used to analyse whether the impact of such perturbations can be predicted (using decision trees), which would avoid testing all perturbation scenarios. Our study shows that building such a predictor is not straightforward and suggests that such a systematic approach is needed for evaluating the robustness of AI models.
Journal Article
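The perturbation-injection idea in the abstract above can be sketched as two small corruption operators applied to a clean series before re-scoring a model. The specific perturbations (additive Gaussian noise at a target SNR, and sensor dropouts filled by zero-order hold) are plausible stand-ins, not necessarily the two studied in the paper.

```python
import numpy as np

def add_noise(series, snr_db, rng):
    """Inject additive Gaussian noise at a given signal-to-noise ratio (dB)."""
    power = np.mean(series ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    return series + rng.normal(0, np.sqrt(noise_power), series.size)

def drop_points(series, fraction, rng):
    """Simulate sensor dropouts: replace a random fraction of points
    with the last held value (zero-order hold)."""
    out = series.copy()
    idx = rng.choice(series.size, int(fraction * series.size), replace=False)
    for i in sorted(idx):
        out[i] = out[i - 1] if i > 0 else out[i]
    return out

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = add_noise(clean, snr_db=10, rng=rng)
holey = drop_points(clean, fraction=0.2, rng=rng)
# Robustness is then the model's accuracy on (noisy, holey) versus clean.
print(round(float(np.mean((noisy - clean) ** 2)), 3))
```

Sweeping `snr_db` and `fraction` over a grid and re-evaluating each trained model yields the kind of robustness profile the paper compares across algorithms.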
Selecting fault revealing mutants
by
Titcheu Chekam, Thierry
,
Bissyandé, Tegawendé F
,
Sen, Koushik
in
Faults
,
Machine learning
,
Mutation
2020
Mutant selection refers to the problem of choosing, among a large number of mutants, the (few) ones that should be used by the testers. In view of this, we investigate the problem of selecting the fault revealing mutants, i.e., the mutants that are killable and lead to test cases that uncover unknown program faults. We formulate two variants of this problem: fault revealing mutant selection and fault revealing mutant prioritization. We argue and show that these problems can be tackled through a set of 'static' program features and propose a machine learning approach, named FaRM, that learns to select and rank killable and fault revealing mutants. Experimental results involving 1,692 real faults show the practical benefits of our approach in both examined problems. Our results show that FaRM achieves a good trade-off between application cost and effectiveness (measured in terms of faults revealed). We also show that FaRM outperforms all the existing mutant selection methods, i.e., random mutant sampling, selective mutation, and defect prediction (mutating the code areas pointed to by defect prediction). In particular, our results show that with respect to mutant selection, our approach reveals 23% to 34% more faults than any of the baseline methods, while, with respect to mutant prioritization, it achieves a higher average percentage of revealed faults, with a median difference between 4% and 9% over random mutant orderings.
Journal Article
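The prioritization half of the problem above reduces to scoring each mutant with a learned model over static features and sorting. The sketch below uses a hand-written linear score; the feature names and weights are invented for illustration (the real FaRM model is trained on thousands of labelled mutants).

```python
# Hypothetical static features per mutant: control-flow depth of the
# mutated statement, whether it sits on a branch, and a complexity proxy.
mutants = {
    "m1": {"cfg_depth": 1, "in_branch": 0, "stmt_complexity": 2},
    "m2": {"cfg_depth": 4, "in_branch": 1, "stmt_complexity": 7},
    "m3": {"cfg_depth": 2, "in_branch": 1, "stmt_complexity": 3},
}

# Illustrative "learned" weights -- these numbers are made up.
weights = {"cfg_depth": 0.4, "in_branch": 1.0, "stmt_complexity": 0.2}

def score(features):
    """Linear score standing in for the learned probability that a
    mutant is killable and fault revealing."""
    return sum(weights[k] * v for k, v in features.items())

# Prioritization: test the highest-scoring mutants first.
ranking = sorted(mutants, key=lambda m: score(mutants[m]), reverse=True)
print(ranking)  # ['m2', 'm3', 'm1']
```

Selection is then just truncating this ranking to the testing budget.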
Global Observations of Fine-Scale Ocean Surface Topography With the Surface Water and Ocean Topography (SWOT) Mission
by
Ardhuin, Fabrice
,
Ponte, Aurélien
,
Pascual, Ananda
in
Altimeters
,
Atmospheric boundary layer
,
Biogeochemical cycle
2019
The future international Surface Water and Ocean Topography (SWOT) mission, planned for launch in 2021, will make high-resolution 2D observations of sea-surface height (SSH) using SAR radar interferometric techniques. SWOT will map the global and coastal oceans up to 77.6° latitude every 21 days over a swath of 120 km (with a 20 km nadir gap). Today's 2D mapped altimeter data can resolve ocean scales of 150 km wavelength, whereas the SWOT measurement will extend our 2D observations down to 15-30 km, depending on sea state. SWOT will offer new opportunities to observe oceanic dynamic processes at these scales, which are important in the generation and dissipation of kinetic energy in the ocean and act as one of the main gateways connecting the ocean interior to the upper layer. The active vertical exchanges linked to these scales have impacts on the local and global budgets of heat and carbon, and on nutrients for biogeochemical cycles. This review paper highlights the issues being addressed by the SWOT science community to understand SWOT's very precise SSH / surface-pressure observations, and it explores how SWOT data will be combined with other satellite and in-situ data and models to better understand the upper-ocean 4D circulation (x, y, z, t) over the next decade. SWOT's new SAR-interferometry technology aims to observe ocean SSH scales down to 15-30 km in wavelength. At these scales, SSH includes "balanced" geostrophic eddy motions as well as high-frequency internal tides and internal waves. This presents a challenge both in reconstructing the 4D upper-ocean circulation and in the assimilation of SSH in models, but also an opportunity to obtain global observations of the 2D structure of these phenomena and to learn more about their interactions. At these small scales the ocean dynamics evolve rapidly, and combining SWOT 2D SSH data with other satellite or in-situ data with different space-time coverage is also a challenge.
SWOT’s new technology will be a forerunner for the future altimetric observing system, and so advancing on these issues today will pave the way for our future.
Journal Article
How Deep Argo Will Improve the Deep Ocean in an Ocean Reanalysis
by
Gasparin, Florent
,
Le Traon, Pierre-Yves
,
Hamon, Mathieu
in
Arrays
,
Climate change
,
Data assimilation
2020
Global ocean sampling with autonomous floats going to 4000–6000 m, known as the deep Argo array, constitutes one of the next challenges for tracking climate change. The question here is how such a global deep array will impact ocean reanalyses. Based on the different behavior of four ocean reanalyses, we first identified that large uncertainty exists in current reanalyses in representing local heat and freshwater fluxes in the deep ocean (1 W m−2 and 10 cm yr−1 regionally). Additionally, temperature and salinity comparison with deep Argo observations demonstrates that reanalysis errors in the deep ocean are as large as, or even larger than, the deep ocean signal. An experimental approach, using the 1/4° GLORYS2V4 (Global Ocean Reanalysis and Simulation) system, is then presented to anticipate how the evolution of the global ocean observing system (GOOS), with the advent of deep Argo, would contribute to ocean reanalyses. Based on observing system simulation experiments (OSSE), which consist of extracting observing system datasets from a realistic simulation to be subsequently assimilated in an experimental system, this study suggests that a global deep Argo array of 1200 floats will significantly constrain the deep ocean by reducing temperature and salinity errors by around 50%. Our results also show that such a deep global array will help ocean reanalyses to reduce error in temperature changes below 2000 m, equivalent to global ocean heat fluxes from 0.15 to 0.07 W m−2, and from 0.26 to 0.19 W m−2 for the entire water column. This work exploits the capabilities of operational systems to provide comprehensive information for the evolution of the GOOS.
Journal Article
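The headline OSSE diagnostic in the abstract above (errors "reduced by around 50%") is a simple ratio of RMS errors scored against the nature run. A minimal sketch, with synthetic numbers chosen purely so that the assimilated error is half the free-run error:

```python
import numpy as np

def rmse(field, truth):
    """Root-mean-square error of a model field against the nature run."""
    return float(np.sqrt(np.mean((field - truth) ** 2)))

def error_reduction(free_run, assimilated, truth):
    """Relative error reduction of an assimilative experiment versus a
    free run, both scored against the nature-run 'truth' -- the standard
    OSSE diagnostic."""
    return 1.0 - rmse(assimilated, truth) / rmse(free_run, truth)

rng = np.random.default_rng(2)
truth = rng.normal(10.0, 0.5, size=1000)        # stand-in deep temperature, degC
free_run = truth + rng.normal(0, 0.10, 1000)    # unconstrained model error
assim = truth + rng.normal(0, 0.05, 1000)       # error halved by the deep array
print(round(error_reduction(free_run, assim, truth), 2))  # close to 0.5 by construction
```

In a real OSSE the "truth" is a high-resolution simulation, and the two experiments differ only in which simulated observations are assimilated.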
Contribution of future wide-swath altimetry missions to ocean analysis and forecasting
2018
The impact of forthcoming wide-swath altimetry missions on the ocean analysis and forecasting system was investigated by means of OSSEs (observing system simulation experiments). These experiments were performed with a regional data assimilation system, implemented in the Iberian–Biscay–Ireland (IBI) region at 1/12° resolution, using simulated observations derived from a fully eddy-resolving free simulation at 1/36° resolution over the same region. The objective of the experiments was to assess the ability of different satellite constellations to constrain the ocean analyses and forecasts, considering both along-track altimeters and future wide-swath missions; consequently, the capability of the data assimilation techniques used in the Mercator Ocean operational system to effectively combine the different kinds of measurements was also investigated. These assessments were carried out as part of a European Space Agency (ESA) study on the potential role of wide-swath altimetry in future versions of the European Union Copernicus programme. The impact of future wide-swath altimetry data on the reliability of sea level estimates is evident in the OSSEs. The most significant results were obtained when looking at the sensitivity of the system to wide-swath instrumental error: considering a constellation of three nadir and two "accurate" (small instrumental error) wide-swath altimeters, the error in ocean analysis was reduced by up to 50% compared to conventional altimeters. Regarding the impact of the revisit frequency of the future measurements, the results showed that two wide-swath missions had a major impact on sea-level forecasting, increasing the accuracy over the entire time window of the 5-day forecasts, compared with a single wide-swath instrument.
A spectral analysis underlined that the contributions of wide-swath altimetry data observed in ocean analyses and forecast statistics were mainly due to the more accurate resolution, compared with along-track data, of ocean variability at spatial scales smaller than 100 km. Considering the ocean currents, the results confirmed that the information provided by wide-swath measurements at the surface is propagated down the water column and has a considerable impact (30 %) on ocean currents (up to a depth of 300 m), compared with the present constellation of altimeters. The ocean analysis and forecasting systems used here are those currently used by the Copernicus Marine Environment and Monitoring Service (CMEMS) to provide operational services and ocean reanalysis. The results obtained in the OSSEs considering along-track altimeters were consistent with those derived from real data (observing system experiments, OSEs). OSSEs can also be used to assess the potential of new observing systems, and in this study the results showed that future constellations of altimeters will have a major impact on constraining the CMEMS ocean analysis and forecasting systems and their applications.
Journal Article
Impact of Multiple Altimeter Data and Mean Dynamic Topography in a Global Analysis and Forecasting System
2019
Satellite altimetry is one of the main sources of information used to constrain global ocean analysis and forecasting systems. In addition to in situ vertical temperature and salinity profiles and sea surface temperature (SST) data, sea level anomalies (SLA) from multiple altimeters are assimilated through the knowledge of a surface reference, the mean dynamic topography (MDT). The quality of analyses and forecasts mainly depends on the availability of SLA observations and on the accuracy of the MDT. A series of observing system evaluations (OSEs) were conducted to assess the relative importance of the number of assimilated altimeters and the accuracy of the MDT in a Mercator Ocean global 1/4° ocean data assimilation system. Dedicated tools were used to quantify impacts on analyzed and forecast sea surface height and temperature/salinity in deeper layers. The study shows that a constellation of four altimeters associated with a precise MDT is required to adequately describe and predict upper-ocean circulation in a global 1/4° ocean data assimilation system. Compared to a one-altimeter configuration, a four-altimeter configuration reduces the mean forecast error by about 30%, but the reduction can reach more than 80% in western boundary current (WBC) regions. The use of the most recent MDT updates improves the accuracy of analyses and forecasts to the same extent as assimilating a fourth altimeter.
Journal Article
On Locating Malicious Code in Piggybacked Android Apps
by
Li Li;Daoyuan Li;Tegawende F. Bissyande;Jacques Klein;Haipeng Cai;David Lo;Yves Le Traon
in
Applications programs
,
Artificial Intelligence
,
Building codes
2017
To devise efficient approaches and tools for detecting malicious packages in the Android ecosystem, researchers are increasingly required to have a deep understanding of malware. There is thus a need to provide a framework for dissecting malware and locating malicious program fragments within app code in order to build a comprehensive dataset of malicious samples. Towards addressing this need, we propose in this work a tool-based approach called HookRanker, which provides ranked lists of potentially malicious packages based on the way malware behaviour code is triggered. With experiments on a ground truth of piggybacked apps, we are able to automatically locate the malicious packages from piggybacked Android apps with an accuracy@5 of 83.6% for packages triggered through method invocations and an accuracy@5 of 82.2% for packages triggered independently.
Journal Article
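The accuracy@5 metric reported above is simple to state precisely: the fraction of apps for which the truly malicious package appears among the top five ranked candidates. A minimal sketch, with made-up package names:

```python
def accuracy_at_k(ranked_lists, ground_truth, k=5):
    """Fraction of apps whose true malicious package appears in the
    top-k entries of the ranked candidate list."""
    hits = sum(1 for app, ranked in ranked_lists.items()
               if ground_truth[app] in ranked[:k])
    return hits / len(ranked_lists)

# Hypothetical HookRanker-style output: one ranked package list per app.
ranked = {
    "app1": ["pkg.ads", "pkg.mal", "pkg.ui"],
    "app2": ["pkg.core", "pkg.net", "pkg.mal", "pkg.db", "pkg.gl", "pkg.x"],
    "app3": ["pkg.a", "pkg.b", "pkg.c", "pkg.d", "pkg.e", "pkg.mal"],
}
truth = {"app1": "pkg.mal", "app2": "pkg.mal", "app3": "pkg.mal"}
print(accuracy_at_k(ranked, truth, k=5))  # 2 of 3 apps hit in the top 5
```

app3's malicious package is ranked sixth, so it counts as a miss at k=5 but a hit at k=6.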
FixMiner: Mining relevant fix patterns for automated program repair
2020
Patching is a common activity in software development. It is generally performed on a source code base to address bugs or add new functionalities. In this context, given the recurrence of bugs across projects, the associated similar patches can be leveraged to extract generic fix actions. While the literature includes various approaches leveraging similarity among patches to guide program repair, these approaches often do not yield fix patterns that are tractable and reusable as actionable input to APR systems. In this paper, we propose a systematic and automated approach to mining relevant and actionable fix patterns based on an iterative clustering strategy applied to atomic changes within patches. The goal of FixMiner is thus to infer separate and reusable fix patterns that can be leveraged in other patch generation systems. Our technique, FixMiner, leverages the Rich Edit Script, a specialized tree structure of the edit scripts that captures the AST-level context of the code changes. FixMiner uses different tree representations of Rich Edit Scripts for each round of clustering to identify similar changes: abstract syntax trees, edit actions trees, and code context trees. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in Rich Edit Scripts. We further integrated the mined patterns into an automated program repair prototype, PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 81% of PARFixMiner's generated plausible patches are correct.
Journal Article
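The clustering step described above, grouping atomic changes whose tree representations match, can be caricatured by abstracting each change to a signature and grouping on it. The flat tuples below stand in for Rich Edit Script trees purely for illustration; the node and action names are invented.

```python
from collections import defaultdict

# Hypothetical atomic changes: (edit action, AST context, changed node).
changes = [
    ("UPD", "IfStatement", "InfixExpression:!="),
    ("UPD", "IfStatement", "InfixExpression:=="),
    ("INS", "IfStatement", "ThrowStatement"),
    ("UPD", "IfStatement", "InfixExpression:!="),
    ("INS", "IfStatement", "ThrowStatement"),
]

def pattern_key(change):
    """Abstract away operand details so similar edits fall together."""
    action, context, node = change
    return (action, context, node.split(":")[0])

clusters = defaultdict(list)
for c in changes:
    clusters[pattern_key(c)].append(c)

# Signatures recurring across patches become candidate fix patterns.
patterns = {k: len(v) for k, v in clusters.items() if len(v) >= 2}
print(patterns)
```

The real system iterates this at several abstraction levels (syntax, edit actions, code context) rather than using a single key function.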
Diagnosing Surface Mixed Layer Dynamics from High-Resolution Satellite Observations: Numerical Insights
by
Le Traon, Pierre-Yves
,
Ponte, Aurelien L.
,
Klein, Patrice
in
Adiabatic
,
Dynamics
,
Dynamics of the ocean (upper and deep oceans)
2013
High-resolution numerical experiments of ocean mesoscale eddy turbulence show that the wind-driven mixed layer (ML) dynamics affects mesoscale motions in the surface layers at scales lower than O(60 km). At these scales, surface horizontal currents are still coherent with, but weaker than, those derived from sea surface height using geostrophy. Vertical motions, on the other hand, are stronger than those diagnosed using the adiabatic quasigeostrophic (QG) framework. An analytical model, based on a scaling analysis and on simple dynamical arguments, provides a physical understanding and leads to a parameterization of these features in terms of vertical mixing. These results are valid when the wind-driven velocity scale is much smaller than that associated with eddies and the Ekman number (related to the ratio between the Ekman and ML depth) is not small. This suggests that, in these specific situations, three-dimensional ML motions (including the vertical velocity) can be diagnosed from high-resolution satellite observations combined with a climatological knowledge of ML conditions and interior stratification.
Journal Article
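The regime condition stated in the abstract above can be made concrete. Writing $A_v$ for the vertical eddy viscosity, $f$ for the Coriolis parameter, and $h_{ML}$ for the mixed-layer depth (standard textbook notation, not taken from the paper), one common convention gives

```latex
\delta_E = \sqrt{\frac{2 A_v}{f}},
\qquad
Ek \sim \left(\frac{\delta_E}{h_{ML}}\right)^{2} = \frac{2 A_v}{f\, h_{ML}^{2}},
```

so the parameterization applies when the wind-driven velocity scale is much smaller than the eddy velocity scale while $Ek$ is order one, i.e. the Ekman layer occupies a substantial fraction of the mixed layer.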