19 result(s) for "Lysecky, Roman"
FIRE: A Finely Integrated Risk Evaluation Methodology for Life-Critical Embedded Systems
Life-critical embedded systems, including medical devices, are becoming increasingly interconnected and interoperable, providing great efficiency to the healthcare ecosystem. These systems incorporate complex software that plays an integrative and critical role. However, this complexity substantially increases the potential for cybersecurity threats, which directly impact patients’ safety and privacy. With software continuing to play a fundamental role in life-critical embedded systems, maintaining its trustworthiness by incorporating fail-safe modes via a multimodal design is essential. Comprehensive and proactive evaluation and management of cybersecurity risks, from initial design through deployment and long-term management, are equally critical. In this paper, we present FIRE, a finely integrated risk evaluation methodology for life-critical embedded systems. Security risks are carefully evaluated in a bottom-up approach, from operations to system modes, by adopting and expanding well-established vulnerability scoring schemes for life-critical systems, considering the impact on patient health and data sensitivity. FIRE combines static risk evaluation with runtime dynamic risk evaluation to establish comprehensive risk management throughout the lifecycle of the life-critical embedded system. We demonstrate the details and effectiveness of our methodology in systematically evaluating risks and conditions for risk mitigation with a smart connected insulin pump case study. Under normal conditions and eight different malware threats, the experimental results demonstrate effective threat mitigation by mode switching with a 0% false-positive mode switching rate.
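The bottom-up risk evaluation with runtime mode switching that the abstract describes can be sketched roughly as follows. This is a minimal toy, not FIRE's actual model: the 0-10 scoring scale, max-based aggregation rule, threshold value, and insulin-pump mode names are all illustrative assumptions.

```python
# Hypothetical sketch: operation-level risk scores (CVSS-style values in
# [0, 10]) roll up to a mode-level score; the system falls back to a
# reduced-functionality fail-safe mode when every richer mode's aggregated
# risk crosses a threshold.

def mode_risk(operation_risks):
    """Aggregate operation-level risks into a mode-level score.
    Take the max, since the riskiest operation dominates safety impact."""
    return max(operation_risks)

def select_mode(modes, threshold=7.0):
    """Pick the most capable mode whose aggregated risk stays below the
    threshold; fall back to the designated fail-safe mode otherwise."""
    # modes: list of (name, [operation risks]), most- to least-capable
    for name, risks in modes:
        if mode_risk(risks) < threshold:
            return name
    return "fail_safe"

# Illustrative insulin-pump modes (names and scores are invented):
modes = [
    ("full_connectivity", [3.1, 8.2, 4.0]),  # remote dosing -> risky op
    ("local_only",        [3.1, 4.0]),       # remote interface disabled
    ("fail_safe",         [1.0]),            # basal delivery only
]
print(select_mode(modes))  # -> local_only (8.2 exceeds the threshold)
```

A runtime monitor would re-evaluate the operation scores as threat conditions change and call `select_mode` again, which is the dynamic half of the static/dynamic combination the abstract mentions.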
Mixed Cryptography Constrained Optimization for Heterogeneous, Multicore, and Distributed Embedded Systems
Embedded systems continue to execute computational- and memory-intensive applications with vast data sets, dynamic workloads, and dynamic execution characteristics. Adaptive distributed and heterogeneous embedded systems are increasingly critical in supporting dynamic execution requirements. With pervasive network access within these systems, security is a critical design concern that must be considered and optimized within such dynamically adaptive systems. This paper presents a modeling and optimization framework for distributed, heterogeneous embedded systems. A dataflow-based modeling framework for adaptive streaming applications integrates models for computational latency, mixed cryptographic implementations for inter-task and intra-task communication, security levels, communication latency, and power consumption. For the security model, we present a level-based modeling of cryptographic algorithms using mixed cryptographic implementations. This level-based security model enables the development of an efficient, multi-objective genetic optimization algorithm to optimize security and energy consumption subject to current application requirements and security policy constraints. The presented methodology is evaluated using a video-based object detection and tracking application and several synthetic benchmarks representing various application types and dynamic execution characteristics. Experimental results demonstrate the benefits of a mixed cryptographic algorithm security model compared to using a single, fixed cryptographic algorithm. Results also highlight how security policy constraints can yield increased security strength and cryptographic diversity for the same energy constraint.
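The level-based security model described above pairs each cryptographic implementation with a strength level and an energy cost, and the optimizer searches for non-dominated security/energy trade-offs. The sketch below is an illustrative toy: the level table, additive energy model, and exhaustive Pareto filter are stand-ins for the paper's models and its genetic algorithm.

```python
# Toy mixed-cryptography model: each inter-task channel picks a crypto
# "level" trading security strength against energy cost; a Pareto filter
# keeps the non-dominated (security, energy) assignments.
from itertools import product

# Hypothetical table: level -> (security strength, energy per MB)
CRYPTO_LEVELS = {0: (1, 0.2), 1: (3, 0.9), 2: (5, 2.1)}

def evaluate(assignment, traffic_mb):
    """Total security (sum of per-channel strengths) and total energy
    for one assignment of crypto levels to channels."""
    sec = sum(CRYPTO_LEVELS[l][0] for l in assignment)
    eng = sum(CRYPTO_LEVELS[l][1] * mb for l, mb in zip(assignment, traffic_mb))
    return sec, eng

def pareto_front(points):
    """Keep assignments not dominated in (maximize security, minimize energy)."""
    front = []
    for a, (s, e) in points:
        dominated = any(s2 >= s and e2 <= e and (s2, e2) != (s, e)
                        for _, (s2, e2) in points)
        if not dominated:
            front.append((a, (s, e)))
    return front

traffic = [4.0, 1.0]  # MB transferred on each of two channels
points = [(a, evaluate(a, traffic)) for a in product(CRYPTO_LEVELS, repeat=2)]
front = pareto_front(points)
```

A real genetic optimizer would sample and evolve assignments instead of enumerating them, and would additionally reject assignments violating a security-policy floor, but the fitness evaluation has the same shape.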
Linking Activity to Information and Energy in Hardware
We propose to extend the framework for relating information and energy via activity [1] to the design of hardware. Pifer et al. [2] present an approach to DEVS-based hardware design, synthesis, and power optimization that exploits activity concepts. Here we summarize this approach with emphasis on the role of activity. We also provide some results from [2] that show the utility of various mechanisms to reduce power consumption based on DEVS and/or activity features.
Methods and Analysis of Automated Trace Alignment Under Power Obfuscation in Side Channel Attacks
Embedded systems are widely deployed in life-critical systems, but system constraints often limit the depth of security used in these devices, potentially leaving them open to numerous threats. Side channel attacks (SCAs) are a popular means of extracting sensitive information from embedded systems using only side channel leakage. Existing research has focused on obfuscating the sensitive data and operations under the assumption that attackers can readily and automatically identify the location of the sensitive operations in each trace, which is needed to align traces for a successful SCA. However, this is not always true, as the target sensitive data may be randomly located within the side channel leakage trace, which necessitates automatic preprocessing to identify those locations. Limited research has focused on evaluating the identification of these locations and the difficulty an attacker faces in locating sensitive information within side channel leakage traces. This paper presents a methodology for evaluating power obfuscation approaches that seek to obfuscate the location of sensitive operations within the power trace, thereby significantly increasing the complexity of automated trace alignment. This paper also presents a new adversary model and proposes a new metric, mean trials to success (MTTS), to evaluate different power obfuscation methods in the context of automated trace alignment. We evaluate two common obfuscation methods, namely instruction shuffling and random instruction insertion, and we present a new obfuscation method that uses power shaping to intentionally mislead the attacker.
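A mean-trials-to-success style metric can be illustrated with a short sketch. Everything below is an assumption for illustration, not the paper's definition: the success model (an attacker "locator" repeatedly guessing the sensitive operation's offset until it matches) and all names are invented.

```python
# Toy MTTS-style metric: average number of alignment attempts before the
# attacker's locator correctly finds the sensitive operation in a trace.
import random

def trials_to_success(locate, trace, true_offset, max_trials=1000):
    """Count attempts until the locator guesses the true offset."""
    for trial in range(1, max_trials + 1):
        if locate(trace) == true_offset:
            return trial
    return max_trials  # cap: locator never succeeded

def mtts(locate, traces_with_offsets):
    """Mean trials to success over a set of obfuscated traces."""
    counts = [trials_to_success(locate, t, off) for t, off in traces_with_offsets]
    return sum(counts) / len(counts)

# Baseline attacker: guesses a uniformly random offset each attempt.
rng = random.Random(0)
random_locator = lambda trace: rng.randrange(len(trace))

# (power trace, true offset of the sensitive operation) pairs
traces = [([0.0] * 16, 5), ([0.0] * 16, 11)]
score = mtts(random_locator, traces)  # higher score => stronger obfuscation
```

Under this framing, a stronger obfuscation (shuffling, insertion, or power shaping) degrades the locator toward random guessing and drives the MTTS score up.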
Scalability and Parallel Execution of Warp Processing: Dynamic Hardware/Software Partitioning
Warp processors are a novel architecture capable of autonomously optimizing an executing application by dynamically re-implementing critical kernels within the software as custom hardware circuits in an on-chip FPGA. Previous research on warp processing focused on low-power embedded systems, incorporating a low-end ARM processor as the main software execution resource. We provide a thorough analysis of the scalability of warp processing by evaluating several possible warp processor implementations, from low-power to high-performance, and by evaluating the potential for parallel execution of the partitioned software and hardware. We further demonstrate that even considering a high-performance 1 GHz embedded processor, warp processing provides the equivalent performance of a 2.4 GHz processor. By further enabling parallel execution between the processor and FPGA, the parallel warp processor execution provides the equivalent performance of a 3.2 GHz processor.
A high-level synthesis approach for precisely-timed, energy-efficient embedded systems
Embedded systems continue to rapidly proliferate in diverse fields, including medical devices, autonomous vehicles, and more generally, the Internet of Things (IoT). Many embedded systems require application-specific hardware components to meet precise timing requirements within limited resource (area and energy) constraints. High-level synthesis (HLS) is an increasingly popular approach for improving the productivity of designing hardware and reducing the time/cost by using high-level languages to specify computational functionality and automatically generate hardware implementations. However, current HLS methods provide limited or no support to incorporate or utilize precise timing specifications within the synthesis and optimization process. In this paper, we present a hybrid high-level synthesis (H-HLS) framework that integrates state-based high-level synthesis (SB-HLS) with performance-driven high-level synthesis (PD-HLS) methods to enable the design and optimization of application-specific embedded systems in which timing information is explicitly and precisely defined in state-based system models. We demonstrate the results achieved by this H-HLS approach using case studies including a wearable pregnancy monitoring device, an ECG-based biometric authentication system, and a synthetic system, and compare the design space exploration results using two PD-HLS tools to show how H-HLS can provide low energy and area under timing constraints.
Skip the Benchmark: Generating System-Level High-Level Synthesis Data using Generative Machine Learning
High-Level Synthesis (HLS) Design Space Exploration (DSE) is a widely accepted approach for efficiently exploring Pareto-optimal and optimal hardware solutions during the HLS process. Several HLS benchmarks and datasets are available for the research community to evaluate their methodologies. Unfortunately, these resources are limited and may not be sufficient for complex, multi-component system-level explorations. Generating new data using existing HLS benchmarks can be cumbersome, given the expertise and time required to effectively generate data for different HLS designs and directives. As a result, synthetic data has been used in prior work to evaluate system-level HLS DSE. However, the fidelity of the synthetic data to real data is often unclear, leading to uncertainty about the quality of system-level HLS DSE. This paper proposes a novel approach, called Vaegan, that employs generative machine learning to generate synthetic data that is robust enough to support complex system-level HLS DSE experiments that would be unattainable with only the currently available data. We explore and adapt a Variational Autoencoder (VAE) and Generative Adversarial Network (GAN) for this task and evaluate our approach using state-of-the-art datasets and metrics. We compare our approach to prior works and show that Vaegan effectively generates synthetic HLS data that closely mirrors the ground truth's distribution.
System-Level Design Space Exploration for High-Level Synthesis under End-to-End Latency Constraints
Many modern embedded systems have end-to-end (EtoE) latency constraints that necessitate precise timing to ensure high reliability and functional correctness. The combination of High-Level Synthesis (HLS) and Design Space Exploration (DSE) enables the rapid generation of embedded systems using various constraints/directives to find Pareto-optimal configurations. Current HLS DSE approaches often address latency by focusing on individual components, without considering the EtoE latency during the system-level optimization process. However, to truly optimize the system under EtoE latency, we need a holistic approach that analyzes individual system components' timing constraints in the context of how the different components interact and impact the overall design. This paper presents a novel system-level HLS DSE approach, called EtoE-DSE, that accommodates EtoE latency and variable timing constraints for complex multi-component application-specific embedded systems. EtoE-DSE employs a latency estimation model and a pathfinding algorithm to identify and estimate the EtoE latency for paths between any endpoints. It also uses a frequency-based segmentation process to segment and prune the design space, alongside a latency-constrained optimization algorithm for efficiently and accurately exploring the system-level design space. We evaluate our approach using a real-world use case of an autonomous driving subsystem compared to the state-of-the-art in HLS DSE. We show that our approach yields substantially better optimization results than prior DSE approaches, improving the quality of results by up to 89.26%, while efficiently identifying Pareto-optimal configurations in terms of energy and area.
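The path-based end-to-end latency estimation described above can be illustrated with a toy longest-path computation over a component DAG. The graph, node names, and additive cycle-count latency model below are illustrative assumptions, not the EtoE-DSE implementation.

```python
# Worst-case end-to-end latency between two endpoints of a component DAG,
# assuming each component contributes a fixed latency and path latencies add.
from functools import lru_cache

def etoe_latency(graph, latency, src, dst):
    """Longest (worst-case) path latency from src to dst in a DAG.
    graph: node -> list of successors; latency: node -> cycles."""
    @lru_cache(maxsize=None)
    def longest(node):
        if node == dst:
            return latency[dst]
        best = float("-inf")  # -inf marks nodes that cannot reach dst
        for nxt in graph.get(node, []):
            sub = longest(nxt)
            if sub > float("-inf"):
                best = max(best, latency[node] + sub)
        return best
    return longest(src)

# Invented autonomous-driving-style pipeline (names/latencies illustrative):
graph = {"camera": ["preprocess"], "preprocess": ["detect", "lane"],
         "detect": ["plan"], "lane": ["plan"], "plan": []}
latency = {"camera": 10, "preprocess": 25, "detect": 120, "lane": 60, "plan": 15}
print(etoe_latency(graph, latency, "camera", "plan"))  # -> 170 (via detect)
```

In a DSE loop, each candidate HLS directive set changes the per-component latencies, and an estimator like this one checks whether the resulting worst-case path still meets the end-to-end constraint before the configuration is kept.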
Are LLMs Any Good for High-Level Synthesis?
The increasing complexity and demand for faster, energy-efficient hardware designs necessitate innovative High-Level Synthesis (HLS) methodologies. This paper explores the potential of Large Language Models (LLMs) to streamline or replace the HLS process, leveraging their ability to understand natural language specifications and refactor code. We survey the current research and conduct experiments comparing Verilog designs generated by a standard HLS tool (Vitis HLS) with those produced by LLMs translating C code or natural language specifications. Our evaluation focuses on quantifying the impact on performance, power, and resource utilization, providing an assessment of the efficiency of LLM-based approaches. This study aims to illuminate the role of LLMs in HLS, identifying promising directions for optimized hardware design in applications such as AI acceleration, embedded systems, and high-performance computing.
Coral: An Ultra-Simple Language For Learning to Program
University-level introductory programming courses, CS0 (non-majors) and CS1 (majors), often teach an industry language, such as Java, C++, or Python. However, such languages were designed for professionals, not learners. Some CS0 courses teach a graphical programming language, such as Scratch, Snap, or Alice. However, many instructors want a more serious feel for college students, one that leads more directly into an industry language. In late 2017, we designed Coral, an ultra-simple language for learning to program that has both textual code and a graphical flowchart view. Concurrently, Coral's educational simulator was designed hand-in-hand with the language. Coral was designed specifically for learning core programming concepts: input/output, variables, assignments, expressions, branches, loops, functions, and arrays. Coral is intended as a step in learning; once Coral is learned, students might transition to an industry language. This paper describes Coral, including the design philosophy and pedagogical considerations. This paper also includes data on students' usage and perceptions of Coral during Summer and Fall 2018.