Catalogue Search | MBRL
135 result(s) for "Karri, Ramesh"
BioTrojans: viscoelastic microvalve-based attacks in flow-based microfluidic biochips and their countermeasures
2024
Flow-based microfluidic biochips (FMBs) are widely used in biomedical research and diagnostics. However, their security against potential material-level cyber-physical attacks remains inadequately explored, posing a significant future challenge. Polydimethylsiloxane (PDMS) microvalves, one of the main components, are pivotal to FMBs' functionality. However, their fabrication, which involves thermal curing, makes them susceptible to chemical tampering-induced material degradation attacks. Here, we demonstrate one such material-based attack, termed “BioTrojans”: chemically tampered, optically stealthy microvalves that can be ruptured through low-frequency actuations. To chemically tamper with the microvalves, we altered the associated PDMS curing ratio. Attack demonstrations showed that BioTrojan valves with 30:1 and 50:1 curing ratios ruptured quickly under 2 Hz actuations, while authentic microvalves with a 10:1 ratio remained intact even after being actuated at the same frequency for 2 days (345,600 cycles). Dynamic mechanical analyzer (DMA) results and associated finite element analysis revealed that a BioTrojan valve stores three orders of magnitude more mechanical energy than an authentic one, making it highly susceptible to low-frequency-induced rupture. To counter BioTrojan attacks, we propose a security-by-design approach that uses smooth peripheral fillets to reduce stress concentration by over 50%, and a spectral authentication method using fluorescent microvalves capable of effectively detecting BioTrojans.
Journal Article
FEINT: Automated Framework for Efficient INsertion of Templates/Trojans into FPGAs
by Pearce, Hammond; Trujillo, Joshua; Karri, Ramesh
in Artificial intelligence; Automation; Composition
2024
Field-Programmable Gate Arrays (FPGAs) play a significant and evolving role across industries and applications in the current technological landscape. They are widely known for their flexibility, rapid prototyping, reconfigurability, and design development features. FPGA designs are often constructed as compositions of interconnected modules that implement the various features/functionalities required in an application. This work develops FEINT, a novel tool that facilitates this module composition process and automates the design-level modifications required when introducing new modules into an existing design. The proposed methodology is architected as a “template” insertion tool that operates on a user-provided configuration script to introduce dynamic design features as plugins at different stages of the FPGA design process, facilitating rapid prototyping, composition-based design evolution, and system customization. FEINT is useful where designers need to tailor system behavior without expert FPGA programming skills or significant manual effort; for example, it can help insert defensive monitoring, adversarial Trojan, and plugin-based functionality-enhancement features. FEINT is scalable, future-proof, and cross-platform, with no dependence on vendor-specific file formats, ensuring compatibility across FPGA families and tool versions and integrability with commercial tools. To assess FEINT's effectiveness, our tests covered the injection of various types of templates/modules into FPGA designs. In the Trojan insertion context, our tests consider diverse Trojan behaviors and triggers, including key-leakage and denial-of-service Trojans. We evaluated FEINT's applicability to complex designs by creating an FPGA design featuring a MicroBlaze soft-core processor connected to an AES accelerator via an AXI-bus interface. FEINT can successfully and efficiently insert various templates into this design at different FPGA design stages.
Journal Article
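The template-insertion flow the abstract describes can be pictured as a configuration-driven splice into design text. The sketch below is a hypothetical, much-simplified Python illustration: the anchor comment, the config keys, and the `monitor` module are invented for this example and are not FEINT's actual format.

```python
# Hypothetical mini-version of configuration-driven template insertion:
# splice a user-specified module instantiation into HDL source text at a
# marked anchor point. Names and the config format are illustrative only.
CONFIG = {
    "anchor": "// PLUGIN_SLOT",                              # where to insert
    "template": "monitor u_mon(.clk(clk), .bus(axi_bus));",  # what to insert
}

def insert_template(design_text: str, config: dict) -> str:
    out = []
    for line in design_text.splitlines():
        out.append(line)
        if config["anchor"] in line:
            out.append("  " + config["template"])  # plugin lands after the anchor
    return "\n".join(out)

design = "module top(input clk);\n  // PLUGIN_SLOT\nendmodule"
modified = insert_template(design, CONFIG)
```

A real tool would of course operate on netlists and bitstream-level representations at several design stages rather than on raw source text; the point here is only the config-driven, anchor-based insertion pattern.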
Manufacturing and Security Challenges in 3D Printing
by Rajendran, Jeyavijayan; Karri, Ramesh; Zeltmann, Steven Eric
in 3-D printers; Additive manufacturing; Automobile shows
2016
As the manufacturing time, quality, and cost associated with additive manufacturing (AM) continue to improve, more and more businesses and consumers are adopting this technology. Some of the key benefits of AM include customizing products, localizing production, and reducing logistics costs. Due to these and numerous other benefits, AM is enabling a globally distributed manufacturing process and supply chain spanning multiple parties, and hence raises concerns about the reliability of the manufactured product. In this work, we first present a brief overview of the potential risks that exist in the cyber-physical environment of additive manufacturing. We then evaluate the risks posed by two different classes of modifications to the AM process which are representative of the challenges that are unique to AM. The risks posed are examined through mechanical testing of objects with altered printing orientation and fine internal defects. Finite element analysis and ultrasonic inspection are also used to demonstrate the potential for decreased performance and for evading detection. The results highlight several scenarios, intentional or unintentional, that can affect the product quality and pose security challenges for the additive manufacturing supply chain.
Journal Article
Exploiting Small Leakages in Masks to Turn a Second-Order Attack into a First-Order Attack and Improved Rotating Substitution Box Masking with Linear Code Cosets
2015
Masking countermeasures, used to thwart side-channel attacks, have been shown to be vulnerable to mask-extraction attacks. State-of-the-art mask-extraction attacks on the Advanced Encryption Standard (AES) algorithm target S-Box recomputation schemes but have not been applied to scenarios where S-Boxes are precomputed offline. We propose an attack targeting precomputed S-Boxes stored in nonvolatile memory. Our attack targets AES implemented in software protected by a low-entropy masking scheme and recovers the masks with a 91% success rate. Recovering the secret key requires fewer power traces (by at least two orders of magnitude) compared to a classical second-order attack. Moreover, we show that this attack remains viable in a noisy environment or with a reduced number of leakage points. Finally, we propose a method to enhance the countermeasure by selecting a suitable coset of the mask set.
Journal Article
Low-Cost Concurrent Error Detection for GCM and CCM
2014
In many applications, encryption alone does not provide enough security. To enhance security, dedicated authenticated encryption (AE) modes have been devised. Galois/Counter Mode (GCM) and Counter with CBC-MAC mode (CCM) are the AE modes recommended by the National Institute of Standards and Technology. To support high data rates, AE modes are usually implemented in hardware. However, natural faults reduce their reliability and may undermine both their encryption and authentication capabilities. We present a low-cost concurrent error detection (CED) scheme for seven AE architectures. The proposed technique exploits idle cycles of the AE mode architectures. Experimental results show that the performance overhead can be lower than 100% for all architectures, depending on the workload. FPGA implementation results show that the hardware overhead is in the 0.1–23.3% range and the power overhead is in the 0.2–23.2% range. ASIC implementation results show that the hardware overhead is in the 0.1–22.8% range and the power overhead is in the 0.3–12.6% range. The underlying block cipher and hash module need not have CED built in; thus, system designers can integrate block cipher and hash function intellectual property from different vendors.
Journal Article
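The idle-cycle idea can be caricatured in a few lines: re-run the same computation when the datapath would otherwise be idle and compare the two results. The sketch below uses a placeholder XOR "cipher" rather than GCM or CCM, purely to show the recompute-and-compare pattern:

```python
def toy_cipher(block: int, key: int) -> int:
    # Placeholder round function standing in for the real AE datapath.
    return block ^ key

def ced_encrypt(block: int, key: int, fault: int = 0) -> int:
    first = toy_cipher(block, key) ^ fault  # a transient fault may corrupt pass 1
    second = toy_cipher(block, key)         # recomputation during an idle cycle
    if first != second:
        raise RuntimeError("CED alarm: original and recomputed results differ")
    return first

ct = ced_encrypt(0x3C, 0x5A)  # fault-free: both passes agree

detected = False
try:
    ced_encrypt(0x3C, 0x5A, fault=0x01)  # injected single-bit fault
except RuntimeError:
    detected = True  # the compare catches the mismatch
```

The paper's contribution is doing this without doubling hardware or throughput, by scheduling the recomputation into cycles the AE architectures already leave idle; the sketch only shows the detection principle.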
Causative Cyberattacks on Online Learning-based Automated Demand Response Systems
by Acharya, Samrat; Dvorkin, Yury; Karri, Ramesh
in Artificial intelligence; Automation; Colleges & universities
2023
Power utilities are adopting Automated Demand Response (ADR) to replace costly fuel-fired generators and to preempt congestion during peak electricity demand. Similarly, third-party Demand Response (DR) aggregators are leveraging controllable small-scale electrical loads to provide on-demand grid support services to the utilities. Some aggregators and utilities have started employing Artificial Intelligence (AI) to learn the energy usage patterns of electricity consumers and use this knowledge to design optimal DR incentives. Such AI frameworks use open communication channels between the utility/aggregator and the DR customers, which are vulnerable to causative data integrity cyberattacks. This paper explores vulnerabilities of AI-based DR learning and designs a data-driven attack strategy informed by DR data collected from the New York University (NYU) campus buildings. The case study demonstrates the feasibility and effects of maliciously tampering with (i) real-time DR incentives, (ii) DR event data sent to DR customers, and (iii) responses of DR customers to the DR incentives.
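A causative attack corrupts the *training* data an online learner sees. A minimal sketch of the flavor of attack described here, with an invented linear price-response model and invented numbers (not the paper's NYU dataset or learning framework):

```python
import random

# Toy causative (data-poisoning) attack on incentive learning: a utility
# fits customers' price responsiveness from reported load reductions, and
# an attacker biases those reports on the open channel.
random.seed(0)
TRUE_SENSITIVITY = 2.0  # kW of load reduction per $/kWh of incentive (toy value)

def reported_response(incentive, attack_bias=0.0):
    # What the utility receives; the attacker can add bias in transit.
    return TRUE_SENSITIVITY * incentive + random.gauss(0, 0.1) + attack_bias

def fit_sensitivity(xs, ys):
    # Least-squares slope through the origin: the learner's estimate.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

incentives = [0.1 * i for i in range(1, 51)]
clean_est = fit_sensitivity(incentives,
                            [reported_response(p) for p in incentives])
poisoned_est = fit_sensitivity(incentives,
                               [reported_response(p, attack_bias=1.5)
                                for p in incentives])
# The poisoned estimate overstates responsiveness, so incentives designed
# around it would under-deliver the expected load reduction.
```

The tampering surfaces listed in the abstract (incentives, event data, customer responses) all feed this same learning loop, which is why integrity of the communication channel matters.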
Survey of different Large Language Model Architectures: Trends, Benchmarks, and Challenges
2024
Large Language Models (LLMs) represent a class of deep learning models adept at understanding natural language and generating coherent responses to various prompts or queries. These models far exceed the complexity of conventional neural networks, often encompassing dozens of neural network layers and containing billions to trillions of parameters. They are typically trained on vast datasets, utilizing architectures based on transformer blocks. Present-day LLMs are multi-functional, capable of performing a range of tasks from text generation and language translation to question answering, as well as code generation and analysis. An advanced subset of these models, known as Multimodal Large Language Models (MLLMs), extends LLM capabilities to process and interpret multiple data modalities, including images, audio, and video. This enhancement empowers MLLMs with capabilities like video editing, image comprehension, and captioning for visual content. This survey provides a comprehensive overview of the recent advancements in LLMs. We begin by tracing the evolution of LLMs and subsequently delve into the advent and nuances of MLLMs. We analyze emerging state-of-the-art MLLMs, exploring their technical features, strengths, and limitations. Additionally, we present a comparative analysis of these models and discuss their challenges, potential limitations, and prospects for future development.
Evaluating LLMs for Hardware Design and Test
by Karri, Ramesh; Pearce, Hammond; Blocklove, Jason
in Benchmarks; Hardware description languages; Large language models
2024
Large Language Models (LLMs) have demonstrated capabilities for producing code in Hardware Description Languages (HDLs). However, most of the focus remains on their abilities to write functional code, not test code. The hardware design process consists of both design and test, so eschewing validation and verification leaves considerable potential benefit unexplored, given that a design-and-test framework may allow for progress towards full automation of the digital design pipeline. In this work, we perform one of the first studies exploring how an LLM can both design and test hardware modules from provided specifications. Using a suite of 8 representative benchmarks, we examined the capabilities and limitations of state-of-the-art conversational LLMs when producing Verilog for functional and verification purposes. We taped out the benchmarks on a Skywater 130 nm shuttle and received the functional chip.
Grounding LLMs For Robot Task Planning Using Closed-loop State Feedback
2025
Planning algorithms decompose complex problems into intermediate steps that can be sequentially executed by robots to complete tasks. Recent works have employed Large Language Models (LLMs) for task planning, using natural language to generate robot policies in both simulation and real-world environments. LLMs like GPT-4 have shown promising results in generalizing to unseen tasks, but their applicability is limited due to hallucinations caused by insufficient grounding in the robot environment. The robustness of LLMs in task planning can be enhanced with environmental state information and feedback. In this paper, we introduce a novel approach to task planning that utilizes two separate LLMs for high-level planning and low-level control, improving task-related success rates and goal condition recall. Our algorithm, BrainBody-LLM, draws inspiration from the human neural system, emulating its brain-body architecture by dividing planning across two LLMs in a structured, hierarchical manner. BrainBody-LLM implements a closed-loop feedback mechanism, enabling learning from simulator errors to resolve execution errors in complex settings. We demonstrate the successful application of BrainBody-LLM in the VirtualHome simulation environment, achieving a 29% improvement in task-oriented success rates over competitive baselines with the GPT-4 backend. Additionally, we evaluate our algorithm on seven complex tasks using a realistic physics simulator and the Franka Research 3 robotic arm, comparing it with various state-of-the-art LLMs. Our results show advancements in the reasoning capabilities of recent LLMs, which enable them to learn from raw simulator/controller errors to correct plans, making them highly effective in robotic task planning.
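The brain/body split with closed-loop feedback can be rendered as a tiny control loop. In the sketch below, both "LLMs" are hard-coded stubs and the simulator is a three-line toy world — invented stand-ins for the paper's GPT-4 planner and VirtualHome environment, shown only to make the feedback wiring concrete:

```python
# Toy rendition of a closed-loop brain/body planner split.
def brain_llm(goal, feedback=None):
    # High-level planner stub: the first plan omits a step; on receiving
    # simulator feedback, it emits a corrected plan.
    if feedback is None:
        return ["walk_to(fridge)", "grab(milk)"]
    return ["walk_to(fridge)", "open(fridge)", "grab(milk)"]

def body_llm(plan):
    # Low-level controller stub: here it forwards plan steps verbatim.
    return list(plan)

def simulator(actions):
    # Toy environment: grabbing from the fridge fails unless it was opened.
    opened = False
    for act in actions:
        if act == "open(fridge)":
            opened = True
        if act == "grab(milk)" and not opened:
            return "error: fridge_closed at grab(milk)"
    return "ok"

def closed_loop(goal, max_iters=3):
    feedback = None
    for _ in range(max_iters):
        actions = body_llm(brain_llm(goal, feedback))
        result = simulator(actions)
        if result == "ok":
            return actions
        feedback = result  # execution error fed back to the planner
    return None

final_plan = closed_loop("fetch milk")
```

The key design choice mirrored here is that the error string goes back to the high-level planner, not the controller, so replanning happens at the level where the missing step was introduced.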
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs
2025
Zero-Knowledge Proofs (ZKPs) are rapidly gaining importance in privacy-preserving and verifiable computing. ZKPs enable a proving party to prove the truth of a statement to a verifying party without revealing anything else. ZKPs have applications in blockchain technologies, verifiable machine learning, and electronic voting, but have yet to see widespread adoption due to the computational complexity of the proving process. Recent works have accelerated the key primitives of state-of-the-art ZKP protocols on GPU and ASIC. However, the protocols accelerated thus far face one of two challenges: they either require a trusted setup for each application, or they generate larger proof sizes with higher verification costs, limiting their applicability in scenarios with numerous verifiers or strict verification time constraints. This work presents an accelerator, zkSpeed, for HyperPlonk, a state-of-the-art ZKP protocol that supports both a one-time, universal setup and small proof sizes for typical ZKP applications in publicly verifiable, consensus-based systems. We accelerate the entire protocol, including two major primitives: SumCheck and Multi-scalar Multiplications (MSMs). We develop a full-chip architecture using 366.46 mm² of area and 2 TB/s of bandwidth to accelerate the entire proof generation process, achieving geometric mean speedups of 801× over CPU baselines.
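One of the two primitives named in the abstract, multi-scalar multiplication, computes a sum of many scalar-point products and dominates proof-generation time. A toy version in the additive group of integers mod a prime (standing in for elliptic-curve points, so the structure is visible without curve arithmetic) shows the Pippenger-style bucket decomposition that accelerators parallelize; the window width and prime below are arbitrary choices for illustration:

```python
import random

# Toy MSM: compute sum(k_i * G_i) in the additive group of integers mod P.
# Real ZKP MSMs use elliptic-curve points; integers mod a prime stand in
# here so the bucket (Pippenger-style) structure stays visible.
P = 2**61 - 1  # arbitrary toy prime

def msm_naive(scalars, points):
    return sum(k * g for k, g in zip(scalars, points)) % P

def msm_buckets(scalars, points, c=4):
    # Process scalars c bits at a time, sharing bucket sums across terms.
    nbits = max(s.bit_length() for s in scalars)
    windows = (nbits + c - 1) // c
    acc = 0
    for w in reversed(range(windows)):
        buckets = [0] * (1 << c)
        for s, g in zip(scalars, points):
            idx = (s >> (w * c)) & ((1 << c) - 1)
            if idx:
                buckets[idx] = (buckets[idx] + g) % P
        # Running-sum trick computes sum(j * buckets[j]) with additions only.
        running = window_total = 0
        for b in reversed(buckets[1:]):
            running = (running + b) % P
            window_total = (window_total + running) % P
        acc = (acc * (1 << c) + window_total) % P  # shift by c bits, add window
    return acc

random.seed(7)
scalars = [random.randrange(1, 2**32) for _ in range(64)]
points = [random.randrange(P) for _ in range(64)]
```

The bucket loop is embarrassingly parallel across windows and buckets, which is what makes MSM amenable to the kind of wide, high-bandwidth datapath the accelerator dedicates to it.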