Catalogue Search | MBRL
Explore the vast range of titles available.
116 result(s) for "Lam, Herman"
Analytical method validation and instrument performance verification
by Lam, Herman; Zhang, Xue-Ming; Chan, Chung Chow
in Analysis; Chemistry; Chemistry, Pharmaceutical -- instrumentation
2004
Validation describes the procedures used to analyze pharmaceutical products so that the data generated comply with the requirements of regulatory bodies in the US, Canada, Europe, and Japan. Calibration of instruments describes the process of fixing, checking, or correcting the graduations of instruments so that they comply with those regulatory bodies' requirements. This book provides a thorough explanation of both the fundamental and practical aspects of biopharmaceutical and bioanalytical method validation. It teaches the proper procedures for using the tools and analysis methods in a regulated laboratory setting. Readers will learn the appropriate procedures for calibrating laboratory instrumentation and validating analytical methods. These procedures must be executed properly in all regulated laboratories, including pharmaceutical and biopharmaceutical laboratories, clinical testing laboratories (hospitals, medical offices), and food and cosmetic testing laboratories.
Accelerate Scientific Deep Learning Models on Heterogeneous Computing Platform with FPGA
by Patel, Bhavesh; Ojika, David; Kurth, Thorsten
in Accelerators; Artificial neural networks; Central processing units
2020
AI and deep learning are experiencing explosive growth in almost every domain involving analysis of big data. Deep learning using Deep Neural Networks (DNNs) has shown great promise for such scientific data analysis applications. However, traditional CPU-based sequential computing without special instructions can no longer meet the requirements of mission-critical applications, which are compute-intensive and require low latency and high throughput. Heterogeneous computing (HGC), with CPUs integrated with GPUs, FPGAs, and other science-targeted accelerators, offers unique capabilities to accelerate DNNs. Collaborating researchers at SHREC at the University of Florida, CERN Openlab, NERSC at Lawrence Berkeley National Lab, Dell EMC, and Intel are studying the application of HGC to scientific problems using DNN models. This paper focuses on the use of FPGAs to accelerate the inferencing stage of the HGC workflow. We present case studies and results in inferencing state-of-the-art DNN models for scientific data analysis, using the Intel distribution of OpenVINO running on an Intel Programmable Acceleration Card (PAC) equipped with an Arria 10 GX FPGA. Using the Intel Deep Learning Acceleration (DLA) development suite to optimize existing FPGA primitives and develop new ones, we were able to accelerate the scientific DNN models under study with speedups from 2.46x to 9.59x for a single Arria 10 FPGA against a single core (single thread) of a server-class Skylake CPU.
Journal Article
FaaM: FPGA-as-a-Microservice - A Case Study for Data Compression
2019
Field-programmable gate arrays (FPGAs) have largely been used in communication and high-performance computing, and given the recent advances in big data and emerging trends in cloud computing (e.g., serverless [18]), FPGAs are increasingly being introduced into these domains (e.g., Microsoft's datacenters [6] and Amazon Web Services [10]). To address these domains' processing needs, recent research has focused on using FPGAs to accelerate workloads, ranging from analytics and machine learning to databases and network function virtualization. In this paper, we present an ongoing effort to realize a high-performance FPGA-as-a-microservice (FaaM) architecture for the cloud. We discuss some of the technical challenges and propose several solutions for efficiently integrating FPGAs into virtualized environments. Our case study, deploying multithreaded, multi-user compression as a microservice using the FaaM architecture, indicates that microservices-based FPGA acceleration can sustain high performance compared to a straightforward implementation, with minimal to no communication overhead despite the hardware abstraction.
Journal Article
Core-Level Modeling and Frequency Prediction for DSP Applications on FPGAs
2015
Field-programmable gate arrays (FPGAs) provide a promising technology that can improve performance of many high-performance computing and embedded applications. However, unlike software design tools, the relatively immature state of FPGA tools significantly limits productivity and consequently prevents widespread adoption of the technology. For example, the lengthy design-translate-execute (DTE) process often must be iterated to meet the application requirements. Previous works have enabled model-based, design-space exploration to reduce DTE iterations but are limited by a lack of accurate model-based prediction of key design parameters, the most important of which is clock frequency. In this paper, we present a core-level modeling and design (CMD) methodology that enables modeling of FPGA applications at an abstract level and yet produces accurate predictions of parameters such as clock frequency, resource utilization (i.e., area), and latency. We evaluate CMD’s prediction methods using several high-performance DSP applications on various families of FPGAs and show an average clock-frequency prediction error of 3.6%, with a worst-case error of 20.4%, compared to the best of existing high-level prediction methods, 13.9% average error with 48.2% worst-case error. We also demonstrate how such prediction enables accurate design-space exploration without coding in a hardware-description language (HDL), significantly reducing the total design time.
Journal Article
Practical Approaches to Method Validation and Essential Instrument Qualification
2008,2010
Practical approaches to ensure that analytical methods and instruments meet GMP standards and requirements. Complementing the authors' first book, Analytical Method Validation and Instrument Performance Verification, this new volume provides coverage of more advanced topics, focusing on additional and supplemental methods, instruments, and electronic systems used in pharmaceutical, biopharmaceutical, and clinical testing.
A real-time, power-efficient architecture for mean-shift image segmentation
2018
Image segmentation is essential to image processing because it provides a solution to the task of separating the objects in an image from the background and from each other, which is an important step in object recognition, tracking, and other high-level image-processing applications. By partitioning the input image into smaller regions, segmentation performs the balancing act of extracting the main areas of interest (objects and important features) that further help to interpret the image, while remaining immune to irrelevant noise and less important background scenes. Image-segmentation applications branch off into a plethora of domains, from decision-making applications in computer vision to medical imaging and quality control, to name just a few. The mean-shift algorithm provides a unique unsupervised clustering solution to image segmentation, and it has an established record of good performance for a wide variety of input images. However, mean-shift segmentation exhibits an unfavorable computational complexity of O(kN²), where N represents the number of pixels and k the number of iterations. As a result of this complexity, unsupervised image segmentation has had limited impact in autonomous applications, where a low-power, real-time solution is required. We propose a novel hardware architecture that exploits the customizable computing power of FPGAs and reduces the execution time by clustering pixels in parallel while meeting the low-power demands of embedded applications. The architecture performance is compared with existing CPU and GPU implementations to demonstrate its advantages in terms of both execution time and energy.
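The quadratic cost comes from each point's shift being computed against every other point on every iteration. A minimal sketch of naive mean-shift illustrates this (the feature representation, bandwidth, and function name are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, max_iter=20, tol=1e-3):
    """Naive mean-shift: each point climbs to the mean of its
    neighbours within `bandwidth`. Every iteration compares each
    of the N points against all N points, giving the O(kN^2) cost."""
    shifted = points.astype(float).copy()
    for _ in range(max_iter):                          # k iterations
        moved = 0.0
        for i, p in enumerate(shifted):                # N points ...
            dist = np.linalg.norm(points - p, axis=1)  # ... vs all N points
            neighbours = points[dist < bandwidth]
            new_p = neighbours.mean(axis=0)
            moved = max(moved, np.linalg.norm(new_p - p))
            shifted[i] = new_p
        if moved < tol:                                # converged early
            break
    return shifted
```

Because each of the N per-point updates is independent within an iteration, the inner loop is exactly the work the proposed architecture parallelizes across FPGA clustering units.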
Journal Article
Toward a business process grid for utility computing
by Liang-Jie Zhang; Lam, H.; Haifei Li
in Application programming interface; Application software; Business
2004
Existing grid computing technologies take advantage of underused computing capacity to solve business problems and provide IT-level infrastructure to support business applications. A business grid's ultimate goal, however, is to apply the utility model of grid computing to business applications; that is, provide support services for charging users on a pay-per-use basis, much as a utility company charges for electricity. That way, the vendor takes the responsibility for application maintenance and upgrade. Thus, a business grid provides a virtualized infrastructure to support the transparent use and sharing of business functions on demand.
Journal Article
RISCBench: Benchmarking RISC-V Orchestration Efficiency in FPGA and FPGA-Like Computing Engines
2025
Heterogeneous systems increasingly rely on RISC-V cores as orchestration engines to manage data movement, synchronization, and scheduling across accelerators and reconfigurable fabrics. Conventional performance metrics, such as FLOPs, TOPS/W, or energy per operation, do not capture orchestration efficiency, even though it often dictates sustained system behavior. This gap is increasingly relevant as systems evolve toward tightly coupled heterogeneous fabrics and co-packaged accelerators, where control-plane behavior determines whether these platforms achieve their promised performance. We present RISCBench, a kernel benchmark suite and open methodology for quantifying orchestration efficiency. RISCBench introduces the Sustained Instantaneous Throughput (SIT) metric, which accumulates instantaneous throughput over near-aggregate execution intervals, capturing sustained efficiency beyond peak rates. The methodology is evaluated across representative platforms spanning soft and hard RISC-V orchestration engines, including FPGA-based prototyping and accelerator-class implementations. Results highlight synchronization- and data-residency-driven tradeoffs that keep realized throughput below peak performance, motivating SIT as a practical, platform-independent descriptor for evaluating orchestration efficiency in heterogeneous systems and AI inference applications.
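The abstract does not spell out the SIT formula; as a loose, hypothetical illustration of the peak-versus-sustained distinction it draws (the function name and sampling scheme are assumptions, not RISCBench's definition), a sustained figure can be built by averaging per-interval throughput samples instead of reporting only the single best interval:

```python
# Hypothetical sketch: average instantaneous throughput samples over the
# whole run, rather than quoting the peak rate of the best interval.
def sustained_throughput(ops_per_interval, interval_s=1.0):
    """Mean of per-interval throughput rates (ops/s) across the run."""
    rates = [ops / interval_s for ops in ops_per_interval]
    return sum(rates) / len(rates)

samples = [100, 400, 380, 90]        # ops completed in each 1 s interval
peak = max(samples)                  # peak rate: 400 ops/s
sit = sustained_throughput(samples)  # sustained average: 242.5 ops/s
```

The gap between the two numbers is the kind of control-plane stall behavior (synchronization waits, data-residency misses) that peak-rate metrics hide.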
Practical Approaches to Method Validation and Essential Instrument Verification
2011
Practical approaches to ensure that analytical methods and instruments meet GMP standards and requirements. Complementing the authors' first book, Analytical Method Validation and Instrument Performance Verification, this new volume provides coverage of more advanced topics, focusing on additional and supplemental methods, instruments, and electronic systems that are used in pharmaceutical, biopharmaceutical, and clinical testing. Readers will gain new and valuable insights that enable them to avoid common pitfalls in order to seamlessly conduct analytical method validation as well as instrument operation qualification and performance verification. Part 1, Method Validation, begins with an overview of the book's risk-based approach to phase-appropriate validation and instrument qualification; it then focuses on the strategies and requirements for early-phase drug development, including validation of specific techniques and functions such as process analytical technology, cleaning validation, and validation of laboratory information management systems. Part 2, Instrument Performance Verification, explores the underlying principles and techniques for verifying instrument performance; coverage includes analytical instruments that are increasingly important to the pharmaceutical industry, such as NIR spectrometers and particle size analyzers, and readers are offered a variety of alternative approaches for the successful verification of instrument performance based on the needs of their labs. At the end of each chapter, the authors examine important practical problems and share their solutions. All the methods covered in this book follow Good Analytical Practices (GAP) to ensure that reliable data are generated in compliance with current Good Manufacturing Practices (cGMP). Analysts, scientists, engineers, technologists, and technical managers should turn to this book to ensure that analytical methods and instruments are accurate and meet GMP standards and requirements.
Practical approaches to method validation and essential instrument performance verification
2010
"The objective of this book is to provide information, in the same practical, hands-on manner as the first book, Analytical Method Validation and Instrument Performance Verification, on important but more advanced topics. It will focus on additional and supplemental methods, instruments, and electronic systems that are used in pharmaceutical, biopharmaceutical, and clinical testing. These tests will generate reliable data that are in compliance with current Good Manufacturing Practices (cGMP) and will follow Good Analytical Practices"--Provided by publisher.