89 results for "Computer Storage Devices - history"
Disks back from the dead
Getting data off an ancient floppy disk or computer tape isn't easy, but it can be done with the help of clever software and hardware.
Ethical Implications of User Perceptions of Wearable Devices
Health wearable devices enhance quality of life, promote positive lifestyle changes, and save time and money on medical appointments. However, wearable devices store large amounts of personal information that may be accessed by third parties without user consent, raising ethical issues around privacy, security, and informed consent. This paper examines users' ethical perceptions of the use of wearable devices in the health sector. The impact was assessed through an online survey of patients and users, with a random division of female and male respondents. Results from this survey show that wearable device users are highly concerned about privacy issues and consider informed consent "very important" when sharing information with third parties. However, users do not appear to connect privacy issues with informed consent. Users also expressed the need for shorter privacy policies that are easier to read, a more understandable informed consent form that involves regulatory authorities, and legal consequences for the violation or misuse of health information provided to wearable devices. The survey results inform an ethical framework intended to support the ethical development of wearable technology.
A Novel Design and Performance Assessment of a Blockchain-Powered Remote Patient Monitoring System
The healthcare industry has integrated Internet of Things (IoT) and blockchain technologies extensively, with remote patient monitoring (RPM) being one such domain. The rapid advancement of wearable IoT medical devices has enabled the real-time collection and processing of sensory data from patients. However, centralized IoT data storage poses challenges such as single point of failure (SPoF), data tampering, and privacy concerns. Blockchain offers a solution to these issues through its decentralized architecture. This study presents a novel design for a blockchain-based RPM system that enables a doctor to securely monitor a patient's vital signs and prescribe medication accordingly. The proposed system is implemented with Hyperledger Fabric, and its efficiency is evaluated with the Hyperledger Caliper tool. The results show low latency and high throughput, which is acceptable for real-time applications. During write operations, throughput peaked at 104.9 tps at an achieved send rate of 122.8 tps. Average latency peaked at 4.91 s at 175 tps, whereas during the query round latency remained consistent (0.01 s) across all send rates except 250 tps. In contrast to traditional RPM systems employing a client–server architecture, the proposed system leverages blockchain to enhance efficiency and security.
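The tamper-evidence that blockchain brings to centralized IoT storage can be illustrated with a minimal hash-chain sketch in plain Python. This is an assumption-laden toy, not the paper's Hyperledger Fabric implementation; all names and fields are illustrative:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a record's canonical JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class VitalsLedger:
    """Append-only hash chain of patient vital-sign readings (toy model)."""
    def __init__(self):
        self.chain = []

    def append(self, vitals: dict) -> None:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        entry = {"vitals": vitals, "prev": prev}
        entry["hash"] = record_hash({"vitals": vitals, "prev": prev})
        self.chain.append(entry)

    def verify(self) -> bool:
        """Recompute every link; any edited entry breaks the chain."""
        prev = "0" * 64
        for entry in self.chain:
            if entry["prev"] != prev:
                return False
            if entry["hash"] != record_hash({"vitals": entry["vitals"],
                                             "prev": entry["prev"]}):
                return False
            prev = entry["hash"]
        return True

ledger = VitalsLedger()
ledger.append({"patient": "p-001", "hr": 72, "spo2": 98})
ledger.append({"patient": "p-001", "hr": 75, "spo2": 97})
assert ledger.verify()
ledger.chain[0]["vitals"]["hr"] = 40  # tampering with a stored reading
assert not ledger.verify()            # is now detectable
```

A real deployment replaces this single list with replicated, consensus-ordered ledgers, which is what removes the single point of failure the abstract mentions.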
Enhancing Workplace Safety through Personalized Environmental Risk Assessment: An AI-Driven Approach in Industry 5.0
This paper describes an integrated monitoring system designed for individualized environmental risk assessment and management in the workplace. The system incorporates monitoring devices that measure dust, noise, ultraviolet radiation, illuminance, temperature, humidity, and flammable gases. Comprising these devices, a server-based web application for employers, and a mobile application for workers, the system integrates the registration of workers' health histories, such as common diseases and symptoms related to the monitored agents, with a web-based recommendation system. The recommendation application uses classifiers to make a risk/no-risk decision per sensor and crosses this information with fixed rules to define recommendations. By analyzing health information through fixed rules and exposure data through machine learning algorithms, the system generates actionable alerts that help companies improve decision-making about professional activities and long-term safety planning. As the system must handle sensitive data, data privacy is addressed in both communication and data storage. The study provides test results evaluating the performance of different machine learning models in building an effective recommendation system. Since no public datasets containing all the sensor data needed to train the models could be found, a data generator had to be built for this work. By proposing an approach that focuses on individualized environmental risk assessment and management, considering workers' health histories, this work is expected to contribute to enhancing occupational safety through computational technologies in the Industry 5.0 approach.
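The classifier-plus-fixed-rules step the abstract describes can be sketched as follows. This is a hypothetical illustration: the thresholds, rule table, and health conditions are invented stand-ins, and the real system uses trained ML classifiers rather than simple threshold checks:

```python
# Illustrative per-agent risk thresholds (assumed values, not from the paper).
RISK_THRESHOLDS = {"dust_mg_m3": 5.0, "noise_db": 85.0, "uv_index": 6.0}

def classify(readings: dict) -> dict:
    """Stand-in for the per-sensor ML classifiers: risk/no-risk per agent."""
    return {agent: value >= RISK_THRESHOLDS[agent]
            for agent, value in readings.items()}

# Fixed rules crossing a sensor risk with the worker's health history.
# A condition of None means the rule applies regardless of history.
FIXED_RULES = [
    ("dust_mg_m3", "asthma", "Issue FFP3 respirator and limit exposure time."),
    ("noise_db", "hearing loss", "Mandate hearing protection; rotate tasks."),
    ("uv_index", None, "Schedule outdoor work outside peak UV hours."),
]

def recommend(readings: dict, history: set) -> list:
    """Cross per-sensor risk decisions with fixed rules to emit alerts."""
    risks = classify(readings)
    out = []
    for agent, condition, advice in FIXED_RULES:
        if risks.get(agent) and (condition is None or condition in history):
            out.append(advice)
    return out

alerts = recommend({"dust_mg_m3": 7.2, "noise_db": 80.0, "uv_index": 8.0},
                   history={"asthma"})
# dust and UV are at risk; the dust rule fires because of the asthma history
```

Keeping the rule table separate from the classifiers mirrors the paper's split between learned exposure models and fixed health-history rules.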
Punched-card systems and the early information explosion, 1880-1945
At a time when Internet use is closely tracked and social networking sites supply data for targeted advertising, Lars Heide presents the first academic study of the invention that fueled today’s information revolution: the punched card. Early punched cards helped to process the United States census in 1890. They soon proved useful in calculating invoices and issuing pay slips. As demand for more sophisticated systems and reading machines increased in both the United States and Europe, punched cards served ever-larger data-processing purposes. Insurance companies, public utilities, businesses, and governments all used them to keep detailed records of their customers, competitors, employees, citizens, and enemies. The United States used punched-card registers in the late 1930s to pay roughly 21 million Americans their Social Security pensions, Vichy France used similar technologies in an attempt to mobilize an army against the occupying German forces, and the Germans in 1941 developed several punched-card registers to make the war effort—and surveillance of minorities—more effective. Heide’s analysis of these three major punched-card systems, as well as the impact of the invention on Great Britain, illustrates how different cultures collected personal and financial data and how they adapted to new technologies. This comparative study will interest students and scholars from a wide range of disciplines, including the history of technology, computer science, business history, and management and organizational studies.
IBM TotalStorage Tape Selection and Differentiation Guide
This IBM Redbooks publication will help users select the appropriate tape solution for various backup scenarios found in open systems environments. This book is a tape product selection and differentiation guide designed to assist users in finding all the information needed to select the best tape solution for the designated backup environment. This guide describes the information gathering process and product selection criteria used to differentiate among the available IBM tape offerings, and it provides a basis for tape differentiation. It is not, however, intended as a tape system sizing guide; for that purpose, users should use the sizing tools provided by each product family. This guide focuses primarily on identifying backup environments for the IBM 358x Ultrium product family (LTO) and the environments for the IBM TotalStorage Enterprise Tape System 3590 and 3592. Single-user or departmental backup environments are also addressed by providing information on the entry-level tape product lines, such as 4 mm or 8 mm. Total backup solution offerings are supported by Tivoli Storage Manager, as well as other backup applications offered by various vendors. This edition of the book has been updated with information about the following: the IBM 3592-J1A tape drive; WORM and Economy cartridge support for the IBM 3592; new models of the IBM TotalStorage UltraScalable Tape Library 3584; and new models of the IBM TotalStorage Ultrium Tape 2U Autoloader 3581.
IBM TotalStorage DS6000 Series
This IBM Redbooks publication provides guidance on how to configure, monitor, and manage your IBM TotalStorage DS6000 to achieve optimum performance. We describe the DS6000 performance features and characteristics and how they can be exploited with the different server platforms that can attach to it. In subsequent chapters, we detail the specific performance recommendations and discussions that apply to each server environment, as well as to database and Copy Services environments. We also outline the various tools available for monitoring and measuring I/O performance in the different server environments, and for monitoring the performance of the entire DS6000 subsystem.
The Effect of Flashcache and Bcache on I/O Performance
Solid state drives (SSDs) provide significant improvements in random I/O performance over traditional rotating SATA and SAS drives. While the cost of SSDs has been steadily declining over the past few years, high density SSDs continue to remain prohibitively expensive when compared to traditional drives. Currently, 1 TB SSDs generally cost more than USD $1,000, while 1 TB SATA drives typically retail for under USD $100. With ever-increasing x86_64 server CPU core counts, and therefore job slot counts, local scratch space density and random I/O performance have become even more important for HEP/NP applications. Flashcache and Bcache are Linux kernel modules which implement caching of SATA/SAS hard drive data on SSDs, effectively allowing one to create hybrid SSD drives using software. In this paper, we discuss our experience with Flashcache and Bcache, and the effects of this software on local scratch storage performance.
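The hybrid-drive idea behind Flashcache and Bcache can be sketched with a toy block-cache simulator. This is purely illustrative of the caching effect (hot blocks served from SSD, misses falling back to HDD); the actual kernel modules are far more sophisticated, with write-back modes, sequential-bypass heuristics, and persistent metadata:

```python
from collections import OrderedDict

class SSDBlockCache:
    """Toy LRU cache of HDD block numbers, mimicking the effect of an
    SSD caching layer: hits are served from SSD, misses go to the HDD
    and the block is then cached (evicting the least recently used)."""
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block number -> cached on SSD
        self.hits = self.misses = 0

    def read(self, block: int) -> str:
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh LRU position
            self.hits += 1
            return "ssd"
        self.misses += 1                     # served by HDD, then cached
        self.blocks[block] = True
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return "hdd"

cache = SSDBlockCache(capacity_blocks=2)
sources = [cache.read(b) for b in (1, 2, 1, 3, 1)]
# blocks 1 and 2 miss cold, 1 hits, 3 misses and evicts 2, 1 hits again
```

The random-I/O win the paper measures comes from exactly this pattern: repeated reads of a working set small enough to stay resident on the SSD.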
La phase initiale de l’informatisation du programme Tuvaaluk (1975-1982) [The initial phase of computerizing the Tuvaaluk Program, 1975-1982]
From 1975 to 1982, under the leadership of Patrick Plumet, then a professor in the Department of Earth Sciences at the Université du Québec à Montréal, the Tuvaaluk Program was undertaken with a substantial grant from the Canada Council for the Arts. Its main aim was to achieve a better understanding of the prehistory of Arctic Quebec. Plumet sought to develop a largely computer-based methodology for archaeological analysis. This paper describes the main issues raised by such a methodology, keeping in mind that the computers of the time were still largely huge machines lacking the speed and memory capacity available today; they were far less efficient and used languages that were still under development. Furthermore, the Tuvaaluk Program was carried out before laptops, with their already more sophisticated software, came on the market in the 1980s.
Tree-based scheme for reducing shared cache miss rate leveraging regional, statistical and temporal similarities
Cache misses can have a major impact on the overall performance of many-core systems. A miss may cause extra traffic and delay because of coherency messages. This overhead is reduced in coarse-grain coherency protocols, where only shared misses require a coherency message. Conventional off-chip methods manage the shared miss rate by relying on reuse histories; however, the memory overhead that comes with reuse histories makes them impractical for on-chip multiprocessor systems. In this study, a new scheme is proposed to reduce the shared cache miss rate in multiprocessor systems-on-chip, benefiting from novel techniques for prefetching into L2 caches from off-chip memories or other remote on-chip L2 caches. The proposed scheme extends the previously proposed Virtual Tree Coherence (VTC) method to limit block-forwarding messages to true sharers within each region. Instead of relying on exact reuse histories, shared regions are searched for regional, temporal, and statistical similarities, which are exploited to determine the sharers that should receive the forwarded blocks. The proposed method has been evaluated with Splash-2 workloads. Simulation results indicate that it reduces the shared miss count by up to 75% and improves interconnect traffic by up to 47% compared with VTC.
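The core idea of tracking sharers per region rather than per block can be sketched as below. This is a heavily simplified, assumed model (fixed 4-block regions, a plain set of cores per region), not the paper's actual VTC extension or its similarity search:

```python
# Toy region-level sharer table: any core that recently touched a block in
# a region is predicted as a sharer of every block in that region, so the
# table stays small compared with exact per-block reuse histories.

REGION_BLOCKS = 4  # assumed region size (blocks per region)

def region_of(block: int) -> int:
    return block // REGION_BLOCKS

class RegionSharerTable:
    def __init__(self):
        self.sharers = {}  # region id -> set of core ids seen in that region

    def access(self, core: int, block: int) -> None:
        """Record that `core` touched `block`, at region granularity."""
        self.sharers.setdefault(region_of(block), set()).add(core)

    def forward_targets(self, requester: int, block: int) -> set:
        """Cores to which a fetched block would be forwarded: predicted
        sharers of the block's region, excluding the requester itself."""
        return self.sharers.get(region_of(block), set()) - {requester}

table = RegionSharerTable()
table.access(core=0, block=5)
table.access(core=1, block=6)   # same region as block 5
table.access(core=2, block=12)  # a different region
targets = table.forward_targets(requester=0, block=7)  # region of blocks 4-7
```

The trade-off this sketch exposes is the one the paper addresses: region granularity shrinks state but risks forwarding to cores that are not true sharers, which is why the proposed scheme filters candidates using regional, temporal, and statistical similarities.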