271 result(s) for "Automatic Data Processing - instrumentation"
Computing with networks of nonlinear mechanical oscillators
As it is getting increasingly difficult to achieve gains in the density and power efficiency of microelectronic computing devices, because lithographic techniques are reaching fundamental physical limits, new approaches are required to maximize the benefits of distributed sensors, micro-robots or smart materials. Biologically inspired devices, such as artificial neural networks, can process information with a high level of parallelism to efficiently solve difficult problems, even when implemented using conventional microelectronic technologies. We describe a mechanical device, which operates in a manner similar to artificial neural networks, that efficiently solves two difficult benchmark problems (computing the parity of a bit stream, and classifying spoken words). The device consists of a network of masses coupled by linear springs and attached to a substrate by nonlinear springs, thus forming a network of anharmonic oscillators. As the masses can directly couple to forces applied on the device, this approach combines sensing and computing functions in a single power-efficient device with compact dimensions.
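The dynamics the abstract describes can be sketched numerically: a chain of masses joined by linear coupling springs, each anchored to the substrate by a cubic (anharmonic) spring. All parameter values below are illustrative assumptions, not the paper's; the readout/training stage is omitted.

```python
# Minimal sketch (assumed parameters): a chain of masses with linear
# neighbour coupling and a cubic substrate spring, integrated with
# semi-implicit Euler. An input force drives the first mass.

N = 8                 # number of masses
k_c = 1.0             # linear coupling-spring constant
k_1, k_3 = 1.0, 0.5   # linear and cubic substrate-spring constants
gamma = 0.1           # damping coefficient
dt = 0.01             # time step

x = [0.0] * N         # displacements
v = [0.0] * N         # velocities

def step(force_in):
    a = []
    for i in range(N):
        # coupling to neighbours (fixed boundary at both ends)
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < N - 1 else 0.0
        f = k_c * (left + right - 2 * x[i])
        # anharmonic substrate spring plus damping
        f += -k_1 * x[i] - k_3 * x[i] ** 3 - gamma * v[i]
        if i == 0:
            f += force_in
        a.append(f)
    for i in range(N):
        v[i] += a[i] * dt
        x[i] += v[i] * dt

for _ in range(1000):
    step(0.5)   # constant drive as a stand-in for the input signal
```

In a reservoir-computing style setup, the displacements of several masses would then be linearly combined and the combination weights trained on the benchmark task.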
Technical note: Validation of a system for monitoring rumination in dairy cows
Increased rumination in dairy cattle has been associated with increased saliva production and improved rumen health. Most estimates of rumination are based on direct visual observations. Recently, an electronic system was developed that allows for automated monitoring of rumination in cattle. The objective was to validate the data generated by this electronic rumination monitoring system (Hi-Tag, SCR Engineers Ltd., Netanya, Israel). Assessments of 2 independent observers were highly correlated (r=0.99, n=23), indicating that direct human observations were suitable as the reference method. Measures from the Hi-Tag electronic system were validated by comparing values with those from a human observer for fifty-one 2-h observation periods from 27 Holstein cows. Rumination times (35.1±3.2 min) from the electronic system were highly correlated with those from direct observation (r=0.93, R2=0.87, n=51), indicating that the electronic system was an accurate tool for monitoring this behavior in dairy cows.
Impact of accelerometer data processing decisions on the sample size, wear time and physical activity level of a large cohort study
Background Accelerometers objectively assess physical activity (PA) and are currently used in several large-scale epidemiological studies, but there is no consensus for processing the data. This study compared the impact of wear-time assessment methods and using either vertical (V)-axis or vector magnitude (VM) cut-points on accelerometer output. Methods Participants (7,650 women, mean age 71.4 y) were mailed an accelerometer (ActiGraph GT3X+), instructed to wear it for 7 days, record dates and times the monitor was worn on a log, and return the monitor and log via mail. Data were processed using three wear-time methods (logs, Troiano or Choi algorithms) and V-axis or VM cut-points. Results Using algorithms alone resulted in "mail-days" incorrectly identified as "wear-days" (27-79% of subjects had >7 days of valid data). Using only dates from the log and the Choi algorithm yielded: 1) larger samples with valid data than using log dates and times, 2) similar wear-times as using log dates and times, 3) more wear-time (V, 48.1 min more; VM, 29.5 min more) than only log dates and Troiano algorithm. Wear-time algorithm impacted sedentary time (~30-60 min lower for Troiano vs. Choi) but not moderate-to-vigorous (MV) PA time. Using V-axis cut-points yielded ~60 min more sedentary time and ~10 min less MVPA time than using VM cut-points. Conclusions Combining log-dates and the Choi algorithm was optimal, minimizing missing data and researcher burden. Estimates of time in physical activity and sedentary behavior are not directly comparable between V-axis and VM cut-points. These findings will inform consensus development for accelerometer data processing in ongoing epidemiologic studies.
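The core idea behind wear-time algorithms can be illustrated in a few lines: flag sufficiently long runs of zero activity counts as non-wear. This is a deliberate simplification; the actual Troiano and Choi algorithms add spike tolerances and upstream/downstream window checks.

```python
# Simplified illustration of non-wear detection: any run of zero
# counts lasting at least `window` minutes is classified as non-wear.

def nonwear_minutes(counts, window=60):
    nonwear = 0
    run = 0
    for c in counts + [1]:        # sentinel flushes the final run
        if c == 0:
            run += 1
        else:
            if run >= window:
                nonwear += run
            run = 0
    return nonwear

# 90 min of zeros (non-wear), activity, then a 30-min zero run that is
# too short to count as non-wear (e.g., quiet sitting).
day = [0] * 90 + [120, 300, 80] * 200 + [0] * 30
print(nonwear_minutes(day))   # 90
```

The study's finding that algorithms alone misclassify "mail-days" follows directly: a monitor in a mailing envelope produces exactly the long zero-runs this kind of rule treats as ordinary non-wear within a wear-day.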
Design and Development of a Medical Big Data Processing System Based on Hadoop
Secondary use of medical big data is increasingly popular in healthcare services and clinical research. Understanding the logic behind medical big data demonstrates tendencies in hospital information technology and shows great significance for hospital information systems that are designing and expanding services. Big data has four characteristics – Volume, Variety, Velocity and Value (the 4 Vs) – that make traditional standalone systems incapable of processing these data. Apache Hadoop MapReduce is a promising software framework for developing applications that process vast amounts of data in parallel with large clusters of commodity hardware in a reliable, fault-tolerant manner. With the Hadoop framework and MapReduce application program interface (API), we can more easily develop our own MapReduce applications to run on a Hadoop framework that can scale up from a single node to thousands of machines. This paper investigates a practical case of a Hadoop-based medical big data processing system. We developed this system to intelligently process medical big data and uncover some features of hospital information system user behaviors. This paper studies user behaviors as reflected in the various data produced by different hospital information systems during daily work. In this paper, we also built a five-node Hadoop cluster to execute distributed MapReduce algorithms. Our distributed algorithms show promise in facilitating efficient data processing with medical big data in healthcare services and clinical research compared with single nodes. Additionally, with medical big data analytics, we can design our hospital information systems to be much more intelligent and easier to use by making personalized recommendations.
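The MapReduce pattern the paper builds on can be simulated in-process: map log records to key–value pairs, shuffle by key, then reduce each group. The field names below are hypothetical stand-ins for hospital-system log fields, not the paper's schema.

```python
# Minimal in-process sketch of MapReduce: count actions per user from
# (hypothetical) hospital information system logs.
from collections import defaultdict

logs = [
    {"user": "alice", "action": "query"},
    {"user": "bob",   "action": "update"},
    {"user": "alice", "action": "query"},
]

def mapper(record):
    yield record["user"], 1          # emit (key, value) pairs

def reducer(key, values):
    return key, sum(values)          # combine all values for one key

# shuffle phase: group intermediate pairs by key
groups = defaultdict(list)
for record in logs:
    for k, v in mapper(record):
        groups[k].append(v)

counts = dict(reducer(k, vs) for k, vs in groups.items())
print(counts)   # {'alice': 2, 'bob': 1}
```

On an actual Hadoop cluster the mapper and reducer run in parallel across nodes and the framework performs the shuffle, which is what lets the same program scale from one node to thousands.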
Technical note: Quantifying and characterizing behavior in dairy calves using the IceTag automatic recording device
The objectives of the current study were 1) to validate the IceTag (http://www.icerobotics.com) automatic recording device for measuring lying, standing, and moving behavior in dairy calves, and 2) to improve the information yield from this device by applying a filtering procedure allowing for the detection of lying versus upright. The IceTag device provides measures of intensity (I) of lying, standing, and activity measured as percent lying, percent standing, and percent active, but does not directly measure lying, standing, and moving behavior because body movements occurring while lying (e.g., shifts in lying position) and while upright (e.g., grooming) are recorded as activity. Therefore, the following 3-step procedure was applied. First, thresholds for I were determined by choosing the cutoff that maximized the sum of sensitivity (Se) and specificity (Sp). Second, a lying period criterion (LPC) was established empirically, and IceTag data were filtered according to the LPC, providing information on the posture of the animal as lying versus being upright. Third, a new threshold of I was estimated for moving activity conditional on the animal being upright. IceTag recordings from 9 calves were compared with video recordings during a 12-h period and analyzed using 2 × 2 contingency tables. Data from the first 4 calves were used to determine an LPC, whereas the remaining 5 calves served for validation of the procedure. An optimal LPC was found by modeling the deviance between IceTag and video recordings as a function of the LPC and choosing the LPC threshold that minimized the deviance. The IceTag device was found to accurately measure the high-prevalence behaviors (lying and standing; Se+Sp >1.90) and less accurately measure the low-prevalence behavior (moving; Se+Sp=1.39). 
Application of the 3-step procedure using an optimal LPC estimate of 24.8 s resulted in an improved description of calf behavior, yielding a valid representation of the number and duration of lying and upright periods (Se+Sp=2.00) within a precision of 0 to 49 s (95% confidence interval). In group-housed dairy calves, valid measures of the number and duration of lying and upright periods may be obtained from the IceTag device when applying the presented filtering procedure to the data. Measures regarding locomotion, on the other hand, should be used with caution.
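The filtering idea behind the lying period criterion (LPC) can be sketched as two passes: threshold the lying-intensity signal, then discard candidate lying bouts shorter than the LPC. The intensity threshold below is an assumed value; the LPC of roughly 25 s follows the paper's 24.8 s estimate.

```python
# Sketch of LPC-style filtering (threshold value is an assumption):
# 1) classify each 1-s sample as lying if intensity >= threshold,
# 2) reclassify lying bouts shorter than the LPC as upright, so brief
#    sensor flickers are not counted as lying periods.

def filter_lying(intensity, threshold=50, lpc=25):
    raw = [i >= threshold for i in intensity]   # True = lying
    out = raw[:]
    start = 0
    while start < len(raw):
        end = start
        while end < len(raw) and raw[end] == raw[start]:
            end += 1                             # find end of this bout
        if raw[start] and end - start < lpc:     # short lying bout
            for j in range(start, end):
                out[j] = False                    # reclassify as upright
        start = end
    return out

# a 5-s intensity spike followed by a genuine 40-s lying bout
signal = [0] * 10 + [90] * 5 + [0] * 10 + [90] * 40
posture = filter_lying(signal)
print(sum(posture))   # 40: only the long bout survives filtering
```

The third step in the paper, estimating a separate movement threshold conditional on the animal being upright, would then operate only on the samples this filter leaves classified as upright.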
A Distance-Based Energy Aware Routing Algorithm for Wireless Sensor Networks
Energy efficiency and energy balancing are among the primary challenges for wireless sensor networks (WSNs), since the tiny sensor nodes cannot be easily recharged once they are deployed. Up to now, many energy-efficient routing algorithms or protocols have been proposed using techniques such as clustering, data aggregation and location tracking. However, many of them aim to minimize parameters like total energy consumption or latency, which causes hotspot nodes and a partitioned network due to the overuse of certain nodes. In this paper, a Distance-based Energy Aware Routing (DEAR) algorithm is proposed to ensure energy efficiency and energy balancing, based on theoretical analysis of different energy and traffic models. During the routing process, we consider individual distance as the primary parameter in order to adjust and equalize the energy consumption among involved sensors. The residual energy is also considered as a secondary factor. In this way, all the intermediate nodes consume their energy at a similar rate, which maximizes network lifetime. Simulation results show that the DEAR algorithm can reduce and balance the energy consumption for all sensor nodes, so network lifetime is greatly prolonged compared to other routing algorithms.
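A next-hop choice in this spirit can be sketched as a weighted score over neighbours: distance progress toward the sink as the primary term, residual energy as the secondary one. The scoring formula and weight below are illustrative assumptions, not DEAR's exact metric.

```python
# Sketch of distance-primary, energy-secondary next-hop selection.
# The linear score and alpha=0.7 weighting are assumptions for
# illustration, not the paper's exact formulation.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(node, neighbours, sink, alpha=0.7):
    # neighbours: {name: (position, residual_energy in [0, 1])}
    d0 = dist(node, sink)
    best, best_score = None, -1.0
    for name, (pos, energy) in neighbours.items():
        progress = (d0 - dist(pos, sink)) / d0   # fraction of distance gained
        score = alpha * progress + (1 - alpha) * energy
        if progress > 0 and score > best_score:  # only forward progress
            best, best_score = name, score
    return best

neighbours = {
    "a": ((4.0, 0.0), 0.9),   # much closer to sink, high energy
    "b": ((5.0, 3.0), 0.2),   # slightly closer, nearly depleted
    "c": ((9.0, 1.0), 1.0),   # farther from sink: never chosen
}
print(next_hop((8.0, 0.0), neighbours, sink=(0.0, 0.0)))   # a
```

Because the energy term lowers the score of depleted neighbours, traffic shifts away from overused nodes over time, which is the balancing behavior the abstract describes.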
LiftingWiSe: A Lifting-Based Efficient Data Processing Technique in Wireless Sensor Networks
Monitoring thousands of objects deployed over large, hard-to-reach areas is an important application of wireless sensor networks (WSNs). Such an application requires disseminating a large amount of data within the WSN. This data includes, but is not limited to, each object's location and the environmental conditions at that location. WSNs require efficient data processing and dissemination processes due to the limited storage, processing power, and energy available in the WSN nodes. The aim of this paper is to propose a data processing technique that can work under constrained storage, processing, and energy resource conditions. The proposed technique utilizes the lifting procedure in processing the disseminated data. Lifting is usually used in discrete wavelet transform (DWT) operations. The proposed technique is referred to as LiftingWiSe, which stands for Lifting-based efficient data processing technique for Wireless Sensor Networks. LiftingWiSe has been tested and compared to other relevant techniques from the literature. The test was conducted via a simulation of the monitored field and the deployed wireless sensor network nodes. The simulation results have been analyzed and discussed.
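The lifting procedure the abstract refers to can be shown with one level of a Haar-style transform: split samples into even/odd halves, predict the odds from the evens (yielding small detail coefficients), then update the evens into a coarse approximation. This is a generic lifting illustration, not LiftingWiSe's specific scheme.

```python
# One level of a lifting-scheme wavelet transform (Haar variant).
# Lifting is attractive on sensor nodes because it works in place with
# only additions, subtractions, and halving.

def haar_lifting(data):
    even = data[0::2]
    odd = data[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]   # undo update
    odd = [d + e for d, e in zip(detail, even)]          # undo predict
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

samples = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0]   # slowly varying readings
a, d = haar_lifting(samples)
print(a, d)   # [5.0, 11.0, 7.0] [2.0, 2.0, -2.0]
assert haar_inverse(a, d) == samples          # lifting is exactly invertible
```

For smooth sensor readings the detail coefficients stay small, so a node can transmit the compact approximation plus quantized details instead of the raw stream, reducing dissemination cost.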
ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers
A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
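A low-cost load-balancing strategy of the kind the abstract describes can be sketched with a classic greedy heuristic: assign each document, largest first, to the currently least-loaded worker. This is a standard technique chosen for illustration; paraBTM's exact strategy may differ.

```python
# Greedy longest-first load balancing: each document goes to the worker
# with the smallest current load, tracked with a min-heap.
import heapq

def balance(doc_sizes, n_workers):
    heap = [(0, w) for w in range(n_workers)]      # (load, worker id)
    assignment = {w: [] for w in range(n_workers)}
    for size in sorted(doc_sizes, reverse=True):   # largest first
        load, w = heapq.heappop(heap)              # least-loaded worker
        assignment[w].append(size)
        heapq.heappush(heap, (load + size, w))
    return assignment

docs = [70, 10, 40, 30, 50, 20]    # document sizes, e.g. in KB
plan = balance(docs, 3)
loads = sorted(sum(v) for v in plan.values())
print(loads)   # [70, 70, 80]
```

Balancing per-worker load matters here because a parallel text-mining job finishes only when its slowest worker does; an uneven split leaves most of the supercomputer's nodes idle while one node processes the largest documents.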
The integration of barcode scanning technology into Canadian public health immunization settings
• We compared barcode scanning of vaccines with manual electronic approaches.
• Barcode scanning led to more accurate immunization records.
• Barcode scanning was equally or more efficient than manual methods.
• Users had generally positive perceptions of this technology.
• More sensitive scanners and improved barcode readability may facilitate adoption.

As part of a series of feasibility studies following the development of Canadian vaccine barcode standards, we compared barcode scanning with manual methods for entering vaccine data into electronic client immunization records in public health settings. Two software vendors incorporated barcode scanning functionality into their systems so that Algoma Public Health (APH) in Ontario and four First Nations (FN) communities in Alberta could participate in our study. We compared the recording of client immunization data (vaccine name, lot number, expiry date) using barcode scanning of vaccine vials vs. pre-existing methods of entering vaccine information into the systems. We employed time and motion methodology to evaluate time required for data recording, record audits to assess data quality, and qualitative analysis of immunization staff interviews to gauge user perceptions. We conducted both studies between July and November 2012, with 628 (282 barcoded) vials processed for the APH study, and 749 (408 barcoded) vials for the study in FN communities. Barcode scanning led to significantly fewer immunization record errors than using drop-down menus (APH study: 0% vs. 1.7%; p=0.04) or typing in vaccine data (FN study: 0% vs. 5.6%; p<0.001). There was no significant difference in time to enter vaccine data between scanning and using drop-down menus (27.6s vs. 26.3s; p=0.39), but scanning was significantly faster than typing data into the record (30.3s vs. 41.3s; p<0.001).
Seventeen immunization nurses were interviewed; all noted improved record accuracy with scanning, but the majority felt that a more sensitive scanner was needed to reduce the occasional failures to read the 2D barcodes on some vaccines. Entering vaccine data into immunization records through barcode scanning led to improved data quality, and was generally well received. Further work is needed to improve barcode readability, particularly for unit-dose vials.
PhysioScripts: An extensible, open source platform for the processing of physiological data
A commonality across research involving physiological measures is the need to process large amounts of data. Such data processing typically involves the use of software tools to achieve several methodological steps, including identifying and correcting artifacts and defining epochs of time for the reduction and analysis of one or more physiological measures. This article describes a new tool to aid in the processing of physiological data: PhysioScripts. Key elements of PhysioScripts include a graphical interface to view and edit the results of processing steps, as well as a flexible framework to automate the creation of uniform or variable length epochs. The software comprises freely available scripts implemented in the R computing environment. Consequently, PhysioScripts can be readily modified to process other data types through the addition of new subroutines that can be plugged into the existing data processing framework. For illustrative purposes, we describe the steps involved in two data processing examples: (1) heart rate variability from the electrocardiogram and (2) respiratory rate derived from a chest strain gauge. The software, accompanying documentation, and an example data set are available online at israelchristie.com/software.
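The epoching step such tools automate can be sketched simply: cut a sampled signal into fixed-length epochs and reduce each one to a summary statistic. This is a generic illustration of the concept, not PhysioScripts' API; the function name and parameters are hypothetical.

```python
# Sketch of fixed-length epoch reduction: split a sampled signal into
# epochs of `epoch_s` seconds and reduce each epoch to its mean.

def epoch_means(samples, sample_rate_hz, epoch_s):
    n = int(sample_rate_hz * epoch_s)             # samples per epoch
    epochs = [samples[i:i + n]
              for i in range(0, len(samples) - n + 1, n)]
    return [sum(e) / len(e) for e in epochs]

# 30 s of a 1 Hz heart-rate trace, reduced to 10-s epoch means
hr = [60 + (i % 10) for i in range(30)]
print(epoch_means(hr, sample_rate_hz=1, epoch_s=10))   # [64.5, 64.5, 64.5]
```

In a full pipeline like the one the article describes, artifact detection and correction would precede this step, and variable-length epochs (e.g., aligned to experimental events) would replace the fixed window when the design requires it.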