Filters: Discipline; Is Peer Reviewed; Series Title; Reading Level; Year (From/To); More Filters (Content Type, Item Type, Is Full-Text Available, Subject, Publisher, Source, Donor, Language, Place of Publication, Contributors, Location).
1,106 results for "Forensic sciences Data processing."
Introduction to Data Analysis with R for Forensic Scientists
Statistical methods provide a logical, coherent framework in which data from experimental science can be analyzed. Minimizing theory and mathematics, this book focuses on the application and practice of statistics used in data analysis. The book includes a refresher on basic statistics and an introduction to R, techniques for the visual display of data through graphics, an overview of statistical hypothesis tests, a comprehensive guide to the use of the linear model, an introduction to extensions to the linear model for commonly encountered scenarios, and instruction on how to plan and design experiments.
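The linear-model workflow the book teaches in R translates directly to other languages; a minimal sketch in Python with ordinary least squares (the data below are simulated for illustration and do not come from the book):

```python
import numpy as np

# Simulated data: a response with a known linear dependence plus noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2, size=50)

# Fit y = b0 + b1*x by ordinary least squares via the design matrix.
X = np.column_stack([np.ones_like(x), x])
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
b0, b1 = coef
print(f"intercept={b0:.2f}, slope={b1:.2f}")
```

With 50 points and modest noise, the fitted intercept and slope land close to the true values of 2.0 and 0.5, which is the kind of sanity check the book's refresher chapters encourage.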
Android Forensics
Android Forensics: Investigation, Analysis, and Mobile Security for Google Android examines the Android mobile platform and shares techniques for the forensic acquisition and subsequent analysis of Android devices. Organized into seven chapters, the book looks at the history of the Android platform and its internationalization; it discusses the Android Open Source Project (AOSP) and the Android Market; it offers a brief tutorial on Linux and Android forensics; and it explains how to create an Ubuntu-based virtual machine (VM). The book also considers a wide array of Android-supported hardware and device types, the various Android releases, the Android software development kit (SDK), the Dalvik VM, key components of Android security, and other fundamental concepts related to Android forensics, such as the Android debug bridge and the USB debugging setting. In addition, it analyzes how data are stored on an Android device and describes strategies and specific utilities that a forensic analyst or security engineer can use to analyze an acquired Android device. Core Android developers and manufacturers, app developers, corporate security officers, and anyone with limited forensic experience will find this book extremely useful. Named a 2011 Best Digital Forensics Book by InfoSec Reviews, it covers the ability to forensically acquire Android devices using the techniques outlined in the book, detailed information about Android applications needed for forensic investigations, and important information about SQLite, a file-based structured data store relevant to both Android and many other platforms.
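Much of the application data the book discusses lives in SQLite databases; a minimal sketch of how an examiner might query one with Python's built-in sqlite3 module (the table name and schema below are invented for illustration, not taken from any real app):

```python
import sqlite3

# In-memory database standing in for an acquired app database.
# The "messages" schema is hypothetical.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, sender TEXT, body TEXT, ts INTEGER)"
)
con.executemany(
    "INSERT INTO messages (sender, body, ts) VALUES (?, ?, ?)",
    [("alice", "meet at 9", 1700000000), ("bob", "ok", 1700000100)],
)
con.commit()

# Typical triage query: list messages in chronological order.
rows = con.execute("SELECT sender, body FROM messages ORDER BY ts").fetchall()
for sender, body in rows:
    print(sender, body)
```

In practice the examiner opens a copy of the database file extracted from a forensic image rather than an in-memory store, but the query workflow is the same.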
Social Media Investigation for Law Enforcement
Social media is becoming an increasingly important, and controversial, investigative source for law enforcement. Social Media Investigation for Law Enforcement provides an overview of the current state of digital forensic investigation of Facebook and other social media networks and of the state of the law, touches on hacktivism, and discusses the implications for privacy and other controversial areas. The authors also point to future trends. "This project provides an overview of the current state of digital forensic investigation of Facebook and other social media networks and the state of the law, touches on hacktivism, and discusses the implications for privacy and other controversial areas. The authors also point to future trends." (The Journal, Fall/Winter 2013) "This book is very informative and it will serve well as a first point of contact for law enforcement personnel, especially those who have no understanding or a very basic understanding of social media, and it is indeed an asset for any law enforcement library." (Thane Pierre, Interfaces Magazine, The Chartered Society of Forensic Sciences) Joshua L. Brunty is Assistant Professor of Digital Forensics at Marshall University. He holds numerous certifications within the digital forensics discipline, including AccessData Certified Examiner (ACE), Computer Hacking Forensic Examiner (CHFI), Seized Computer Evidence Recovery Specialist (SCERS), Certified Malware Investigator, and Certified Steganography Examiner, and he is certified by the National Security Agency in Information Assessment Methodology (NSA-IAM). He is a member of the Institute of Computer Forensics Professionals (ICFP), the Mid-Atlantic chapter of the High Technology Crime Investigation Association (HTCIA), the Digital & Multimedia Sciences section of the American Academy of Forensic Sciences (AAFS), the West Virginia Cyber Crimes Task Force, and the West Virginia chapter of FBI INFRAGARD.
Katherine Helenek holds a Master of Forensic Science degree from Marshall University, specializing in Digital Forensics, Forensic Chemistry, and Crime Scene Investigation. She is an AccessData Certified Examiner (ACE) and a member of the Appalachian Institute of Digital Evidence (AIDE). She is now a Forensic Examiner with Digital Intelligence.
A bioavailable strontium isoscape for Western Europe: A machine learning approach
Strontium isotope ratios (87Sr/86Sr) are gaining considerable interest as a geolocation tool and are now widely applied in archaeology, ecology, and forensic research. However, their application for provenance requires the development of baseline models predicting surficial 87Sr/86Sr variations ("isoscapes"). A variety of empirically based and process-based models have been proposed to build terrestrial 87Sr/86Sr isoscapes but, in their current forms, those models are not mature enough to be integrated with the continuous-probability surface models used in geographic assignment. In this study, we aim to overcome those limitations and to predict 87Sr/86Sr variations across Western Europe by combining process-based models and a series of remote-sensing geospatial products into a regression framework. We find that random forest regression significantly outperforms other commonly used regression and interpolation methods, and efficiently predicts the multi-scale patterning of 87Sr/86Sr variations by accounting for geological, geomorphological, and atmospheric controls. Random forest regression also provides an easily interpretable and flexible framework to integrate the different types of environmental auxiliary variables required to model the multi-scale patterning of 87Sr/86Sr variability. The method is transferable to different scales and resolutions and can be applied to the large collection of geospatial data available at local and global levels. The isoscape generated in this study provides the most accurate 87Sr/86Sr predictions in bioavailable strontium for Western Europe (R2 = 0.58 and RMSE = 0.0023) to date, as well as a conservative estimate of spatial uncertainty obtained by applying quantile regression forest. We anticipate that the method presented in this study, combined with the growing numbers of bioavailable 87Sr/86Sr data and satellite geospatial products, will extend the applicability of the 87Sr/86Sr geo-profiling tool in provenance applications.
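The regression framework the authors describe can be illustrated with a toy random-forest fit on synthetic covariates; the variable names and the response model below are stand-ins invented for illustration, not data or predictors from the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 300
# Synthetic covariates standing in for geological/atmospheric predictors.
bedrock_age = rng.uniform(0, 500, n)
dust_flux = rng.uniform(0, 1, n)
X = np.column_stack([bedrock_age, dust_flux])
# Synthetic isotope-ratio-like response with a nonlinear term plus noise.
y = 0.705 + 1e-5 * bedrock_age + 0.002 * dust_flux**2 + rng.normal(0, 1e-4, n)

# Fit an ensemble of regression trees, as in a random-forest isoscape.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)
print("train R^2:", round(model.score(X, y), 3))
```

A real isoscape workflow would evaluate on held-out locations and add quantile regression forests for uncertainty; this sketch only shows the basic fit.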
Current applications of high-resolution mass spectrometry for the analysis of new psychoactive substances: a critical review
The proliferation of new psychoactive substances (NPS) in recent years has resulted in the development of numerous analytical methods for the detection and identification of known and unknown NPS derivatives. High-resolution mass spectrometry (HRMS) has been identified as the method of choice for broad screening of NPS in a wide range of analytical contexts because of its ability to measure accurate masses using data-independent acquisition (DIA) techniques. Additionally, it has shown promise for non-targeted screening strategies that have been developed in order to detect and identify novel analogues without the need for certified reference materials (CRMs) or comprehensive mass spectral libraries. This paper reviews the applications of HRMS for the analysis of NPS in forensic drug chemistry and analytical toxicology. It provides an overview of sample preparation procedures in addition to data acquisition, instrumental analysis, and data processing techniques. Furthermore, it surveys the current state of non-targeted screening strategies, with discussion of future directions and perspectives of this technique. Graphical abstract: "Missing the bullseye," a graphical representation of non-targeted screening (image courtesy of Christian Alonzo).
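Accurate-mass screening of the kind reviewed here typically matches a measured m/z against a theoretical value within a parts-per-million tolerance; a minimal sketch (the m/z values and the 5 ppm tolerance below are illustrative assumptions, not from the review):

```python
def mass_error_ppm(measured_mz: float, theoretical_mz: float) -> float:
    """Mass error in parts per million, as used in accurate-mass matching."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def matches(measured_mz: float, theoretical_mz: float, tol_ppm: float = 5.0) -> bool:
    """True if the measured m/z falls within the ppm tolerance window."""
    return abs(mass_error_ppm(measured_mz, theoretical_mz)) <= tol_ppm

# Illustrative values: a candidate ion measured at 337.2280
# against a hypothetical theoretical m/z of 337.2274.
err = mass_error_ppm(337.2280, 337.2274)
print(f"{err:.1f} ppm, match: {matches(337.2280, 337.2274)}")
```

In non-targeted workflows the same windowed comparison runs against every candidate formula in a database, which is why sub-5-ppm mass accuracy matters for narrowing down NPS analogues.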
Abuse of fentanyl: An emerging problem to face
• Fentanyl-related morbidity and mortality have increased alarmingly in recent years.
• Most fentanyl abuse involves illicitly produced drug mixed with heroin.
• The drug can also be obtained through the diversion of fentanyl-containing medicines.
• Fentanyl overdose rescue is based on rapid administration of naloxone.
Fentanyl is a potent synthetic opioid used as a narcotic analgesic supplement in general and regional anesthesia, as well as in the management of persistent, severe chronic pain. Alarming epidemiological and forensic medicine reports, accumulated mainly during the last two decades, point to a growing increase in the illicit use of fentanyl, mainly in North America and Europe. Toxicological data indicate that fentanyl use is inextricably linked with polydrug use. There are two main sources of fentanyl on the "recreational" drug market. The first, and most common, is illicitly manufactured fentanyl from clandestine sources. The drug is often mixed with heroin ("fake heroin") to increase its potency at little cost, or included in cocaine products. It can also be mixed into and sold as oxycodone-, hydrocodone-, or alprazolam-containing tablets. The other way to obtain fentanyl is through the diversion of fentanyl-containing medicines, especially transdermal patches (FTPs). Fentanyl extracted from an FTP can be administered intravenously, insufflated, or inhaled after volatilization. The drug can also be delivered by oral or transmucosal application of the whole patch, or by rectal insertion. The most common overdose symptoms are coma, lethargy, respiratory depression, and arrest. Although naloxone, an opioid receptor antagonist, is the standard drug for fentanyl overdose rescue, attempts to revive patients with naloxone can be unsuccessful due to the rapid onset of fentanyl's action. As the fentanyl problem is constantly growing, there is an urgent need for new, effective harm-reduction strategies and technologies, as well as overdose management.
Digital whole-slide image analysis for automated diatom test in forensic cases of drowning using a convolutional neural network algorithm
• Automated diatom identification in human tissues using a convolutional neural network.
• Designing a methodology competitive with forensic experts in diatom quantification.
• Applying a deep learning method to analyze digital whole-slide images.
Diatom examinations have been widely used to perform drowning diagnosis in forensic practice. However, current methods for recognizing diatoms, which use light or electron microscopy, are time-consuming and laborious and often result in false positive or false negative decisions. In this study, we demonstrated an artificial intelligence (AI)-based system to automatically identify diatoms in conjunction with a classical chemical digestion approach. By employing transfer learning and data augmentation methods, we trained convolutional neural network (CNN) models on thousands to tens of thousands of tiles from digital whole-slide images of diatom smears. The results showed that the trained model identified the regions containing diatoms in the tiles. In an independent test, where the slide samples were collected in forensic casework, the best CNN model demonstrated a performance competitive with that of five forensic pathologists with experience in diatom quantification. This pilot study paves the way for future intelligent diatom examinations; many efficient diatom extraction methods could be incorporated into our automated system.
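The tile-based pipeline described above begins by cutting the whole-slide image into fixed-size patches before classification; a minimal tiling sketch with NumPy (the array dimensions are arbitrary, and the CNN itself is omitted):

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int) -> np.ndarray:
    """Split an H x W x C image into non-overlapping tile x tile patches,
    discarding partial border tiles (a common simplification)."""
    h, w, c = img.shape
    rows, cols = h // tile, w // tile
    img = img[: rows * tile, : cols * tile]
    # Reshape so each tile becomes one entry along the first axis.
    patches = img.reshape(rows, tile, cols, tile, c).swapaxes(1, 2)
    return patches.reshape(rows * cols, tile, tile, c)

# A placeholder 520 x 700 RGB slide region; 256-pixel tiles give a 2 x 2 grid.
slide = np.zeros((520, 700, 3), dtype=np.uint8)
patches = tile_image(slide, 256)
print(patches.shape)  # (4, 256, 256, 3)
```

Each patch would then be fed to the trained classifier, and tiles flagged as diatom-positive mapped back to their slide coordinates for expert review.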
Dimensionality reduction and visualisation of hyperspectral ink data using t-SNE
• The t-SNE algorithm is introduced into forensic ink data analysis.
• A hyperspectral database of inks from 60 pens of different manufacturers, types, and colours was created.
• The clustering quality of t-SNE was compared against PCA on hyperspectral ink data.
• Clustering quality was compared using four different clustering quality indexes.
• t-SNE provided better visualization and clustering scores.
Ink analysis is an important tool in forensic science and document analysis. Hyperspectral imaging (HSI) captures a large number of narrowband images across the electromagnetic spectrum. HSI is one of the non-invasive tools used in forensic document analysis, especially for ink analysis. The substantial information from multiple bands in HSI images enables non-destructive diagnosis and identification of forensic evidence in questioned documents. However, the presence of numerous bands in HSI data makes processing and storage computationally challenging. Therefore, dimensionality reduction and visualization play a vital role in HSI data processing, enabling efficient processing and effortless understanding of the data. In this paper, an advanced approach known as the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm is introduced into the ink analysis problem. t-SNE extracts the non-linear similarity features between spectra to scale them into a lower dimension. This capability of the t-SNE algorithm for ink spectral data is verified visually and quantitatively: the two-dimensional data generated by t-SNE showed better visualization and a greater improvement in clustering quality in comparison with Principal Component Analysis (PCA).
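The PCA-versus-t-SNE comparison in this paper can be sketched on synthetic "spectra"; the cluster structure, band count, and perplexity below are illustrative choices standing in for the real hyperspectral ink signatures:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
# 60 synthetic "pens": each a 100-band spectrum drawn from one of 3 ink clusters.
centers = rng.normal(0, 1, size=(3, 100))
spectra = np.vstack([c + rng.normal(0, 0.1, size=(20, 100)) for c in centers])

# Linear projection (PCA) versus non-linear embedding (t-SNE), both to 2-D.
pca_2d = PCA(n_components=2).fit_transform(spectra)
tsne_2d = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(spectra)
print(pca_2d.shape, tsne_2d.shape)
```

The paper's contribution is the quantitative step after this: scoring both 2-D embeddings with clustering quality indexes rather than judging the scatter plots by eye.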
Age estimation in adults by dental imaging assessment: a systematic review
• First systematic review of the most recent methods for age estimation in adults.
• Comparison of the accuracy of three different methods for age estimation in adults.
• Data reported from all papers describing the use of these methods around the world.
The need to rely on proper, simple, and accurate methods for age estimation in adults is still a worldwide issue. It has been well documented that teeth are more resistant than bones to taphonomic processes, and that methods for age estimation based on dental imaging assessment are not only less invasive than those based on osseous analysis, but have also shown similar or superior accuracy in adults. This review summarises the results of some of the most recently cited methods for dental age estimation in adults, based on odontometric dental imaging analysis, to establish which is most accurate, accessible, and simple. A literature search of several databases, covering January 1995 to July 2016, was conducted with previously defined inclusion criteria. Based on the findings of this review, pulp/tooth area ratio calculation from upper canines and other single-rooted teeth (lower premolars, upper central incisors), together with a statistical analysis that accounts for the non-linear production of secondary dentine with age, can be suggested as a reliable, easy, fast, and predictable method for dental age estimation in adults. The second recommended method is the pulp/tooth width–length ratio calculation. The use of population-specific formulae is recommended, but including data from individuals of different population groups in the same analysis is not discouraged. A minimum sample size of at least 120 participants is recommended to obtain more reliable results. Methods based on volume calculation are time-consuming and still need improvement.
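The recommended approach, regressing age on a pulp/tooth area ratio with a model that allows non-linear secondary-dentine deposition, can be sketched on simulated data; every number below (the ratio model, noise level, coefficients) is invented for illustration, with only the sample size of 120 taken from the review's recommendation:

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated sample of 120 adults, the minimum size the review recommends.
age = rng.uniform(20, 80, 120)
# Hypothetical pulp/tooth area ratio shrinking non-linearly with age, plus noise.
ratio = 0.45 - 0.004 * age + 1.5e-5 * age**2 + rng.normal(0, 0.01, 120)

# Fit a quadratic of age on ratio to capture the non-linear relationship.
coef = np.polyfit(ratio, age, deg=2)
predicted = np.polyval(coef, ratio)
rmse = float(np.sqrt(np.mean((predicted - age) ** 2)))
print(f"RMSE: {rmse:.1f} years")
```

Published pulp/tooth-ratio methods report their own coefficients and error figures per tooth and per population; this sketch only shows the shape of the fitting step.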