Catalogue Search | MBRL
Explore the vast range of titles available.
762 result(s) for "History Computer-assisted technology."
Pastplay : teaching and learning history with technology
\"In the field of history, the Web and other technologies have become important tools in research and teaching of the past. Yet the use of these tools is limited--many historians and history educators have resisted adopting them because they fail to see how digital tools supplement and even improve upon conventional tools (such as books). In Pastplay, a collection of essays by leading history and humanities researchers and teachers, editor Kevin Kee works to address these concerns head-on. How should we use technology? Playfully, Kee contends. Why? Because doing so helps us think about the past in new ways; through the act of creating technologies, our understanding of the past is re-imagined and developed. From the insights of numerous scholars and teachers, Pastplay argues that we should play with technology in history because doing so enables us to see the past in new ways by helping us understand how history is created; honoring the roots of research, teaching, and technology development; requiring us to model our thoughts; and then allowing us to build our own understanding.\"-- provided by publisher.
Artificial intelligence in healthcare: past, present and future
by Jiang, Yong; Dong, Yi; Wang, Yilong
in Algorithms; Artificial intelligence; Artificial Intelligence - history
2017
Artificial intelligence (AI) aims to mimic human cognitive functions. It is bringing a paradigm shift to healthcare, powered by the increasing availability of healthcare data and rapid progress in analytics techniques. We survey the current status of AI applications in healthcare and discuss its future. AI can be applied to various types of healthcare data (structured and unstructured). Popular AI techniques include machine learning methods for structured data, such as the classical support vector machine and neural network, and the modern deep learning, as well as natural language processing for unstructured data. Major disease areas that use AI tools include cancer, neurology and cardiology. We then review in more detail the AI applications in stroke, in the three major areas of early detection and diagnosis, treatment, and outcome prediction and prognosis evaluation. We conclude with a discussion of pioneer AI systems, such as IBM Watson, and the hurdles to real-life deployment of AI.
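As a rough illustration of the classical machine-learning-on-structured-data approach the survey mentions, the sketch below fits a support vector machine to synthetic tabular clinical features; the feature names, data, and outcome definition are assumptions for illustration only, not taken from the paper.

```python
# Minimal, illustrative sketch of classical machine learning on structured
# (tabular) clinical data, as described in the survey. The feature names and
# synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical structured features: age, systolic BP, fasting glucose, prior-event flag.
X = np.column_stack([
    rng.normal(65, 10, n),    # age (years)
    rng.normal(130, 15, n),   # systolic blood pressure (mmHg)
    rng.normal(6.0, 1.5, n),  # fasting glucose (mmol/L)
    rng.integers(0, 2, n),    # prior event (0/1)
])
# Synthetic outcome loosely correlated with the features.
risk = 0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.4 * X[:, 2] + 1.5 * X[:, 3]
y = (risk + rng.normal(0, 1.5, n) > np.median(risk)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(probability=True))
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```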
Journal Article
Magnetic resonance linear accelerator technology and adaptive radiation therapy: An overview for clinicians
2022
Radiation therapy (RT) continues to play an important role in the treatment of cancer. Adaptive RT (ART) is a novel method through which RT treatments are evolving. With the ART approach, computed tomography or magnetic resonance (MR) images are obtained as part of the treatment delivery process. This enables the adaptation of the irradiated volume to account for changes in organ and/or tumor position, movement, size, or shape that may occur over the course of treatment. The advantages and challenges of ART may be somewhat abstract to oncologists and clinicians outside of the specialty of radiation oncology. ART is positioned to affect many different types of cancer. There is a wide spectrum of hypothesized benefits, from small toxicity improvements to meaningful gains in overall survival. The use and application of this novel technology should be understood by the oncologic community at large, such that it can be appropriately contextualized within the landscape of cancer therapies. Likewise, the need to test these advances is pressing. MR-guided ART (MRgART) is an emerging, extended modality of ART that expands upon and further advances the capabilities of ART. MRgART presents unique opportunities to iteratively improve adaptive image guidance. However, although the MRgART adaptive process advances ART to previously unattained levels, it can be more expensive, time-consuming, and complex. In this review, the authors present an overview for clinicians describing the process of ART and specifically MRgART.
Journal Article
Thermal ablation of tumours: biological mechanisms and advances in therapy
2014
Minimally invasive thermal ablation of tumours has become common since the advent of modern imaging. From the ablation of small, unresectable tumours to experimental therapies, percutaneous radiofrequency ablation, microwave ablation, cryoablation and irreversible electroporation have an increasing role in the treatment of solid neoplasms. This Opinion article examines the mechanisms of tumour cell death that are induced by the most common thermoablative techniques and discusses the rapidly developing areas of research in the field, including combinatorial ablation and immunotherapy, synergy with conventional chemotherapy and radiation, and the development of a new ablation modality in irreversible electroporation.
Journal Article
FSL
by Behrens, Timothy E.J.; Smith, Stephen M.; Beckmann, Christian F.
in Algorithms; Brain - anatomy & histology; Brain - physiology
2012
FSL (the FMRIB Software Library) is a comprehensive library of analysis tools for functional, structural and diffusion MRI brain imaging data, written mainly by members of the Analysis Group, FMRIB, Oxford. For this NeuroImage special issue on “20 years of fMRI” we have been asked to write about the history, developments and current status of FSL. We also include some descriptions of parts of FSL that are not well covered in the existing literature. We hope that some of this content might be of interest to users of FSL, and also maybe to new research groups considering creating, releasing and supporting new software packages for brain image analysis.
Journal Article
FreeSurfer
2012
FreeSurfer is a suite of tools for the analysis of neuroimaging data that provides an array of algorithms to quantify the functional, connectional and structural properties of the human brain. It has evolved from a package primarily aimed at generating surface representations of the cerebral cortex into one that automatically creates models of most macroscopically visible structures in the human brain given any reasonable T1-weighted input image. It is freely available, runs on a wide variety of hardware and software platforms, and is open source.
Journal Article
Two-stage framework for optic disc localization and glaucoma classification in retinal fundus images using deep learning
by Bajwa, Muhammad Naseer; Neumeier, Wolfgang; Siddiqui, Shoaib Ahmed
in Algorithms; Annotations; Artificial neural networks
2019
Background
With the advancement of powerful image processing and machine learning techniques, Computer Aided Diagnosis has become ever more prevalent in all fields of medicine including ophthalmology. These methods continue to provide reliable and standardized large-scale screening of various image modalities to assist clinicians in identifying diseases. Since the optic disc is the most important part of the retinal fundus image for glaucoma detection, this paper proposes a two-stage framework that first detects and localizes the optic disc and then classifies it as healthy or glaucomatous.
Methods
The first stage is based on Regions with Convolutional Neural Network (RCNN) and is responsible for localizing and extracting the optic disc from a retinal fundus image, while the second stage uses a Deep Convolutional Neural Network to classify the extracted disc as healthy or glaucomatous. Unfortunately, none of the publicly available retinal fundus image datasets provides the bounding box ground truth required for disc localization. Therefore, in addition to the proposed solution, we also developed a rule-based semi-automatic ground truth generation method that provides the necessary annotations for training the RCNN-based model for automated disc localization.
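A minimal sketch of the two-stage idea described above, assuming a torchvision Faster R-CNN as the region-based detector and a ResNet-18 as the disc classifier; these specific models, the 224x224 crop size, and the fallback behavior are illustrative assumptions, not the authors' configuration.

```python
# Two-stage pipeline sketch: stage 1 localizes the optic disc with a
# region-based detector; stage 2 classifies the cropped disc as healthy or
# glaucomatous. Model choices here are assumptions for illustration.
import torch
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resized_crop

detector = fasterrcnn_resnet50_fpn(num_classes=2).eval()        # background + optic disc
classifier = torchvision.models.resnet18(num_classes=2).eval()  # healthy vs glaucomatous

def classify_fundus(image: torch.Tensor) -> torch.Tensor:
    """image: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        # Stage 1: localize the optic disc (take the highest-scoring box).
        det = detector([image])[0]
        if len(det["boxes"]) == 0:
            # Fallback: use the whole image if nothing is detected (untrained models).
            disc = resized_crop(image, 0, 0, image.shape[1], image.shape[2], [224, 224])
        else:
            x1, y1, x2, y2 = det["boxes"][det["scores"].argmax()].round().int().tolist()
            disc = resized_crop(image, y1, x1, max(y2 - y1, 1), max(x2 - x1, 1), [224, 224])
        # Stage 2: classify the extracted disc.
        logits = classifier(disc.unsqueeze(0))
    return logits.softmax(dim=1)  # [p(healthy), p(glaucomatous)], untrained here
```

In practice the two stages would be trained separately: the detector on the bounding-box annotations described above, and the classifier on the extracted disc crops.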
Results
The proposed method is evaluated on seven publicly available datasets for disc localization and on the ORIGA dataset, which is the largest publicly available dataset with healthy and glaucoma labels, for glaucoma classification. The results of automatic localization mark a new state of the art on six datasets, with accuracy reaching 100% on four of them. For glaucoma classification we achieved an Area Under the Receiver Operating Characteristic Curve of 0.874, which is a 2.7% relative improvement over the state-of-the-art results previously obtained for classification on the ORIGA dataset.
Conclusion
Once trained on carefully annotated data, Deep Learning-based methods for optic disc detection and localization are not only robust, accurate and fully automated but also eliminate the need for dataset-dependent heuristic algorithms. Our empirical evaluation of glaucoma classification on ORIGA reveals that reporting only Area Under the Curve, for datasets with class imbalance and without pre-defined train and test splits, does not portray a true picture of the classifier's performance and calls for additional performance metrics to substantiate the results.
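The conclusion's point about class imbalance can be illustrated with a small sketch that reports complementary metrics alongside AUC; the labels and scores below are synthetic and are not results from the ORIGA evaluation.

```python
# Illustrative sketch: on an imbalanced dataset, report sensitivity,
# specificity, average precision, and F1 alongside AUC. Data are synthetic.
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             confusion_matrix, f1_score)

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.25, 650)                               # ~1:3 class imbalance
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 650), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC:              ", roc_auc_score(y_true, y_score))
print("Average precision:", average_precision_score(y_true, y_score))
print("Sensitivity:      ", tp / (tp + fn))
print("Specificity:      ", tn / (tn + fp))
print("F1 score:         ", f1_score(y_true, y_pred))
```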
Journal Article
SPM: A history
2012
Karl Friston began the SPM project around 1991. The rest is history.
Journal Article
Brain templates and atlases
by Baillet, Sylvain; Collins, D. Louis; Evans, Alan C.
in Anatomy, Artistic - history; Atlases as Topic - history; Brain - anatomy & histology
2012
The core concept within the field of brain mapping is the use of a standardized, or “stereotaxic”, 3D coordinate frame for data analysis and reporting of findings from neuroimaging experiments. This simple construct allows brain researchers to combine data from many subjects such that group-averaged signals, be they structural or functional, can be detected above the background noise that would swamp subtle signals from any single subject. Where the signal is robust enough to be detected in individuals, it allows for the exploration of inter-individual variance in the location of that signal. From a larger perspective, it provides a powerful medium for comparison and/or combination of brain mapping findings from different imaging modalities and laboratories around the world. Finally, it provides a framework for the creation of large-scale neuroimaging databases or “atlases” that capture the population mean and variance in anatomical or physiological metrics as a function of age or disease.
However, while the above benefits are not in question at first order, there are a number of conceptual and practical challenges that introduce second-order incompatibilities among experimental data. Stereotaxic mapping requires two basic components: (i) the specification of the 3D stereotaxic coordinate space, and (ii) a mapping function that transforms a 3D brain image from “native” space, i.e. the coordinate frame of the scanner at data acquisition, to that stereotaxic space. The first component is usually expressed by the choice of a representative 3D MR image that serves as target “template” or atlas. The native image is re-sampled from native to stereotaxic space under the mapping function that may have few or many degrees of freedom, depending upon the experimental design. The optimal choice of atlas template and mapping function depend upon considerations of age, gender, hemispheric asymmetry, anatomical correspondence, spatial normalization methodology and disease-specificity. Accounting, or not, for these various factors in defining stereotaxic space has created the specter of an ever-expanding set of atlases, customized for a particular experiment, that are mutually incompatible.
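As a simplified illustration of the mapping-function component described above, the sketch below applies a 4x4 affine transform to carry a native-space point into a stereotaxic template space; the matrix values are invented for illustration and do not correspond to any particular atlas registration.

```python
# Simplified illustration of an affine mapping function from native (scanner)
# space to a stereotaxic template space. The matrix values are invented for
# illustration and do not correspond to any real atlas registration.
import numpy as np

# 4x4 affine: rotation/scaling in the upper-left 3x3, translation in the last column.
native_to_template = np.array([
    [ 0.98,  0.02,  0.00,  -1.5],
    [-0.02,  0.97,  0.05,  12.0],
    [ 0.00, -0.05,  1.01,  -9.0],
    [ 0.00,  0.00,  0.00,   1.0],
])

point_native = np.array([34.0, -18.0, 22.0, 1.0])   # homogeneous (x, y, z, 1) in mm
point_template = native_to_template @ point_native
print("Stereotaxic coordinates (mm):", point_template[:3])
```

Nonlinear registrations add many more degrees of freedom, but the basic idea of resampling a native image into the template's coordinate frame is the same.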
These difficulties continue to plague the brain mapping field. This review article summarizes the evolution of stereotaxic space in terms of the basic principles and associated conceptual challenges, the creation of population atlases and the future trends that can be expected in atlas evolution.
Journal Article
Multivariate pattern analysis of fMRI: The early beginnings
2012
In 2001, we published a paper on the representation of faces and objects in ventral temporal cortex that introduced a new method for fMRI analysis, which subsequently came to be called multivariate pattern analysis (MVPA). MVPA now refers to a diverse set of methods that analyze neural responses as patterns of activity that reflect the varying brain states that a cortical field or system can produce. This paper recounts the circumstances and events that led to the original study and later developments and innovations that have greatly expanded this approach to fMRI data analysis, leading to its widespread application.
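A minimal sketch of the pattern-analysis idea described above, assuming synthetic voxel patterns and a linear support vector classifier evaluated with cross-validation; none of the numbers correspond to the original study.

```python
# Illustrative MVPA-style sketch: decode two experimental conditions from
# patterns of activity across voxels. Data are synthetic, not from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)        # two conditions (e.g., faces vs. objects)
signal = rng.normal(0, 0.5, n_voxels)            # condition-specific voxel pattern
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns[labels == 1] += signal                  # add the pattern to condition 1

scores = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=5)
print("Cross-validated decoding accuracy:", scores.mean())
```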
Journal Article