Catalogue Search | MBRL
Explore the vast range of titles available.
69,384 result(s) for "artefact"
Investigation Study of Ultrasound Practitioners’ Awareness about Artefacts of Hepatobiliary Imaging in Almadinah Almunawwarah
by Alshoabi, Sultan Abdulwadoud; Alsharif, Walaa M.; Alsaedi, Hassan Ibrahim
in Acoustic properties; Diagnostic imaging; Investigations
2022
Objectives: To investigate the knowledge and awareness of ultrasound practitioners concerning ultrasound artefacts in evaluating the hepatobiliary system. Methods: This electronic questionnaire-based comparative study involved the ultrasound practitioners who work in the radiology departments of Almadinah Almunawwarah governmental hospitals during the period from 1 November 2020 to 30 April 2021. Spearman’s rho correlation test was used to assess correlations between knowledge and job, academic qualification, and years of experience. A t-test and cross-tabulation were used to compare knowledge about artefacts between radiologists and radiologic technologists. Results: This study involved 94 participants: 22 (23.4%) radiologists and 72 (76.6%) radiologic technologists. The results show that 85%, 71%, 73%, 69%, 54% and 53% of the participants correctly identified the acoustic shadowing, acoustic enhancement, ring down, side lobe, reverberation and mirror artefacts as artefacts, respectively. However, only 68%, 53%, 19%, 19%, 18%, and 40% of the participants gave the correct final diagnosis for acoustic shadowing, acoustic enhancement, ring down, side lobe, reverberation, and mirror artefacts, respectively. Spearman’s rho correlation test showed a significant correlation between having more than three years of experience and knowledge of mirror artefacts (r=0.328, p=0.001), a significant correlation between being a radiologist and knowledge of mirror artefacts (r=0.367, p<0.001), and significant correlations between higher academic qualification and knowledge of mirror artefacts (r=0.336, p=0.001) and side lobe artefacts (r=0.237, p=0.008).
Conclusion: This questionnaire-based comparative study of knowledge about artefacts of hepatobiliary ultrasound imaging reveals a high level of ultrasound practitioners’ knowledge in differentiating artefacts from pathology, particularly in identifying hepatobiliary acoustic shadowing and acoustic enhancement artefacts. However, insufficient knowledge was noted in identifying mirror, side lobe, reverberation and ring down artefacts. A direct link was found between academic qualification, years of experience and practitioners’ knowledge. How to cite this: Alsaedi HI, Krsoom AM, Alshoabi SA, Alsharif WM. Investigation Study of Ultrasound Practitioners’ Awareness about Artefacts of Hepatobiliary Imaging in Almadinah Almunawwarah. Pak J Med Sci. 2022;38(6):1526-1533. doi: https://doi.org/10.12669/pjms.38.6.5084 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Journal Article
Theorising the Digital Artefact in Dark Sides Research
2024
Rapid advancements in the sophistication and diffusion of advanced digital technologies such as AI warrant pause to consider their unintended consequences or ‘dark sides’. While growing attention has been directed towards the ethical implications of disruptive technologies, discussions of the underlying materiality of digital artefacts are often missing. In this article, we call for IS researchers to better conceptualise how technical objects contribute to the emergence of negative outcomes for users, whether intentionally or unintentionally. Examples are provided of conceptual and empirical papers that have sought to open the ‘black box’ of technology to elucidate this issue. We propose sociomateriality as a theoretical lens to guide studies in this area and present a future research agenda that encourages novel methodological approaches such as design science to uncover the dark side of emerging digital artefacts.
Journal Article
Reference layer artefact subtraction (RLAS): A novel method of minimizing EEG artefacts during simultaneous fMRI
by Chowdhury, Muhammad E.H.; Mullinger, Karen J.; Bowtell, Richard
in Algorithms; Artefact correction; Artefact removal
2014
Large artefacts compromise EEG data quality during simultaneous fMRI. These artefact voltages pose heavy demands on the bandwidth and dynamic range of EEG amplifiers and mean that even small fractional variations in the artefact voltages give rise to significant residual artefacts after average artefact subtraction. Any intrinsic reduction in the magnitude of the artefacts would be highly advantageous, allowing data with a higher bandwidth to be acquired without amplifier saturation, as well as reducing the residual artefacts that can easily swamp signals from brain activity measured using current methods. Since these problems currently limit the utility of simultaneous EEG–fMRI, new approaches for reducing the magnitude and variability of the artefacts are required. One such approach is the use of an EEG cap that incorporates electrodes embedded in a reference layer that has similar conductivity to tissue and is electrically isolated from the scalp. With this arrangement, the artefact voltages produced on the reference layer leads by time-varying field gradients, cardiac pulsation and subject movement are similar to those induced in the scalp leads, but neuronal signals are not detected in the reference layer. Taking the difference of the voltages in the reference and scalp channels will therefore reduce the artefacts, without affecting sensitivity to neuronal signals. Here, we test this approach by using a simple experimental realisation of the reference layer to investigate the artefacts induced on the leads attached to the reference layer and scalp and to evaluate the degree of artefact attenuation that can be achieved via reference layer artefact subtraction (RLAS). Through a series of experiments on phantoms and human subjects, we show that RLAS significantly reduces the gradient (GA), pulse (PA) and motion (MA) artefacts, while allowing accurate recording of neuronal signals. 
The results indicate that RLAS generally outperforms average artefact subtraction (AAS) in removing the GA and PA when motion is present, while the combination of AAS and RLAS always produces higher artefact attenuation than AAS alone. Additionally, we demonstrate that RLAS greatly attenuates the unpredictable and highly variable MAs that are very hard to remove using post-processing methods.
•The efficacy of RLAS was compared with standard EEG artefact removal methods.
•RLAS significantly reduces the major EEG artefacts, but retains neuronal signals.
•RLAS significantly attenuates the unpredictable motion artefact from the EEG data.
•RLAS generally out-performs standard post-processing correction methods.
•RLAS and post-processing methods combined provide the highest data quality.
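The subtraction at the heart of RLAS — the reference-layer lead sees the artefact but not the neuronal signal, so differencing removes the common artefact — can be sketched in a few lines. The signal amplitudes, noise level, and artefact shape below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = np.arange(n) / 1000.0                       # 1 s of data sampled at 1 kHz

neuronal = 10e-6 * np.sin(2 * np.pi * 10 * t)   # 10 Hz "alpha-band" signal, ~10 uV
artefact = 5e-3 * np.sin(2 * np.pi * 50 * t)    # large induced artefact, ~5 mV

# Scalp lead records neuronal signal plus artefact; the isolated reference
# layer records (nearly) the same artefact but no neuronal signal.
scalp_channel = neuronal + artefact
reference_channel = artefact + rng.normal(0, 1e-6, n)

# Reference layer artefact subtraction: the difference cancels the artefact
corrected = scalp_channel - reference_channel

print(np.max(np.abs(corrected - neuronal)))     # residual on the order of microvolts
```

In this idealised toy the artefact cancels exactly up to the reference-channel noise; in practice the two leads pick up slightly different artefact voltages, which is why the paper quantifies the attenuation achievable experimentally.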
Journal Article
An evaluation of the efficacy, reliability, and sensitivity of motion correction strategies for resting-state functional MRI
2018
Estimates of functional connectivity derived from resting-state functional magnetic resonance imaging (rs-fMRI) are sensitive to artefacts caused by in-scanner head motion. This susceptibility has motivated the development of numerous denoising methods designed to mitigate motion-related artefacts. Here, we compare popular retrospective rs-fMRI denoising methods, such as regression of head motion parameters and mean white matter (WM) and cerebrospinal fluid (CSF) (with and without expansion terms), aCompCor, volume censoring (e.g., scrubbing and spike regression), global signal regression and ICA-AROMA, combined into 19 different pipelines. These pipelines were evaluated across five different quality control benchmarks in four independent datasets associated with varying levels of motion. Pipelines were benchmarked by examining the residual relationship between in-scanner movement and functional connectivity after denoising; the effect of distance on this residual relationship; whole-brain differences in functional connectivity between high- and low-motion healthy controls (HC); the temporal degrees of freedom lost during denoising; and the test-retest reliability of functional connectivity estimates. We also compared the sensitivity of each pipeline to clinical differences in functional connectivity in independent samples of people with schizophrenia and obsessive-compulsive disorder. 
Our results indicate that (1) simple linear regression of regional fMRI time series against head motion parameters and WM/CSF signals (with or without expansion terms) is not sufficient to remove head motion artefacts; (2) aCompCor pipelines may only be viable in low-motion data; (3) volume censoring performs well at minimising motion-related artefact but a major benefit of this approach derives from the exclusion of high-motion individuals; (4) while not as effective as volume censoring, ICA-AROMA performed well across our benchmarks for relatively low cost in terms of data loss; (5) the addition of global signal regression improved the performance of nearly all pipelines on most benchmarks, but exacerbated the distance-dependence of correlations between motion and functional connectivity; and (6) group comparisons in functional connectivity between healthy controls and schizophrenia patients are highly dependent on preprocessing strategy. We offer some recommendations for best practice and outline simple analyses to facilitate transparent reporting of the degree to which a given set of findings may be affected by motion-related artefact.
•We examine 19 denoising pipelines for resting-state fMRI across 4 datasets.
•No single method offers perfect motion control.
•Censoring and ICA-AROMA pipelines perform well across most benchmarks.
•Pipeline choice impacts case-control differences in functional connectivity.
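The simplest pipeline family above — linear regression of head-motion parameters and WM/CSF signals out of a regional time series — can be sketched as follows. The regressors and signals are synthetic stand-ins, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200                                    # number of fMRI volumes

# Hypothetical nuisance regressors: 6 head-motion parameters + WM and CSF means
motion = rng.normal(size=(T, 6))
wm_csf = rng.normal(size=(T, 2))
confounds = np.hstack([motion, wm_csf])    # (T, 8) confound matrix

# Simulated regional time series: true signal plus motion-related artefact
true_signal = np.sin(np.linspace(0, 8 * np.pi, T))
weights = rng.normal(size=8)
ts = true_signal + confounds @ weights

# Design matrix with an intercept; ordinary least squares fit of the confounds
X = np.column_stack([np.ones(T), confounds])
beta, *_ = np.linalg.lstsq(X, ts, rcond=None)

# Denoised series = residual after removing the fitted confound contribution
denoised = ts - X @ beta

print(np.corrcoef(denoised, true_signal)[0, 1])
```

Note the caveat the benchmarks above raise: the residual also loses whatever part of the true signal happens to correlate with the confounds, which is one reason this simple regression alone is not sufficient in practice.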
Journal Article
Why security and privacy research lies at the centre of the information systems (IS) artefact: proposing a bold research agenda
by Willison, Robert; Dinev, Tamara; Lowry, Paul Benjamin
in Big Data; Business and Management; Business Information Systems
2017
In this essay, we outline some important concerns in the hope of improving the effectiveness of security and privacy research. We discuss the need to re-examine our understanding of information technology and information system (IS) artefacts and to expand the range of the latter to include those artificial phenomena that are crucial to information security and privacy research. We then briefly discuss some prevalent limitations in theory, methodology, and contributions that generally weaken security/privacy studies and jeopardise their chances of publication in a top IS journal. More importantly, we suggest remedies for these weaknesses, identifying specific improvements that can be made and offering a couple of illustrations of such improvements. In particular, we address the notion of loose re-contextualisation, using deterrence theory research as an example. We also provide an illustration of how the focus on intentions may have resulted in an underuse of powerful theories in security and privacy research, because such theories explain more than just intentions. We then outline three promising opportunities for IS research that should be particularly compelling to security and privacy researchers: online platforms, the Internet of things, and big data. All of these carry innate information security and privacy risks and vulnerabilities that can be addressed only by researching each link of the systems chain, that is, technologies-policies-processes-people-society-economy-legislature. We conclude by suggesting several specific opportunities for new research in these areas.
Journal Article
Artefacts in software engineering: a fundamental positioning
by Mund, Jakob; Weyer, Thorsten; Böhm, Wolfgang
in Compilers; Computer Science; Design engineering
2019
Artefacts play a vital role in software and systems development processes. Other terms, such as documents, deliverables, or work products, are widely used in software development communities instead of the term artefact; in the following, we use the term ‘artefact’ to include all of them. Despite its relevance, the exact denotation of the term ‘artefact’ is still not clear, owing to a variety of different understandings of the term and to careless usage. This often leads to approaches being grounded in a fuzzy, unclear understanding of the essential concepts involved. In fact, no common terminology exists. Therefore, it is our goal that the term artefact be standardised so that researchers and practitioners have a common understanding for discussions and contributions. In this position paper, we provide a positioning and critical reflection upon the notion of artefacts in software engineering at different levels of perception and how these levels relate to each other. We further contribute a metamodel that describes an artefact independently of any underlying process model. This metamodel defines artefacts at three levels; abstraction and refinement relations between these levels allow correlating artefacts with each other and defining the notions of related, refined, and equivalent artefacts. Our contribution shall foster the long overdue and too often underestimated terminological discussion of what artefacts are, providing common ground with clearer concepts and principles for future software engineering contributions, such as the design of artefact-oriented development processes and tools.
Journal Article
Novel artefact removal algorithms for co-registered EEG/fMRI based on selective averaging and subtraction
by de Munck, Jan C.; Ossenblok, Pauly P.W.; van Wegen, Erwin
in Algorithms; Artefact correction; Artefacts
2013
Co-registered EEG and functional MRI (EEG/fMRI) is a potential clinical tool for planning invasive EEG in patients with epilepsy. In addition, the analysis of EEG/fMRI data provides fundamental insight into the precise physiological meaning of both fMRI and EEG data. Routine application of EEG/fMRI for localization of epileptic sources is hampered by large artefacts in the EEG, caused by switching of scanner gradients and heartbeat effects. Residuals of the ballistocardiogram (BCG) artefacts are shaped similarly to epileptic spikes, and may therefore cause false identification of spikes. In this study, new ideas and methods are presented to remove gradient artefacts and to reduce BCG artefacts of different shapes that mutually overlap in time.
Gradient artefacts can be removed efficiently by subtracting an average artefact template when the EEG sampling frequency and EEG low-pass filtering are sufficient in relation to MR gradient switching (Gonçalves et al., 2007). When this is not the case, the gradient artefacts repeat themselves at time intervals that depend on the remainder between the fMRI repetition time and the closest multiple of the EEG acquisition time. These repetitions are deterministic, but difficult to predict owing to the limited precision with which these timings are known. Therefore, we propose to estimate gradient artefact repetitions using a clustering algorithm combined with selective averaging. Clustering of the gradient artefacts yields cleaner EEG for data recorded on a 3T scanner when using a sampling frequency of 2048 Hz. It even gives clean EEG when the EEG is sampled at only 256 Hz.
Current BCG artefact-reduction algorithms based on average template subtraction have the intrinsic limitation that they fail to deal properly with artefacts that overlap in time. To eliminate this constraint, the precise timings of artefact overlaps were modelled and represented in a sparse matrix. Next, the artefacts were disentangled with a least squares procedure. The relevance of this approach is illustrated by determining the BCG artefacts in a data set consisting of 29 healthy subjects recorded in a 1.5T scanner and 15 patients with epilepsy recorded in a 3T scanner. Analysis of the relationship between artefact amplitude, duration and heartbeat interval (HBI) shows that in 22% (1.5T data) to 30% (3T data) of the cases BCG artefacts overlap. The BCG artefacts of the EEG/fMRI data recorded on the 1.5T scanner show a small negative correlation between HBI and BCG amplitude.
In conclusion, the proposed methodology provides a substantial improvement in the quality of the EEG signal without requiring excessive computing power or hardware beyond standard EEG-compatible equipment.
► With gradient artefact clustering, templates are created to correct EEG/fMRI.
► With gradient artefact clustering, clean EEG can be obtained at low sampling frequencies.
► BCG artefacts overlap in time in 20 to 30% of cases.
► Overlapping BCG artefacts are effectively removed from the data.
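The average-template subtraction that these selective-averaging methods extend can be sketched as a baseline. This toy assumes a perfectly repeating artefact epoch (the easy case); the paper's contribution addresses the harder cases of drifting timings and overlapping artefacts:

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, epoch_len = 50, 256

# Assumed repeating gradient-artefact shape, much larger than the EEG
template = np.hanning(epoch_len) * 100.0

eeg = rng.normal(0, 1.0, (n_epochs, epoch_len))   # underlying EEG, unit amplitude
recorded = eeg + template                          # every epoch carries the artefact

# Average artefact subtraction (AAS): estimate the template as the mean
# across epochs, then subtract it from each epoch.
template_est = recorded.mean(axis=0)
cleaned = recorded - template_est

print(np.abs(cleaned - eeg).mean())                # small residual from finite averaging
```

The residual here comes only from the EEG leaking into the averaged template; when the artefact timing drifts between epochs, the average blurs and the residual grows, which motivates clustering the epochs before averaging.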
Journal Article
An Unsupervised Method for Artefact Removal in EEG Signals
by Dormido, Raquel; Duro, Natividad; Mur, Angel
in Algorithms; artefact detection; artefact removal
2019
Objective: The activity of the brain can be recorded by means of an electroencephalogram (EEG). An EEG is a multichannel signal related to brain activity. However, EEG presents a wide variety of undesired artefacts. Removal of these artefacts is often done using blind source separation (BSS) methods, mainly those based on Independent Component Analysis (ICA). ICA-based methods are well accepted in the literature for filtering artefacts and have proved satisfactory in most scenarios of interest. Our goal is to develop a generic and unsupervised ICA-based algorithm for EEG artefact removal. Approach: The proposed algorithm makes use of a new unsupervised artefact detection, ICA and a statistical criterion to automatically select the artefact-related independent components (ICs), requiring no human intervention. The algorithm is evaluated using both simulated and real EEG data with artefacts (SEEG and AEEG). A comparison between the proposed unsupervised selection of artefact-related ICs and other, supervised selections is also presented. Main results: A new unsupervised ICA-based algorithm to filter artefacts, in which the ICs related to each artefact are automatically selected. It can be used in online applications, preserves most of the original information among the artefacts and removes different types of artefacts. Significance: ICA-based methods for filtering artefacts prevail in the literature. The work in this article is important insofar as it addresses the problem of automatic selection of ICs in ICA-based methods. The selection is unsupervised, avoiding the manual selection of ICs or the learning process involved in other methods. Our method is a generic algorithm that allows removal of EEG artefacts of various types and, unlike some ICA-based algorithms, it retains most of the original information among the artefacts. Within the algorithm, the artefact detection method implemented does not require human intervention either.
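The key step the abstract describes — selecting artefact-related ICs by a statistical criterion rather than by hand — can be illustrated with a minimal sketch. The kurtosis threshold and the hand-built components (standing in for actual ICA output) are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

n = 5000
t = np.arange(n) / 1000.0

# Hypothetical components as ICA might return them: two oscillatory
# "neural-like" sources and one sparse, high-amplitude "blink-like" artefact.
neural1 = np.sin(2 * np.pi * 10 * t)
neural2 = np.sin(2 * np.pi * 6 * t + 1.0)
blink = np.zeros(n)
blink[::500] = 50.0                               # rare, large spikes

S = np.vstack([neural1, neural2, blink])          # component matrix (3, n)

def excess_kurtosis(x):
    """Fourth standardised moment minus 3 (0 for a Gaussian)."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

# Unsupervised selection: flag components whose kurtosis exceeds a threshold
# (spiky artefacts are highly super-Gaussian; sinusoids are sub-Gaussian).
THRESH = 5.0                                      # assumed criterion for this toy
is_artefact = np.array([excess_kurtosis(c) > THRESH for c in S])

# Zero the flagged ICs; back-projection through the mixing matrix would
# then reconstruct artefact-free channel data.
S_clean = S.copy()
S_clean[is_artefact] = 0.0
print(is_artefact)
```

This captures the spirit of automatic IC selection — a statistic replaces the human inspector — while the paper's actual detection statistic and online machinery are more elaborate.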
Journal Article
From awe to experience, from artefacts to edufacts? On the post-war revolution in the objects of science communication
by Schirrmacher, Arne
in Museums
2024
The central message of this paper is that in the 20th century there was a revolution in the way science was presented. It became most visible in a significant change in the kinds of objects used to explain science to a wider public. For centuries the objects came from the academy, university, laboratory or industry. Either directly as artefacts or as modifications and simplifications that might work better in lectures and demonstrations, they inhabited the display cases in schools, universities, science collections and museums. It was not until the 1960s in North America that science museums – or rather science centres, as they soon became known – began to build their own exhibits in such a way as to present scientific phenomena as vividly as possible. How did this turn from artefacts to ‘edufacts’ come about and what implications did it have?
Journal Article
Comparative Model Efficiency Analysis Based on Dissimilar Algorithms for Image Learning and Correction as a Means of Fault-Finding
2025
The introduction of technology in different sectors to optimise efficiency is increasing rapidly. Given the opportunities that artificial intelligence presents to different sectors by performing tasks with fewer errors than humans or traditional models, the use of AI in artefact detection is being investigated. This research paper thus presents a comparative model efficiency analysis based on dissimilar algorithms, namely CNN, VGG16, Inception_V3, and ResNet_50. The model developed was based on images obtained from a Toshiba CT scanner for two datasets (88 images and 170 images), both comprising metal and ring artefacts. Furthermore, the results demonstrate higher data losses in transfer learning due to data recycling, suggesting that the model is prone to image feature losses when the model threshold is set at 75%. Additionally, two transfer-learning models were evaluated against “our model”. The results demonstrate that VGG16 performed better in terms of data accuracy in both testing and training, while the ResNet_50 algorithm performed worst in terms of the loss encountered compared to the other three algorithms.
Journal Article