Catalogue Search | MBRL
28,257 results for "Archives -- Information technology"
Museum and Archive on the Move
by Rühse, Viola; Grau, Oliver; Coones, Wendy
in Archives; Archives -- Information technology; Archives -- Technological innovations
2017
The digital revolution fundamentally changed how cultural heritage is created, documented, analyzed, and preserved. The book focuses on the impact of this transformation: how must museums and archives meet the challenges of digitally generated cultures, and how does the digital revolution influence traditional object collection, research, and education?
Students’ perceptions of the infopreneurship education in the Department of Records and Archives Management at the National University of Science and Technology
2016
Background: The infopreneurship education course forms part of the final-year Bachelor of Science Honours Degree in Records and Archives Management (BScRAM) at the National University of Science and Technology (NUST). The course appears unique, and somewhat out of place, in relation to the other records and archives courses, which focus specifically on the management of records and archives.
Objectives: The study examined students' perceptions of the relevance of the infopreneurship course within the BScRAM offered in the Department of Records and Archives Management at NUST, Zimbabwe. The aim of the study was to determine how students evaluated the course's relevance to the BScRAM.
Method: Both quantitative and qualitative data-collection methods were used. Using a census approach, data were collected through a focus-group interview and a self-administered questionnaire from a study population of 17 students in their final year of the BScRAM at NUST.
Results: Students found the infopreneurship education module highly relevant to their degree. Although the lecturer was helpful in providing resources, students felt they needed to visit infopreneurial businesses for familiarisation and looked forward to having guest lecturers from the infopreneurial world.
Conclusion: Although the BScRAM was not well known at high-school level, students found the infopreneurship education in this degree stimulating. Having completed the infopreneurship course, students were prepared to undertake infopreneurial businesses after graduating from the university.
Journal Article
Developing a Strategy for Managing Electronic Records—The Findings of the Indiana University Electronic Records Project
1998
From June 1995 through December 1997, staff from the Indiana University Archives and University Information Technology Services undertook and completed an electronic records project, partially funded by the National Historical Publications and Records Commission, designed to implement and test the "Functional Requirements for Evidence in Recordkeeping" model developed at the University of Pittsburgh. In this article, the findings of the IU project are reviewed in the context of several questions project personnel addressed during the project, including: 1) Does the Pitt model ask the right questions? 2) What set of activities is required to use and implement the model? 3) What are the costs associated with implementing the model? and 4) What types of skills are required to apply the methodology?
Journal Article
Big Data, Little Data, No Data
by Borgman, Christine L.
in Big data; Communication in learning and scholarship; Communication in learning and scholarship -- Technological innovations
2015, 2016, 2017
\"Big Data\" is on the covers ofScience, Nature, theEconomist, andWiredmagazines, on the front pages of theWall Street Journaland theNew York Times.But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines.Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six \"provocations\" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
The Ambivalent Ontology of Digital Artifacts
by Marton, Attila; Kallinikos, Jannis; Aaltonen, Aleksi
in Archives & records; Information systems; Issues and Opinions
2013
Digital artifacts are embedded in wider and constantly shifting ecosystems such that they become increasingly editable, interactive, reprogrammable, and distributable. This state of flux and constant transfiguration renders the value and utility of these artifacts contingent on shifting webs of functional relations with other artifacts across specific contexts and organizations. By the same token, it apportions control over the development and use of these artifacts among a range of dispersed stakeholders and makes their management a complex technical and social undertaking. These ideas are illustrated with reference to (1) the provenance and authenticity of digital documents within the overall context of archiving and social memory and (2) the content dynamics occasioned by the findability of content mediated by Internet search engines. We conclude that the steady change and transfiguration of digital artifacts signal a shift of epochal dimensions that calls for rethinking some of the inherited wisdom in IS research and practice.
Journal Article
An empirical survey of data augmentation for time series classification with neural networks
by Iwana, Brian Kenji; Uchida, Seiichi
in Analysis; Archives & records; Artificial neural networks
2021
In recent times, deep artificial neural networks have achieved many successes in pattern recognition. Part of this success can be attributed to the reliance on big data to increase generalization. However, in the field of time series recognition, many datasets are often very small. One method of addressing this problem is through the use of data augmentation. In this paper, we survey data augmentation techniques for time series and their application to time series classification with neural networks. We propose a taxonomy and outline four families of time series data augmentation: transformation-based methods, pattern mixing, generative models, and decomposition methods. Furthermore, we empirically evaluate 12 time series data augmentation methods on 128 time series classification datasets with six different types of neural networks. Through the results, we analyze the characteristics, advantages and disadvantages, and recommended uses of each data augmentation method. This survey aims to help in the selection of time series data augmentation for neural network applications.
Journal Article
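As a companion to the survey entry above: a minimal, illustrative Python/NumPy sketch of two methods from the transformation-based family the survey names, jittering and scaling. The function names and noise parameters here are our own illustration, not taken from the paper.

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Transformation-based augmentation: add Gaussian noise at each time step."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)

def scaling(x, sigma=0.1):
    """Multiply the whole series by a random per-channel factor."""
    factor = np.random.normal(loc=1.0, scale=sigma, size=(1, x.shape[1]))
    return x * factor

# Example: augment a univariate series of length 128 (shape: time x channels)
series = np.sin(np.linspace(0, 4 * np.pi, 128)).reshape(128, 1)
augmented = scaling(jitter(series))  # a plausible new training sample
```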
The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances
by Bostrom, Aaron; Large, James; Lines, Jason
in Academic Surveys and Tutorials; Algorithms; Archives
2017
In the last 5 years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train/test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only nine of these algorithms are significantly more accurate than both benchmarks and that one classifier, the collective of transformation ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more robust testing of new algorithms in the future.
Journal Article
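For readers unfamiliar with the resampling protocol this bake off describes (repeated train/test resamples per dataset), a rough Python sketch of that style of evaluation follows. It uses scikit-learn and a 1-nearest-neighbour classifier as a stand-in benchmark; the actual study used its own Java framework and fixed, published resample folds, so this is an illustration of the idea only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def resample_accuracy(X, y, clf, n_resamples=100, test_size=0.3, seed=0):
    """Mean accuracy over repeated stratified train/test resamples."""
    scores = []
    for i in range(n_resamples):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed + i)
        clf.fit(X_tr, y_tr)
        scores.append(accuracy_score(y_te, clf.predict(X_te)))
    return float(np.mean(scores))

# Toy demo with random data standing in for a UCR-style dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))      # 60 series, each of length 100
y = rng.integers(0, 2, size=60)     # binary class labels
print(resample_accuracy(X, y, KNeighborsClassifier(n_neighbors=1), n_resamples=10))
```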
Bake off redux: a review and experimental evaluation of recent time series classification algorithms
by Middlehurst, Matthew; Bagnall, Anthony; Schäfer, Patrick
in Algorithms; Archives & records; Classification
2024
In 2017, a research paper (Bagnall et al. Data Mining and Knowledge Discovery 31(3):606-660. 2017) compared 18 Time Series Classification (TSC) algorithms on 85 datasets from the University of California, Riverside (UCR) archive. This study, commonly referred to as a 'bake off', identified that only nine algorithms performed significantly better than the Dynamic Time Warping (DTW) and Rotation Forest benchmarks that were used. The study categorised each algorithm by the type of feature it extracts from time series data, forming a taxonomy of five main algorithm types. This categorisation of algorithms, alongside the provision of code and accessible results for reproducibility, has helped fuel an increase in the popularity of the TSC field. Over six years have passed since this bake off; the UCR archive has expanded to 112 datasets, and a large number of new algorithms have been proposed. We revisit the bake off, seeing how each of the proposed categories has advanced since the original publication, and evaluate the performance of newer algorithms against the previous best-of-category using an expanded UCR archive. We extend the taxonomy to include three new categories to reflect recent developments. Alongside the originally proposed distance, interval, shapelet, dictionary and hybrid based algorithms, we compare newer convolution and feature based algorithms as well as deep learning approaches. We introduce 30 classification datasets either recently donated to the archive or reformatted to the TSC format, and use these to further evaluate the best performing algorithm from each category. Overall, we find that two recently proposed algorithms, MultiROCKET+Hydra (Dempster et al. 2022) and HIVE-COTEv2 (Middlehurst et al. Mach Learn 110:3211-3243. 2021), perform significantly better than other approaches on both the current and new TSC problems.
Journal Article
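Both bake offs above measure new algorithms against a Dynamic Time Warping benchmark. For orientation, here is a minimal, unoptimised Python sketch of the DTW distance and the classic 1-NN DTW classifier; the papers' implementations additionally tune a warping-window constraint, which is omitted here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between two univariate series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def one_nn_dtw(X_train, y_train, x):
    """Predict the label of the training series nearest to x under DTW."""
    dists = [dtw_distance(t, x) for t in X_train]
    return y_train[int(np.argmin(dists))]

# Tiny demo: the query warps onto the 'bump' prototype, not the 'flat' one
X_train = [np.array([0., 1., 2., 1., 0.]), np.array([2., 2., 2., 2., 2.])]
y_train = ["bump", "flat"]
print(one_nn_dtw(X_train, y_train, np.array([0., 1., 1.5, 1., 0.])))  # -> bump
```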
Towards practical, high-capacity, low-maintenance information storage in synthesized DNA
2013
An efficient and scalable strategy with robust error correction is reported for encoding a record amount of information (including images, text and audio files) in DNA strands; a ‘DNA archive’ has been synthesized, shipped from the USA to Germany, sequenced and the information read.
Long-term DNA archives make sense
This multidisciplinary study in synthetic biology both proposes and demonstrates a system for the DNA-based storage of digital information. Digital information is being produced at an ever-growing rate, requiring an increasing commitment to ongoing maintenance of digital media in the archives. Surprisingly, this provides a niche for DNA, which can serve as a dense and stable information-storage medium. Nick Goldman et al. report an efficient and scalable strategy with robust error correction for encoding a record amount of information (including images, text and audio files) in DNA strands. After synthesizing a 'DNA archive' and shipping it from California to Germany, the DNA was sequenced and the information read. At the current rate of DNA synthesis cost reduction, DNA-based information storage is expected to become cost effective within a decade for archives likely to be accessed only rarely, after about 50 years.
Digital production, transmission and storage have revolutionized how we access and use information but have also made archiving an increasingly complex task that requires active, continuing maintenance of digital media. This challenge has focused some interest on DNA as an attractive target for information storage [1] because of its capacity for high-density information encoding, longevity under easily achieved conditions [2,3,4] and proven track record as an information bearer. Previous DNA-based information storage approaches have encoded only trivial amounts of information [5,6,7] or were not amenable to scaling-up [8], and used no robust error-correction and lacked examination of their cost-efficiency for large-scale information archival [9]. Here we describe a scalable method that can reliably store more information than has been handled before. We encoded computer files totalling 739 kilobytes of hard-disk storage and with an estimated Shannon information [10] of 5.2 × 10⁶ bits into a DNA code, synthesized this DNA, sequenced it and reconstructed the original files with 100% accuracy. Theoretical analysis indicates that our DNA-based storage scheme could be scaled far beyond current global information volumes and offers a realistic technology for large-scale, long-term and infrequently accessed digital archiving. In fact, current trends in technological advances are reducing DNA synthesis costs at a pace that should make our scheme cost-effective for sub-50-year archiving within a decade.
Journal Article
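As a sketch of the encoding idea in the Goldman et al. abstract above: the published scheme converts bytes to base-3 digits (trits) and then writes each trit as one of the three nucleotides that differ from the previously written base, so no base ever repeats (long homopolymer runs are hard to synthesize and sequence accurately). The Python below illustrates only that trit-to-base rotation; the paper's Huffman coding, fourfold fragment overlap, indexing and error correction are omitted, and the helper names are our own.

```python
BASES = "ACGT"

def trits_to_dna(trits, prev="A"):
    """Encode each trit (0-2) as one of the three bases differing from the
    previous base, which guarantees the strand has no repeated bases."""
    out = []
    for t in trits:
        choices = [b for b in BASES if b != prev]  # always exactly 3 options
        prev = choices[t]
        out.append(prev)
    return "".join(out)

def dna_to_trits(dna, prev="A"):
    """Invert the mapping by recovering which of the three choices was taken."""
    trits = []
    for b in dna:
        choices = [c for c in BASES if c != prev]
        trits.append(choices.index(b))
        prev = b
    return trits

message = [0, 2, 1, 1, 0, 2, 2, 0]      # some base-3 payload
strand = trits_to_dna(message)          # no two adjacent bases are equal
assert dna_to_trits(strand) == message  # round-trips losslessly
print(strand)
```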