Catalogue Search | MBRL
146,958 result(s) for "INFORMATION STORAGE"
Improving the translation of search strategies using the Polyglot Search Translator: a randomized controlled trial
by Clark, Justin Michael; Carter, Matthew; Jones, Mark
in Analysis; Automation; Bibliographic data bases
2020
Background: Searching for studies to include in a systematic review (SR) is a time- and labor-intensive process, with searches of multiple databases recommended. To reduce the time spent translating search strings across databases, a tool called the Polyglot Search Translator (PST) was developed. The authors evaluated whether using the PST as a search translation aid reduces the time required to translate search strings without increasing errors.

Methods: In a randomized trial, twenty participants were randomly allocated ten database search strings and then randomly assigned to translate five with the assistance of the PST (PST-A method) and five without the assistance of the PST (manual method). We compared the time taken to translate search strings, the number of errors made, and how close the number of references retrieved by a translated search was to the number retrieved by a reference standard translation.

Results: Sixteen participants performed 174 translations using the PST-A method and 192 translations using the manual method. The mean time taken to translate a search string was 31 minutes with the PST-A method versus 45 minutes with the manual method (mean difference: 14 minutes). The mean number of errors made per translation was 8.6 with the PST-A method versus 14.6 with the manual method. Large variation in the number of references retrieved makes results for this outcome unreliable, although the number of references retrieved by the PST-A method was closer to the reference standard translation than the manual method.

Conclusion: When used to assist with translating search strings across databases, the PST can increase the speed of translation without increasing errors. Errors in search translations can still be a problem, and search specialists should be aware of this.
Journal Article
Make scientific data FAIR
2019
All disciplines should follow the geosciences and demand best practice for publishing and sharing data, argue Shelley Stall and colleagues.
A researcher repairs a broken GPS module at a research station in Greenland.
Journal Article
Molecular digital data storage using DNA
by Strauss, Karin; Nivala, Jeff; Ceze, Luis
in Biotechnology; Deoxyribonucleic acid; Information storage
2019
Molecular data storage is an attractive alternative for dense and durable information storage, which is sorely needed to deal with the growing gap between information production and the ability to store data. DNA is a clear example of effective archival data storage in molecular form. In this Review, we provide an overview of the process, the state of the art in this area and challenges for mainstream adoption. We also survey the field of in vivo molecular memory systems that record and store information within the DNA of living cells, which, together with in vitro DNA data storage, lie at the growing intersection of computer systems and biotechnology.

Throughout evolution, DNA has been the primary medium of biological information storage. In this article, Ceze, Nivala and Strauss discuss how DNA can be adopted as a storage medium for custom data, as a potential future complement to current data storage media such as computer hard disks, optical disks and tape. They discuss strategies for coding, decoding and error correction and give examples of implementation both in vitro and in vivo.
Journal Article
Terminator-free template-independent enzymatic DNA synthesis for digital information storage
2019
DNA is an emerging medium for digital data and its adoption can be accelerated by synthesis processes specialized for storage applications. Here, we describe a de novo enzymatic synthesis strategy designed for data storage which harnesses the template-independent polymerase terminal deoxynucleotidyl transferase (TdT) in kinetically controlled conditions. Information is stored in transitions between non-identical nucleotides of DNA strands. To produce strands representing user-defined content, nucleotide substrates are added iteratively, yielding short homopolymeric extensions whose lengths are controlled by apyrase-mediated substrate degradation. With this scheme, we synthesize DNA strands carrying 144 bits, including addressing, and demonstrate retrieval with streaming nanopore sequencing. We further devise a digital codec to reduce requirements for synthesis accuracy and sequencing coverage, and experimentally show robust data retrieval from imperfectly synthesized strands. This work provides distributive enzymatic synthesis and information-theoretic approaches to advance digital information storage in DNA.
Adoption of DNA as a data storage medium could be accelerated with specialized synthesis processes and codecs. The authors describe TdT-mediated DNA synthesis in which data is stored in transitions between non-identical nucleotides and the use of synchronization markers to provide error tolerance.
Journal Article
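The abstract above describes storing information in transitions between non-identical nucleotides, so that the uncontrolled homopolymer run lengths produced by enzymatic extension carry no data. A minimal sketch of that idea follows; this is illustrative only, not the paper's actual codec, and the ternary mapping, function names, and run-collapsing step are assumptions:

```python
# Illustrative: store ternary digits (trits) in transitions between
# non-identical nucleotides; homopolymer run lengths are ignored on decode.
from itertools import groupby

BASES = "ACGT"

def encode(trits, start="A"):
    """Map each trit (0-2) to the next base, chosen from the three
    bases that differ from the current one, in alphabetical order."""
    strand = [start]
    for t in trits:
        choices = [b for b in BASES if b != strand[-1]]
        strand.append(choices[t])
    return "".join(strand)

def decode(strand):
    """Collapse homopolymer runs, then invert the transition mapping."""
    collapsed = [b for b, _ in groupby(strand)]
    trits = []
    for prev, cur in zip(collapsed, collapsed[1:]):
        choices = [b for b in BASES if b != prev]
        trits.append(choices.index(cur))
    return trits

# The round trip survives arbitrary run lengths in the synthesized strand.
assert decode(encode([2, 0, 1, 1, 2])) == [2, 0, 1, 1, 2]
assert decode("AATTTAGGCT") == decode("ATAGCT")
```

Each transition can carry one of three values (about 1.58 bits), which is one reason the authors pair the chemistry with a codec that tolerates synthesis errors.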
Geographies of information
Almost fifty years after the spatial experiments with the architecture of communication in the 1960s, and twenty years after the "death of distance" prophecies of the 1990s, we are witnessing the emergence of a new spatial turn in information and communication technologies (ICTs). These digital technologies are fostering innovative means for communication, participation, sociability, and commerce that are different from the real space of homes, city squares, and streets. Yet at the same time, various material and infrastructural imprints required by contemporary ICTs such as data centers, fiber-optic cables, and IT office parks have contributed to a great buildup in physical space. A hybrid condition has emerged from the interaction of virtual spatiality and the physical imprints of ICTs, resulting in forms, places, and territories in which the dynamism and fluidity of contemporary networks of information become solidified. 'New geographies, 7' presents historical perspectives, theoretical framings, and new design paradigms that contribute to a more grounded understanding of the kind of hybrid spaces that ICTs engender, the scales at which they operate, and the processes by which this production of space is manifested in both advanced and emerging economies.
Big Data, Little Data, No Data
by Borgman, Christine L
in Big data; Communication in learning and scholarship; Communication in learning and scholarship -- Technological innovations
2015, 2016, 2017
"Big Data" is on the covers of Science, Nature, the Economist, and Wired magazines, on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines. Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six "provocations" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.