Catalogue Search | MBRL
Search Results
Explore the vast range of titles available.
143,693 result(s) for "information retrieval"
Big Data, Little Data, No Data
by Borgman, Christine L
in Big data; Communication in learning and scholarship; Communication in learning and scholarship -- Technological innovations
2015, 2016, 2017
"Big Data" is on the covers of Science, Nature, the Economist, and Wired magazines, on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines. Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six "provocations" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
Search foundations : toward a science of technology-mediated experience
"This book contributes to discussions within Information Retrieval and Science (IR&S) by improving our conceptual understanding of the relationship between humans and technology" -- Provided by publisher.
Optical cryptosystems
2020
Advanced technologies such as artificial intelligence, big data, cloud computing, and the Internet of Things have changed the digital landscape, providing many new and exciting opportunities. However, they also provide ever-shifting gateways for information theft or misuse. Staying ahead requires the development of innovative and responsive security measures, and recent advances in optical technology have positioned it as a promising alternative to digital cryptography. Optical Cryptosystems introduces the subject of optical cryptography and provides up-to-date coverage of optical security schemes. Optical principles, approaches, and algorithms are discussed as well as applications, including image/data encryption-decryption, watermarking, image/data hiding, and authentication verification. This book also includes MATLAB® codes, enabling students and research professionals to carry out exercises and develop newer methods of image/data security and authentication.
When we are no more : how digital memory is shaping our future
Examines how humanity records and passes on its culture to future generations, from the libraries of antiquity to the excess of information available in the digital age, and how ephemeral digital storage methods present a challenge for passing on current cultural memory to the future.
Informatica
2023
Informatica, the updated edition of Alex Wright's previously published Glut, continues the journey through the history of the information age to show how information systems emerge. Today's "information explosion" may seem like a modern phenomenon, but we are not the first generation, or even the first species, to wrestle with the problem of information overload. Long before the advent of computers, human beings were collecting, storing, and organizing information: from Ice Age taxonomies to Sumerian archives, Greek libraries to Christian monasteries.
Wright weaves a narrative that connects such seemingly far-flung topics as insect colonies, Stone Age jewelry, medieval monasteries, Renaissance encyclopedias, early computer networks, and the World Wide Web. He suggests that the future of the information age may lie deep in our cultural past.
We stand at a precipice, struggling to cope with a tsunami of data. Wright provides some much-needed historical perspective. We can understand the predicament of information overload not just as the result of technological change but as the latest chapter in an ancient story that we are only beginning to understand.
Discovering knowledge in data : an introduction to data mining
2005,2004
DANIEL T. LAROSE received his PhD in statistics from the University of Connecticut. An associate professor of statistics at Central Connecticut State University, he developed and directs Data Mining@CCSU, the world's first online master of science program in data mining. He has also worked as a data mining consultant for Connecticut-area companies. He is currently working on the next two books of his three-volume series on Data Mining: Data Mining Methods and Models and Data Mining the Web: Uncovering Patterns in Web Content, scheduled to publish respectively in 2005 and 2006.
Data science for dummies
Begins by explaining large data sets and data formats, including sample Python code for manipulating data. The book explains how to work with relational databases and unstructured data, including NoSQL. The book then moves into preparing data for analysis by cleaning it up or "munging" it. From there the book explains data visualization techniques and types of data sets. Part II of the book is all about supervised machine learning, including regression techniques and model validation techniques. Part III explains unsupervised machine learning, including clustering and recommendation engines. Part IV overviews big data processing, including MapReduce, Hadoop, Dremel, Storm, and Spark. The book finishes up with real world applications of data science and how data science fits into organizations.
Machine learning reduced workload with minimal risk of missing studies: development and evaluation of a randomized controlled trial classifier for Cochrane Reviews
by Marshall, Iain J.; Elliott, Julian; Mavergames, Chris
in Algorithms; Automation; Bibliographic data bases
2021
This study developed, calibrated, and evaluated a machine learning classifier designed to reduce study identification workload in Cochrane for producing systematic reviews.
A machine learning classifier for retrieving randomized controlled trials (RCTs) was developed (the “Cochrane RCT Classifier”), with the algorithm trained using a data set of title–abstract records from Embase, manually labeled by the Cochrane Crowd. The classifier was then calibrated using a further data set of similar records manually labeled by the Clinical Hedges team, aiming for 99% recall. Finally, the recall of the calibrated classifier was evaluated using records of RCTs included in Cochrane Reviews that had abstracts of sufficient length to allow machine classification.
The Cochrane RCT Classifier was trained using 280,620 records (20,454 of which reported RCTs). A classification threshold was set using 49,025 calibration records (1,587 of which reported RCTs), and our bootstrap validation found the classifier had recall of 0.99 (95% confidence interval 0.98–0.99) and precision of 0.08 (95% confidence interval 0.06–0.12) in this data set. The final, calibrated RCT classifier correctly retrieved 43,783 (99.5%) of 44,007 RCTs included in Cochrane Reviews but missed 224 (0.5%). Older records were more likely to be missed than those more recently published.
The Cochrane RCT Classifier can reduce manual study identification workload for Cochrane Reviews, with a very low and acceptable risk of missing eligible RCTs. This classifier now forms part of the Evidence Pipeline, an integrated workflow deployed within Cochrane to help improve the efficiency of the study identification processes that support systematic review production.
• Systematic review processes need to become more efficient.
• Machine learning is sufficiently mature for real-world use.
• A machine learning classifier was built using data from Cochrane Crowd.
• It was calibrated to achieve very high recall.
• It is now live and in use in Cochrane review production systems.
Journal Article
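The calibration step described in the abstract above, setting a classification threshold so that recall on a labeled calibration set reaches at least 99%, can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, scores, and labels are hypothetical, not Cochrane's actual code or data.

```python
def calibrate_threshold(scores, labels, target_recall=0.99):
    """Return the highest score threshold whose recall on the
    calibration set is at least `target_recall`.

    scores: classifier probabilities, one per record
    labels: 1 if the record reports an RCT, else 0
    """
    n_pos = sum(labels)
    # Walk candidate thresholds from high score to low; lowering
    # the threshold only ever increases recall.
    found = 0
    for score, label in sorted(zip(scores, labels), reverse=True):
        found += label
        if found / n_pos >= target_recall:
            return score
    return 0.0

# Toy calibration set: 4 RCTs and 4 non-RCTs.
scores = [0.95, 0.90, 0.80, 0.40, 0.70, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    1,    0,    0,    0,    0]
threshold = calibrate_threshold(scores, labels, target_recall=0.99)
# With only 4 positives, >=0.99 recall means retrieving all of them,
# so the threshold drops to the lowest positive score (0.40).
```

Tuning for near-total recall in this way typically sacrifices precision, which matches the reported figures (recall 0.99, precision 0.08): the threshold admits many non-RCT records so that almost no eligible trial is missed.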