Search Results
111,398 result(s) for "INFORMATION RETRIEVAL"
Big Data, Little Data, No Data
"Big Data" is on the covers of Science, Nature, the Economist, and Wired magazines, and on the front pages of the Wall Street Journal and the New York Times. But despite the media hyperbole, as Christine Borgman points out in this examination of data and scholarly research, having the right data is usually better than having more data; little data can be just as valuable as big data. In many cases, there are no data -- because relevant data don't exist, cannot be found, or are not available. Moreover, data sharing is difficult, incentives to do so are minimal, and data practices vary widely across disciplines. Borgman, an often-cited authority on scholarly communication, argues that data have no value or meaning in isolation; they exist within a knowledge infrastructure -- an ecology of people, practices, technologies, institutions, material objects, and relationships. After laying out the premises of her investigation -- six "provocations" meant to inspire discussion about the uses of data in scholarship -- Borgman offers case studies of data practices in the sciences, the social sciences, and the humanities, and then considers the implications of her findings for scholarly practice and research policy. To manage and exploit data over the long term, Borgman argues, requires massive investment in knowledge infrastructures; at stake is the future of scholarship.
Search foundations : toward a science of technology-mediated experience
"This book contributes to discussions within Information Retrieval and Science (IR&S) by improving our conceptual understanding of the relationship between humans and technology" -- Provided by publisher.
Improving the translation of search strategies using the Polyglot Search Translator: a randomized controlled trial
Background: Searching for studies to include in a systematic review (SR) is a time- and labor-intensive process, with searches of multiple databases recommended. To reduce the time spent translating search strings across databases, a tool called the Polyglot Search Translator (PST) was developed. The authors evaluated whether using the PST as a search translation aid reduces the time required to translate search strings without increasing errors.
Methods: In a randomized trial, twenty participants were randomly allocated ten database search strings and then randomly assigned to translate five with the assistance of the PST (PST-A method) and five without the assistance of the PST (manual method). We compared the time taken to translate search strings, the number of errors made, and how close the number of references retrieved by a translated search was to the number retrieved by a reference standard translation.
Results: Sixteen participants performed 174 translations using the PST-A method and 192 translations using the manual method. The mean time taken to translate a search string with the PST-A method was 31 minutes versus 45 minutes with the manual method (mean difference: 14 minutes). The mean number of errors made per translation with the PST-A method was 8.6 versus 14.6 with the manual method. Large variation in the number of references retrieved makes results for this outcome unreliable, although the number of references retrieved by the PST-A method was closer to the reference standard translation than that of the manual method.
Conclusion: When used to assist with translating search strings across databases, the PST can increase the speed of translation without increasing errors. Errors in search translations can still be a problem, and search specialists should be aware of this.
When we are no more : how digital memory is shaping our future
Examines how humanity records and passes on its culture to future generations, from the libraries of antiquity to the excess of information available in the digital age, and how ephemeral digital storage methods present a challenge for passing on current cultural memory to the future.
Informatica
Informatica -- the updated edition of Alex Wright's previously published Glut -- continues the journey through the history of the information age to show how information systems emerge. Today's "information explosion" may seem like a modern phenomenon, but we are not the first generation -- or even the first species -- to wrestle with the problem of information overload. Long before the advent of computers, human beings were collecting, storing, and organizing information: from Ice Age taxonomies to Sumerian archives, Greek libraries to Christian monasteries. Wright weaves a narrative that connects such seemingly far-flung topics as insect colonies, Stone Age jewelry, medieval monasteries, Renaissance encyclopedias, early computer networks, and the World Wide Web. He suggests that the future of the information age may lie deep in our cultural past. We stand at a precipice, struggling to cope with a tsunami of data. Wright provides some much-needed historical perspective. We can understand the predicament of information overload not just as the result of technological change but as the latest chapter in an ancient story that we are only beginning to understand.
Discovering knowledge in data : an introduction to data mining
DANIEL T. LAROSE received his PhD in statistics from the University of Connecticut. An associate professor of statistics at Central Connecticut State University, he developed and directs Data Mining@CCSU, the world's first online master of science program in data mining. He has also worked as a data mining consultant for Connecticut-area companies. He is currently working on the next two books of his three-volume series on data mining, Data Mining Methods and Models and Data Mining the Web: Uncovering Patterns in Web Content, scheduled for publication in 2005 and 2006, respectively.
Data science for dummies
The book begins by explaining large data sets and data formats, including sample Python code for manipulating data. It explains how to work with relational databases and unstructured data, including NoSQL, then moves into preparing data for analysis by cleaning it up, or "munging" it. From there it covers data visualization techniques and types of data sets. Part II is all about supervised machine learning, including regression techniques and model validation techniques. Part III explains unsupervised machine learning, including clustering and recommendation engines. Part IV overviews big data processing, including MapReduce, Hadoop, Dremel, Storm, and Spark. The book finishes with real-world applications of data science and how data science fits into organizations.
Optical cryptosystems
Advanced technologies such as artificial intelligence, big data, cloud computing, and the Internet of Things have changed the digital landscape, providing many new and exciting opportunities. However, they also provide ever-shifting gateways for information theft or misuse. Staying ahead requires the development of innovative and responsive security measures, and recent advances in optical technology have positioned it as a promising alternative to digital cryptography. Optical Cryptosystems introduces the subject of optical cryptography and provides up-to-date coverage of optical security schemes. Optical principles, approaches, and algorithms are discussed, as well as applications, including image/data encryption-decryption, watermarking, image/data hiding, and authentication verification. This book also includes MATLAB® codes, enabling students and research professionals to carry out exercises and develop newer methods of image/data security and authentication.